I'm Dmitriy Login, and I'm now a research fellow in the School of Computing at NUS, the National University of Singapore. Previously I was doing my PhD there, and my focus was on the performance of computer systems. Today, however, I'm going to talk about something completely different: not my research, but my hobbies and what I do in my spare time. So, let's see if... okay, so this is a video.

The first part of my presentation is about the lazy switch, and the motivation behind it is that I am very lazy. My wife is also very lazy. For example, if we are watching TV or we are ready to sleep, it's very hard to go to the switch and turn off the light. So we wanted something else: maybe use our phone, which is closer to us, or just say something to a personal assistant to turn off the light. Another motivation, of course, is that it's a fancy thing to do; I can tell you that all your friends will be impressed when you have this at home. And for people working in computer science or electronics it's fun: tinkering with IoT devices, configuring software, and also doing some coding.

This is the architecture of my smart home. Maybe it looks a bit complicated, but I'll try to explain. At the edge there are smart switches that are able to control lights or other electric and electronic devices and also to integrate sensors. In my case I designed a device called the lazy switch, and I'm going to give you some details later. Of course you need to interact with all of this, and for that you can use a personal assistant such as Amazon Alexa running on an Echo or Echo Dot, Google Assistant, or the open-source Mycroft AI with their Mark devices. And you have to glue them together: you need a controller to connect all these devices. Of course, many of our homes now have cameras around, so in the second part of my presentation I'm going to talk about a project on how to use these cameras to make the house smarter. But let's take them one by one and start with the lazy switch.

Basically, I developed the lazy switch because I was not very happy with the commercially available smart plugs out there. In my opinion they have a big issue: one smart plug can control only a single device; they only offer you a single plug. This is a waste of a Wi-Fi or Bluetooth connection and also a waste of space. And of course they are not really open source; I mean, you could hack them, but it's better if you can build your own. So that's why I designed the lazy switch, which is a device that can control multiple plugs and switches and can also integrate sensors, as you can see in the diagram there. At the heart of it is a microcontroller. It has a wireless connection, like Bluetooth or Wi-Fi, it can control multiple switches, which are relay components, and it can also integrate multiple sensors. Actually, a single lazy switch could cover the entire house, if the house is designed in such a way that all the wires come together into a single control panel. In my case, however, I was not so lucky, so I had to hack all the plugs. You can see here that I put a lazy switch behind this plug and also added some switches for the lights. So in this one here I have the microcontroller inside, I can control five relays, there is a Bluetooth connection to the server, and there is also a temperature sensor.
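To give an idea of what the firmware on such a microcontroller could look like, here is a rough sketch; it is not my actual lazy switch code. It assumes an ESP32 running MicroPython with relays wired to GPIOs 16 to 19, and it uses a plain TCP command protocol over Wi-Fi instead of the Bluetooth link I actually use; the pin numbers, the network credentials, and the command format are all made up for illustration.

```python
import network
import socket
from machine import Pin

# Hypothetical wiring: four relays on GPIO 16-19, driven active-high.
RELAY_PINS = [16, 17, 18, 19]
relays = [Pin(p, Pin.OUT, value=0) for p in RELAY_PINS]

def connect_wifi(ssid, password):
    # Join the home Wi-Fi and block until we get an address.
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect(ssid, password)
    while not wlan.isconnected():
        pass
    return wlan.ifconfig()[0]

def serve(port=8266):
    # Accept one connection at a time; expect commands like "2 on" or "0 off".
    srv = socket.socket()
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        try:
            parts = conn.recv(64).decode().strip().split()
            if len(parts) == 2 and parts[0].isdigit():
                idx, state = int(parts[0]), parts[1]
                if 0 <= idx < len(relays):
                    relays[idx].value(1 if state == "on" else 0)
                    conn.send(b"ok\n")
                    continue
            conn.send(b"error\n")
        finally:
            conn.close()

print("IP:", connect_wifi("my-ssid", "my-password"))  # placeholder credentials
serve()
```

With something like this, the server side only needs a small TCP client (or even netcat) to send "2 on" and toggle the third relay.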
Over there I have another three switches that control the lights, and they are linked to this one here. I have to warn you, though: don't try this at home, because you are handling mains voltage, which can be fatal. Maybe first just try it with a lower voltage, like 5 volts or 12 volts, something like that. Okay, so this is the lazy switch.

Now let's go to the server system. For this I just used the open-source project called Home Assistant. It's available on GitHub and it's coded mostly in Python. Because I have this lazy switch, I had to integrate it with Home Assistant. It's actually quite easy; you can do it in a day or so. The beauty of Home Assistant is that it already comes with support for many, many other devices, like Edimax smart plugs, TP-Link smart plugs, and so on, and there is quite an active community around it. It also comes with a web interface from which you can control your devices; maybe at the end I will show you my interface back home.

Of course, this server system has to run on some platform, and it can run on a Raspberry Pi, for example, which is low cost and uses very little power, so you can add it to your house and it won't use much energy. In my case I didn't use a Raspberry Pi; I'm using an NVIDIA Jetson board, which is a more powerful development kit that also has a GPU, and I will show you later why I use that.

Next is the personal assistant part, to control this smart home using voice. There are a few choices out there: proprietary software and hardware like the Google Home Mini with Google Assistant or the Amazon Echo Dot, but there is also the Mycroft AI open-source project. However, I have to admit that I tried it on my laptop but I didn't really configure it to connect with my smart house; this is a work in progress. I think it's a very interesting project, though. It's also written in Python and available on GitHub; one part of it, the text-to-speech part, is written in C/C++, I think.

Okay, now for the second part, where I told you I'm going to talk about how to integrate the cameras around the house and make the house more intelligent. It started one morning when I didn't know where my wife was, and I was too lazy to get out of bed, so I was wondering: is there a way to ask the personal assistant where she is, or who is at home? In my tests I'm going to use Jon Snow. I tried it with my own name on the Google Home Mini, but apparently it cannot recognize my name, so I had to change it to something else. Actually, there is a typo there: it should be spelled without an "h". And to link it with the previous presentation, where you saw something about Dialogflow: to implement this, I used Dialogflow on Google Cloud to interpret the sentences.

Another use case is sensitive commands, for which you need to be authenticated. What we have seen with the Google Home Mini and the Amazon Echo Dot is that they can take commands from different users without really knowing from whom the command came. Actually, I think lately they have implemented some way to distinguish voices, but in my case I was thinking: what if I can do it with a camera? So the smart assistant has a camera, it takes your photo, recognizes your face, and then it can allow or deny your command.
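As a rough sketch of that idea, and not the code I actually run, the check could look something like this. It assumes you already have face embeddings from a FaceNet-style model; get_embedding() stands in for whatever model you use, the enrolled .npy files are hypothetical, and the Euclidean-distance threshold of 0.9 is just an illustrative value.

```python
import numpy as np

# Hypothetical enrolled users: name -> embedding saved during the labelling phase.
enrolled = {
    "alice": np.load("faces/alice.npy"),
    "bob": np.load("faces/bob.npy"),
}

SENSITIVE_COMMANDS = {"unlock front door", "disarm alarm"}
THRESHOLD = 0.9  # illustrative distance cutoff, to be tuned per model

def identify(embedding):
    """Return the closest enrolled user, or None if no one is close enough."""
    best, best_dist = None, float("inf")
    for name, ref in enrolled.items():
        dist = np.linalg.norm(embedding - ref)
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist < THRESHOLD else None

def allow(command, camera_frame, get_embedding):
    """Allow a sensitive command only if the camera sees an enrolled face."""
    if command not in SENSITIVE_COMMANDS:
        return True
    return identify(get_embedding(camera_frame)) is not None
```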
The third part is apparently not connected to the others, but in order to implement the previous two parts you need to tell the system: this is my face, this is another family member's face. So you need to tell the system which faces belong to which users. And currently we all have lots of photos, right? So what if we had a smart photo gallery where you are able to label the faces and also link it with the authentication and user-tracking part? Of course, there are Google Photos and other solutions that use the cloud, but with the latest Facebook scandal, right, we don't want to put our photos on the cloud anymore; maybe we want to keep them inside our house, at the edge. There is a new paradigm called edge computing, where everything is done at the edge of the network, in this case in the house, rather than going to the cloud. So this is a third use case, where the software is able to recognize faces and objects, organize your photos, and also connect to the other two use cases.

So I started this project called Faceful, a smart photo gallery at the edge, which has two main modules, the smart gallery and the smart face integration, and which is able to run entirely at the edge, inside your house. I put it on GitHub and it's written mostly in Python. Its architecture is modular: I split it into two parts, a web server and an image processing server, connected through sockets. These two components can run on the same machine or on two different machines. In my case I tested it with two Jetson boards, but you can also run it on a single board. When I split it in two, I had in mind something like this: what if the web server is running on a small Raspberry Pi, which is not powerful enough to do the image processing, while the image processing part runs on a more powerful platform, like a Jetson or a PC with a GPU? The image processing part is based on the FaceNet project and on TensorFlow models.

And here you can see Faceful in action with Jon Snow, right? It can recognize his face, and here you can see box number one: you can enter a label and update the database with that label. Then, for example, if you have ten photos of yourself, you can label them with your name. After that you can run a training phase where the system is trained to recognize your face, and then it can recognize your face in all the other photos. And then you can search your photos by face. And the second... okay, this is just the last slide. Okay, and yeah, that's it, thank you.

No, actually I tested it with only two or three cameras. Yeah, one in the kitchen and two in the living room, I think.