So, my name is Shivam. I work at the National University of Singapore as a research assistant, and we do a lot of Internet of Things research. What I'm going to describe here is one of the frameworks we've built as part of our research, how we've been using it to make Internet of Things research easier, and how it makes IoT more usable by ordinary people — I mean people who don't really have the technical knowledge for it. I'd like to start with a video of a demonstration we made. What you'll see in the video is that I go into a room and tap on an NFC card, and it automatically sets up the environment for me: all my IoT devices, like the light bulb, my Apple TV, and so on. And then something interesting happens that you can see in the video, so I'll show that first. Now, as I tap on that card, my room gets set up for me. There's an Apple TV behind me that you can't really see right now. Then there are these lights, which are also controlled and available over Wi-Fi. Then there's a drone that works over Bluetooth. And there's this small little ball thingy called a Sphero, and that also talks to the phone at the same time. Now, the idea here is that I want to be able to use mutually incompatible devices to control one another. Ordinarily you would never be able to control a drone with a Sphero ball, or control an Apple TV with a Sphero ball. So here the Sphero ball is used as the remote controller, and everything I tap the phone against will then be controlled by the Sphero. Now I tap the Apple TV, and the ball gets hooked up to the Apple TV. If I tilt it like this, it plays the video, and at the same time you can see that the lights are also connected through the same setup: every time the video plays, the lights dim, and every time the video pauses, the lights turn back on automatically. Now, you might have missed it, but I use the same Sphero for everything.
One orientation of the Sphero moves the video forward, and the other orientation moves it backward. Now I want to be able to control the drone with the same ball. So what I do is tap the drone — this is an NFC tag, like the MRT cards if you've visited Singapore for some time. I pick up the ball, and once I tap it, it turns on the drone. And then the drone follows the orientation of my ball, driven by the same IMU. So this is one of the demos, from one of the papers we published at NUS. I wanted to begin with the video because it makes it easier to understand what I'm going to talk about. So yeah, my name is Shivam. I work at the Felicitous Computing Institute at the National University of Singapore. And actually — I stole these slides from my professor, so that's his name on them; I just didn't know how to get rid of it, so it's there. Now, everybody is trying to build context-aware systems. You want to personalize everybody's experience as they move through all sorts of environments. But the problem is that every new device that comes onto the market uses a different protocol. It uses some other type of channel you need to connect over. It comes with another proprietary API or SDK that you have to write software against to be able to communicate with it. So what we did was create a framework called Ambient Dynamix that helps you wrap all these proprietary SDKs into dynamic plugins. Now you can write plugins for all sorts of things. There's an indoor positioning plugin. There are plugins for external sensors deployed anywhere. Then, as you saw in the video, we've written a drone plugin, so you can send commands to your drone through it. There's the Sphero plugin that gives you access to data from the Sphero. And there's an Apple TV plugin that lets you talk to Apple TVs and other UPnP devices.
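To make the "wrap proprietary SDKs into plugins" idea concrete, here is a minimal sketch in Python (we do have a Python SDK, but all the names below — `DevicePlugin`, `SpheroPlugin`, `handle_request` — are illustrative, not the real framework API): each plugin hides a vendor SDK behind the same high-level request interface.

```python
from abc import ABC, abstractmethod

class DevicePlugin(ABC):
    """Hypothetical sketch of a plugin interface wrapping a proprietary SDK."""

    @abstractmethod
    def context_type(self) -> str:
        """Identifier that applications use to request this plugin."""

    @abstractmethod
    def handle_request(self, command: str, **params):
        """Translate a high-level command into vendor-SDK calls."""

class SpheroPlugin(DevicePlugin):
    """Wraps the (imaginary here) Sphero SDK behind generic commands."""

    def context_type(self) -> str:
        return "org.example.sphero"

    def handle_request(self, command: str, **params):
        # A real plugin would call the vendor SDK over Bluetooth here.
        if command == "set_color":
            return f"sphero color -> {params['rgb']}"
        raise ValueError(f"unsupported command: {command}")

result = SpheroPlugin().handle_request("set_color", rgb="#ff0000")
```

The point is that an application only ever sees `context_type` and `handle_request`; the Bluetooth and vendor-specific details stay inside the plugin.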
So what Dynamix does is give you a very easy way to talk to these devices without knowing exactly how to talk to them. You don't have to get into the specifics of how the Apple TV works or how the Sphero works, since we provide high-level commands you can send requests with, and you get data back from the devices as well. One of the most interesting parts of Dynamix is that the plugins we write are dynamically deployed. Say I want to access the Sphero: I just make a request, from a website or an Android app, saying I want access to the Sphero. Dynamix then searches for a Sphero plugin in a repository of plugins and deploys it dynamically. So anybody who wants to use a device doesn't need the plugin pre-deployed or bundled into the app beforehand. One of the good things about this is that anybody can write a plugin. As long as I have the URL of that plugin and I know its ID, I can use it: if you develop something, you just put the plugin out there, and when I want to use it, I make the request and it gets deployed automatically. I don't have to update my software all over again, or push an update to the Android application I have running. So this is roughly the technical picture. As a plugin developer or an application developer, you only need to work with the top part of it: the Android application and the web browser side. We have a couple of SDKs. If you want to develop a Python application that uses Dynamix capabilities, we have the Python SDK; if you want a website that uses Dynamix capabilities, you can use the Dynamix JavaScript SDK; and similarly, we have SDKs for writing Android applications. Now, the cool thing about this is that all these different platforms use the same plugin.
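The dynamic-deployment flow I just described can be sketched like this (again an illustrative Python mock, not the real framework: `PluginRepository`, `publish`, and `request` are names I'm making up for this example) — plugins are published to a repository and only resolved and instantiated the first time an application asks for them:

```python
class PluginRepository:
    """Sketch: apps request a plugin by context type; the framework
    resolves it from a repository and deploys it on demand."""

    def __init__(self):
        self._available = {}   # context type -> plugin factory (the repository)
        self._deployed = {}    # context type -> live plugin instance

    def publish(self, context_type, factory):
        """A plugin author puts their plugin 'out there'."""
        self._available[context_type] = factory

    def request(self, context_type):
        """Deploy lazily: nothing needs to be bundled with the app beforehand."""
        if context_type not in self._deployed:
            factory = self._available.get(context_type)
            if factory is None:
                raise LookupError(f"no plugin for {context_type}")
            self._deployed[context_type] = factory()
        return self._deployed[context_type]

repo = PluginRepository()
repo.publish("org.example.sphero", lambda: "SpheroPlugin instance")
plugin = repo.request("org.example.sphero")  # deployed on first request
```

A second `request` for the same type reuses the already-deployed instance, which is why the app never needs an update when a new device type appears — only the repository does.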
So even from a website, I can use the Dynamix SDK and ask for the same things: I want to use the Sphero, or control the light bulb, or play music on some Bluetooth speaker. All of this can be done through the Dynamix application. Similarly, the JavaScript SDK lets you connect to these plugins. One of the things we developed for a demo was a web page where you give it access to your Facebook albums, it retrieves your images, and then, directly from the web page, you can start showing those images on your Apple TV. You don't have to install any software or figure out how to send images from Facebook to the Apple TV, or to any other UPnP device. So, these are some of the features of our framework. Dynamix runs as a lightweight background service; it's not at all heavy. It uses the OSGi mechanism — OSGi is a way of developing plugins that can be deployed at runtime. It works for native apps and browser-based web apps. All the data passed between the application and the framework is plain old Java objects, so you don't need any special dependencies to access or parse it. We also have the plugin sandbox. When you write a plugin, you define what permissions it has. Then, whenever some web app or Android app requests support for a particular device, Dynamix will pop up and ask you: hey, this particular app is trying to access your media player — do you want to allow it? You have a couple of options: always allow it, allow it just once, or block it. The plugin sandbox maintains all of that. So how can you actually develop these plugins?
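The allow-always / allow-once / block logic of that permission prompt can be sketched in a few lines (a hypothetical mock in Python — `PluginSandbox` and its method names are my own, not the framework's): a remembered "always" or "block" decision short-circuits the prompt, while "once" grants access without being remembered.

```python
ALLOW_ONCE, ALLOW_ALWAYS, BLOCK = "once", "always", "block"

class PluginSandbox:
    """Sketch of the sandbox prompt: per (app, context type) decisions."""

    def __init__(self, prompt):
        self.prompt = prompt     # callable that asks the user and returns a choice
        self.decisions = {}      # (app, context_type) -> remembered decision

    def check(self, app, context_type) -> bool:
        key = (app, context_type)
        remembered = self.decisions.get(key)
        if remembered == ALLOW_ALWAYS:
            return True
        if remembered == BLOCK:
            return False
        # No remembered decision: pop up and ask the user.
        answer = self.prompt(app, context_type)
        if answer in (ALLOW_ALWAYS, BLOCK):
            self.decisions[key] = answer   # "once" is deliberately not stored
        return answer in (ALLOW_ONCE, ALLOW_ALWAYS)

sandbox = PluginSandbox(prompt=lambda app, ct: ALLOW_ALWAYS)
granted = sandbox.check("photo-app", "media_player")  # prompts once, remembers
```

After that first check, the same app is never prompted again for the media player, which matches the "always allow that guy" option in the popup.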
So: use our SDKs to develop plugins for whatever proprietary devices, software, and hardware you have, and just put them out there — it's all open source. And requesting support for a particular context type is as easy as a single call like addContextSupport. For example, we had a barcode scanner plugin: if you want to scan barcodes, you just make this call in your application or web page and you get access to it. Once the user allows it, you can make requests to scan barcodes; the camera pops up, and the result comes back to you in the listener you provide when adding the context support. Here's another plugin we've written that uses the light sensor on your phone. Web pages generally can't access the light sensor, right? Or, for that matter, any other hardware available on your device. So you just add support for lux values, and once the user approves it, you start getting context events back from the Dynamix application to whatever application you've written, and you can use them however you want. This is an image of the Facebook demo we developed. There's a share-screen button; once you turn it on, it searches for all the UPnP and Apple media devices around you, and you get a list of all the available devices. You select the ones you want to send the image to, and then, when you click on a photo, it goes straight to the media renderer you chose. These are some of the plugins we have: step detection, Apple media devices, barcode scanning, power state, sleep state, speech recognition — all sorts of stuff. We've also developed activity recognition plugins. So right now we have hundreds of plugins.
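The request-then-listen pattern I just walked through — add context support, let the user approve, then receive context events in your listener — looks roughly like this (a Python mock of the idea; `DynamixClient`, `add_context_support`, and the `"light.lux"` type are illustrative names, not the real SDK):

```python
class DynamixClient:
    """Sketch of the app-side flow: register a listener for a context
    type, then receive events as the plugin produces them."""

    def __init__(self):
        self.listeners = {}   # context type -> list of listener callbacks

    def add_context_support(self, context_type, listener):
        """Analogous to the addContextSupport call in the talk."""
        self.listeners.setdefault(context_type, []).append(listener)

    def fire_event(self, context_type, event):
        """In the real framework the plugin fires this after user approval."""
        for listener in self.listeners.get(context_type, []):
            listener(event)

readings = []
client = DynamixClient()
client.add_context_support("light.lux", readings.append)
client.fire_event("light.lux", {"lux": 320})  # e.g. from the phone's light sensor
```

The barcode scanner works the same way: the scan result arrives through the listener you passed in, rather than through a return value, because the user approval and the hardware access both happen asynchronously.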
We try to keep up as new IoT devices come onto the market — like this gesture-sensing armband. You can see the armband right here; it recognizes the gestures you make. Every time you do this, it tells you that gesture was performed, and we have a plugin that recognizes that. So you can start to interconnect these things and control mutually incompatible devices seamlessly. It's amazing. There's one more video to finish off; I'll show you. What we're going to see in this video is that we started off with tap-to-interact: you could tap on things to make them talk to each other. But that's a little difficult, because if you have fifteen devices, you're not going to start tapping everything in the room. So what we did was develop a framework called Ambient Flow, which is built on Ambient Dynamix and its plugins, and it helps you design your smart space — we use it as a smart-space designer. We think that in the future, maybe ten or fifteen years down the line, there will be people whose job is just designing your ambient space for you. So this is a demo of that; you can see the video. We start with a new graph. I want to create a situation where I use one device to control something else. So I add the Sphero plugin on this side. Now, this Sphero plugin has a couple of inputs and a couple of outputs, right? In this case it has the collision sensor — you can tap on it — it has the IMU values, like the accelerometer values and the pitch and roll, and it also has a light inside it, so you can detect that and change its color as well. Now what I want to do is control this light bulb from the Sphero: every time I change the orientation of the Sphero, I want the color of the light bulb to change as well.
So what I do is take the light bulb from that side, add it here, and drag the output of the Sphero into the light bulb. Now I save this graph. The idea here is that this is a generic graph: anybody who has a Sphero and a light bulb can use it. You just load the graph, a QR code pops up, you scan it with your phone, and it gets automatically deployed in your environment. So now I scan this QR code, which loads the graph on my phone, and I can save it so that whenever I want to use it again, I can just bring it back up. So now it's setting up — and as the orientation of the Sphero changes, the color of the bulb changes. Next, I want the light bulb's color to be fed back as the color of the Sphero itself. So I take the output of the bulb, drag it back to my Sphero, and deploy it again from the phone. Now, it might not really look like it because the light bulb is super bright, but the Sphero actually has the same color; it's being fed back into the Sphero. So now here's a pre-made graph we had. This is what we hope it will be like for ambient designers: the environment is pre-configured by somebody else, and you just come in and start using it. In this case we have a motion-sensor plugin, the Sphero plugin, the ambient media plugin, and the light plugin. And what you see here are translators. What these translators do is translate the output of one device into the input of another — because ideally you would not know how to translate IMU data into a light color, right? So we have translators that can do that for you. Now I play something on the phone and you'll see the light dim because the movie is playing.
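The graph I just wired up — Sphero output, through a translator, into the light bulb's input — can be sketched like this (an illustrative Python model; `FlowGraph`, `imu_to_color`, and the mapping itself are assumptions for the example, not the actual Ambient Flow translators):

```python
def imu_to_color(imu):
    """Hypothetical translator: map pitch/roll angles (degrees) to an RGB triple."""
    r = int(abs(imu["pitch"]) % 180 / 180 * 255)
    g = int(abs(imu["roll"]) % 180 / 180 * 255)
    return (r, g, 128)

class FlowGraph:
    """Sketch of an Ambient Flow-style graph: each edge carries one
    device's output through a translator into another device's input."""

    def __init__(self):
        self.edges = []   # (source name, translator fn, sink callback)

    def connect(self, source, translator, sink):
        """The drag-the-output-into-the-input step from the designer."""
        self.edges.append((source, translator, sink))

    def push(self, source, value):
        """A device emitted a value: run every matching edge."""
        for src, translate, sink in self.edges:
            if src == source:
                sink(translate(value))

bulb_colors = []           # stands in for the light bulb's "set color" input
graph = FlowGraph()
graph.connect("sphero.imu", imu_to_color, bulb_colors.append)
graph.push("sphero.imu", {"pitch": 90.0, "roll": 45.0})
```

Feeding the bulb's color back into the Sphero, as in the demo, would just be a second `connect` edge in the opposite direction — the graph doesn't care that the two devices can't natively talk to each other.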
The motion sensor is super, super sensitive, so it kind of meddles with what we're doing at the end. Similarly, you can pick up the bulb and use it as a remote control for the Apple TV. So, this was our latest publication; it was called Ambient Flow. Check it out on our website. Alright, thank you. This was my presentation. Thank you.