Good afternoon, everyone. I'm here to tell you about a personal project that I built over the past few months: essentially my own smart glasses. Most of this talk will be from the point of view of a software developer, because that's in general what I am, but what I also want to show you is that I was able to build this using simple hardware that's available right off the shelf.

First, a very quick introduction to what smart glasses are. The most well-known ones are Google Glass: glasses with a small display right here that can show you a heads-up display, which might look a little bit like this. A very quick show of hands: who here has ever tried smart glasses, something like Google Glass? Just a couple of hands.

For me, the first time I got into smart glasses was when I was working for a small startup in Holland called GemVision. They offer a solution where remote workers can show what they're seeing to people at headquarters, and from headquarters people can communicate with the remote worker and give them instructions on what to do. And something is crashing, of course, I'm sorry; let me very quickly see if I can get this to work again. Yep, there we go.

So essentially, when you're wearing smart glasses it looks a little bit like this: you can just look around, and you have a small heads-up display here in the upper right or below. The first thing you may think is that this will get annoying really quickly and that the display will be in the way all the time, but the fact of the matter is that it sits just a little bit outside your peripheral vision. So for me it's very easy to look past the screen, and whenever I want to look up any information I just, in this case, look down and get the information I need.

So what we were working on at GemVision... well, there it goes again. All right, I'm just going to try to do this as quickly as possible. At GemVision, when people were wearing the glasses, their camera feed was streamed to another web application using WebRTC. Then, via microphones, someone at headquarters could give them instructions on what to do, or they could draw in the feed itself, for example to tell them, "look at this television over here."

The smart glasses they were using were the Vuzix M300, which runs on Android. Personally, I didn't do any work on the smart glasses themselves; I only worked on the web application. But I really wanted to give this a try, because in my opinion smart glasses are going to be a thing of the future: not in this form, but once they look a little more like normal glasses, people will want to use them. And because these glasses ran on Android, I was hoping I could build something using Cordova. Unfortunately, I wasn't allowed to: the smart glasses they had were reserved for their clients, so they just didn't want me prototyping or playing around with them in case they broke. That would cost the company a lot of money, and for a startup that's the last thing you want.
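As an aside to the GemVision setup mentioned a moment ago: streaming a camera feed to another web application with WebRTC boils down to standard browser APIs. This is a minimal sketch of my own, not GemVision's actual code; the signaling step (exchanging the offer, answer, and ICE candidates between peers) is application-specific and only stubbed out here.

```js
// Minimal WebRTC publisher sketch (illustrative only, not GemVision's code).
const pc = new RTCPeerConnection();

navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then((stream) => {
    // Send the glasses' camera and microphone to the remote peer.
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
    return pc.createOffer();
  })
  .then((offer) => pc.setLocalDescription(offer))
  .then(() => {
    // sendToSignalingServer(pc.localDescription); // assumed helper, app-specific
  });
```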
I'll admit this kind of bummed me out, because I really wanted to play around with this. But if I wanted my own smart glasses, I would have to pay at least 1,000 euros, which for me is just a little too much for something I only want to prototype with. Then last year I suddenly started wondering: could I build something like this myself? So I essentially just decided, you know what, screw it, I'm going to do it myself.

For this I needed three particular pieces of hardware. The first ingredient in the recipe is a Vufine. The Vufine is the heads-up display that I'm wearing right now. It's essentially nothing more than a second display: you can hook it up to anything, like a computer, a camera, or a mobile phone, and it will act as a second screen, either mirroring the output or, if I connect it to my laptop, acting as an extra display. The initial approach I wanted to take was to build some web applications and stream them from my phone to the glasses, but that wasn't really ideal, because the phone's screen would have to be on at all times, and I wouldn't be able to touch it if I put it away.

So I realized I needed another component, and for that I turned to a Raspberry Pi. I assume most people here have played around with a Raspberry Pi or heard of it? Show of hands. Yeah, expected as much. The Raspberry Pi is just a really cheap computer running Raspbian, a Linux environment. At the moment I have one attached to my glasses right here on my left. It's incredibly small, almost the size of a credit card these days, really powerful, and it has Bluetooth, Wi-Fi, and so on.

Finally, to get the whole thing up and running, I needed nothing more than a simple power bank, because the Raspberry Pi runs off a micro-USB connector at very low power; hooked up to a power bank, I've used it for at least an hour or so.

When you put all of those together, I was able to get my own smart glasses running. But then I ran into one particularly big problem: the Vufine doesn't have any input. It's cheap precisely because it doesn't add much functionality. Most other smart glasses have a touchpad or something you can use to interact with the apps, but the Vufine doesn't.

So for that, I started building a personal platform, which I've called Rubai. Now, what is Rubai? First off, admittedly a bit of an odd name for a project, but it's something I came up with in the middle of the night. What it's actually meant to be is a platform for wearables, for glasses like these, with the focus on prototyping, because I really don't think I'll be able to build something production-ready by myself. And given that I'm a web developer, it's also something I really wanted to build entirely with web technology, just HTML and JavaScript.

The first part of the Rubai platform is the display app, which is running right here. It's nothing more than a single-page application written in Vue.js, with a couple of apps built into it, like a camera and Google Maps.
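To give an idea of what "apps as pages" can look like, here is a minimal sketch of a Vue.js (version 2 era) display app where each app is simply a route. The page components and paths here are assumptions for illustration, not the actual Rubai source, and it assumes the `#app` element in index.html contains a `<router-view>`.

```js
import Vue from 'vue';
import VueRouter from 'vue-router';
// Hypothetical app pages; each "app" on the glasses is just a component.
import Home from './pages/Home.vue';
import Camera from './pages/Camera.vue';
import Maps from './pages/Maps.vue';

Vue.use(VueRouter);

const router = new VueRouter({
  routes: [
    { path: '/', component: Home },         // the app launcher
    { path: '/camera', component: Camera }, // "opening" the camera app = navigating here
    { path: '/maps', component: Maps },
  ],
});

// index.html is assumed to contain <div id="app"><router-view/></div>.
new Vue({ router }).$mount('#app');
```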
When the Raspberry Pi boots, it just opens Chrome in kiosk mode, full screen, so all you can see on the display is the display app. Every app I've built so far is essentially just a page in Vue.js: if I want to open the camera app, it navigates to the camera page and the Vue router handles the rest.

The next problem to tackle was input: how should I actually communicate with the Raspberry Pi and go to the camera page or the Google Maps page? I tried a couple of things, for example via the Gamepad API. (Did it fall down or something? Yeah, yeah. Testing one, two, three. All right, cool.) What I ended up creating is a simple remote web application that I can open on my phone, which essentially just works as a remote control. It keeps sending messages back and forth using Socket.IO, an easy library for sending real-time messages.

The best way to showcase how this works is a little demo of the camera. The Vufine doesn't come with a camera itself, so I bought a USB endoscope, a small camera meant for getting into hard-to-reach places, and attached it to the Vufine using nothing more than simple rubber bands. Just to show how it works: here in the upper right is the Rubai display as you would normally see it, and here is the mobile app. I can press Camera, and it opens an application that reads from the USB camera, which would normally be the endoscope but is now using my laptop's webcam. When I press Take Photo here, it sends a message to the display saying, "hey, I want to take a picture"; the display then creates a Base64 string, which it sends back to the mobile, and I can press Save Image As to store it locally on my device.

Some other apps I've built with it: the first, and for me actually the most important one, is a simple browser, so I can open any page here. Most of the time I use this to serve the presentation notes for my talks. On a side note, this has also been sort of my holy grail when it comes to smart glasses: I often give talks, and I just want to be able to see my notes projected right here. I'm using it at the moment, and this is something I've been playing around with for about two to three years now; since the first prototype, well, there has been improvement.

Another app I've written is YouTube, so I can watch any kind of YouTube video here, which is excellent when you're walking the dog. And when I want to get from point A to point B, I've built Google Maps, using the Google Maps API and its image API to display the map here.

This one may be a little freaky, but I'm personally really terrible with faces and names, so using face-api.js I'm able to do some sort of facial recognition. Whenever I look at someone it's unable to recognize, the person gets added to the list here; I can then tap on them and enter their name, so the next time I walk up to someone I know that, oh, it's Anton, or hey, it's Irene. For me that could be a great help. For the record, I haven't tried this in an actual environment yet.
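Going back to the camera demo for a moment: the take-photo round trip can be sketched in a few lines of browser JavaScript on the display side. The event names (`take-photo`, `photo`) and the Socket.IO server relaying messages between phone and glasses are assumptions for illustration, not the actual Rubai protocol.

```js
// Display side (Chrome in kiosk mode on the Raspberry Pi) — illustrative sketch.
const socket = io('http://localhost:3000'); // assumed Socket.IO server relaying remote <-> display
const video = document.querySelector('video');

// Show the USB camera feed (the endoscope, or a webcam during development).
navigator.mediaDevices.getUserMedia({ video: true })
  .then((stream) => { video.srcObject = stream; return video.play(); });

// When the remote asks for a photo, grab the current frame and send it back as Base64.
socket.on('take-photo', () => {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  socket.emit('photo', canvas.toDataURL('image/png')); // Base64-encoded data URL
});

// Remote side (the phone), for comparison:
//   socket.emit('take-photo');
//   socket.on('photo', (dataUrl) => { /* offer "Save image as..." */ });
```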
What I've shown you so far is mostly something I hacked together using Angular and Vue, and personally I just wasn't really happy with the whole concept: every time I wanted to add a new app, I had to add a page to the remote control, which was written in Angular, and then something in Vue, and how they communicated back and forth was a little bit of a mess. So I started rewriting the whole thing, with the focus on creating a low-barrier approach to wearables. By that I mean offering a platform that gives developers an idea of how they can build simple apps for smart glasses. It's something that could technically be made ready for production, but I would say the biggest issue is that the form factor would also need to improve, and I have absolutely no experience with that.

The biggest thing is that I want the whole project to be web-based and framework-agnostic, essentially meaning that developers should be able to build their very own apps using whatever tools they personally want: Vue or Angular, jQuery or vanilla JavaScript, TypeScript, whatever you want. I also wanted to streamline how the remote works via the mobile. To demonstrate this, I'd like to walk you through a simple example of how I can now build a Rubai app on the new platform. The example is just a simple to-do list: here are the items I still need to do, and I want to be able to add new items or remove them.

To start building this kind of app: first, there's an apps directory containing all the different apps available on the platform, and the name you give to a directory is used as an internal identifier. You add a configuration file containing an icon and a name. The icon is just a reference to an icon ID in the Ionicons library, which is similar to Font Awesome; this is mostly a temporary solution, and I really want developers to be able to use their own icons eventually. Whatever name you give the app is how it will show up on the apps page.

Next, you add a simple index.html file, which is what will be served and shown on the display when the user selects the app. In this particular case I've created a very simple page with a red background, so the next time the user clicks on the to-do list, this is what they'll see right here: the to-do app has been opened.

Next, we want to add some functionality so we can actually communicate via the remote control. For that I've written a simple helper class to which you can pass a list of form inputs. In this example, I first just want to give people the opportunity to add an item, so I'm giving it two inputs: a text input and a button. When this array is sent from the display to the remote control, the remote control shows something like this. For the button I've also indicated which function on the display app should be called when the user clicks it on their mobile. So if the user has filled something in and presses Add Item, the app here gets an event for it, and you can add the item to a list. This ensures that the to-do items appear right here on the viewfinder.
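Putting that together, the to-do app's display side might look roughly like this. The config fields, event names, and the `render` helper are my assumptions about the platform's API based on the description above, not its actual source; removing items (covered next) works the same way, by appending one button per existing item before re-sending the array.

```js
// apps/todo/config.json (hypothetical field names):
//   { "name": "To-do list", "icon": "list" }

// apps/todo/index.html script — illustrative sketch of the input helper.
const socket = io(); // assumed Socket.IO connection relayed to the remote control
let todos = [];

function render(items) {
  // Hypothetical rendering: dump the list into the page shown on the viewfinder.
  document.querySelector('#list').textContent = items.join('\n');
}

function updateRemote() {
  // Describe the remote-control UI as plain data; the remote renders it.
  socket.emit('set-inputs', [
    { type: 'text', id: 'new-item', placeholder: 'New to-do item' },
    { type: 'button', label: 'Add item', action: 'add-item' },
  ]);
}

// When the user presses "Add item" on their phone, the display gets an event
// carrying the form values.
socket.on('add-item', (fields) => {
  todos.push(fields['new-item']);
  render(todos);
  updateRemote();
});

updateRemote();
```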
Now, the next thing you want is to let the user remove to-do items once they've finished them, and that's practically the same as what I did before with adding items. (Did it fall again? Sorry about that.) So: when there are items in the to-do list, I add a divider along with a title saying Remove Items, and then for each item I add a button labeled with that item's text, the idea being that pressing the button removes the item from the to-do list. In this particular example I'm now sending the following information to the remote control: first, again, the two inputs that let you add something, and after that one button for each item currently in the to-do list. On the remote control you then get something that looks like this, and the user can simply press a button to say, hey, I've finished this particular task. Removing an item is again nothing more than updating the array and sending the new, updated list to the remote control.

The thing I'm happiest about here is that I was able to put this particular example together in about 20 minutes, and that has always been the main priority of this project: I just want to be able to create useful tools to try out my own ideas with smart glasses. That's essentially what this whole project has been about. Personally, it's also been a really fun way to step outside my usual flow of HTML and JavaScript and try out some new stuff, which is one of the things I'm really happy about. And what I'm mostly amazed about is that I was able to do this with simple hardware that's available right off the shelf. Well, to round off: thank you for giving me this platform to speak. I also want to do a very quick shout-out to my parents, who are currently watching live: hi mom, hi dad. And with that, thank you very much.

What is the total cost of the project? How much did you spend on all the hardware?

So the question is how much the hardware costs, right? The Vufine is about 100 euros, and the Raspberry Pi is around 20 or 30 euros or so.

Pardon?

There's a Google board for the Raspberry Pi which you can use for speech recognition; you could integrate it and make something Google Glass-like from scratch.

What's it called? How can I look it up? Thanks.

I think you should use some kind of 3D-printed casing, because the project is cool, but it looks a little bit like a mess.

It does; no, I completely agree.

So, did you try that? Because you can order a custom-made one.

So the question is whether I've actually tried anything with a 3D-printed case. Like I said, at the moment it's been in the back of my mind. I have some friends who have printers, and I do want to sit down with them at some point and see if we can create something better-looking than this. But then you also need to get the battery and everything integrated, so I think that's going to be the challenge there.

Just a quick remark: if Trevor is also watching, there are people from across the world who are interested in the project and who do have 3D-printing and design skills, to indeed combine the case and everything.

Thanks for your presentation. Just a quick one: when do you think the hardware is going to be able to catch up with the software?
So the question is when the hardware will shrink enough that it can just be built into someone's normal glasses, available at an optician's. I'm afraid I'm not much of a hardware person, and I'm really not following the field closely enough at the moment to give a proper answer. I do know that, at the very least, there are glasses out there called the Vuzix Blade, which look remarkably like normal glasses and project the text onto the lenses themselves, but those are also really expensive at the moment.

A quick remark while the next person thinks of their question: for this you have, for example, the Focals by North, about 600 dollars more or less, and they look roughly like your glasses; the arms are a bit thicker, but not by much. They do have a small display. I think the interaction is really limited, but you can buy them, just not at a normal optician: they have two shops in North America, one in Toronto and one in Brooklyn, but they are going to ship version 2 anywhere in the world.

Will they be on Ruben's platform? Still to be discussed. But this is the kind of trade-off: basically, the geekier you look, the more powerful the features, and the more normal they look, the fewer the features, like depth sensing.

And since you're still thinking: I just saw this, which is as clownish as it gets, a bottle of beer that I nearly stepped on. While I keep the mic: please, when you leave the room, if you see cans and everything, please do pick them up. This is a volunteer-based event; if we don't pick it up, then we won't be welcome back at the ULB next time. So please pick up whatever you see; it doesn't have to be yours. This is not my banana, and I'm going to throw it somewhere.

So, next question? Yeah, thank you.

Have you experimented with using a small camera as an eye tracker, for buttons, or voice?

Not in particular. Are there any cheap variants?

Well, I don't have anything specific in mind, but as you have a small outward-facing camera attached, perhaps there could be a smaller one facing your own iris, doing some kind of tracking: look up to the right for OK, to the left for something else.

I like the idea of using an eye tracker to control the user interface, but the endoscope I'm using at the moment I bought off a very sketchy website, and its quality is so low that I don't think it would even be able to recognize your iris. But I do really like the idea; something to keep in mind.

So we have time for a couple more questions.
An idea: Dr. Stephen Hawking used to be able to type with his cheek muscles; just a comment that you could also use muscular features for some control. It's really interesting: there is also a project at MIT, at the Media Lab, using, I forget what those muscles are called, where you just whisper, you don't actually say anything, and in terms of richness of interaction it's quite good. In terms of input it works pretty well, and I think it's open source.

A couple more still? Suggestions, questions, doubts?

My feedback is... I don't mind feedback.

Have you thought of using computer vision to recognize hand gestures, using the small camera that's already attached to the device?

Pardon?

Using hand gestures as a form of control: you already have the camera on your device, so you could try to do something with OpenCV to recognize certain hand gestures and use those to control the device.

Right, so the idea is essentially that the camera I can put in here would track my hand, and then I could swipe back and forth. Is that correct? That's actually also a really cool idea; I'm really glad you're all giving me ideas, so thanks for that.

You've got Bluetooth on the Raspberry Pi, so you could use a wireless mouse to move a pointer and have an on-screen keyboard.

So yeah, using Bluetooth for input is also one of the things I've been playing around with, mostly via the Gamepad API, but the biggest issue I ran into is that I would then have to create a whole interface here. When I started out with the project that was just a bit too much work; I would have to create a virtual keyboard and everything. But it is one of the things I'm interested in trying out in the future.

If you can point a camera at your eye, you can also use a pupil-movement tracker, an eye-movement tracker.

I think that's one of the ideas that was suggested earlier. Yeah, that's also cool. Thank you so much.

That's it for today. Thank you so much for your attention.