imagination activity: imagine that you could take all of the experiences you have made at this Congress, take them home with you, and share them with your friends and your family, everything. The first time you tried mate, the endless moments of going through the labyrinth of this Congress hall and getting lost for the hundredth time, trying to enter the club downstairs and feeling like you could cut the smoke with a knife, or being very surprised to see Snowden appear on stage. All of these wonderful memories, imagine you could record them and take them home with you. Today this kind of recording is actually becoming possible; there is technology that could theoretically record everything 24/7. Kai Kunze will give us an overview of sensing and actuation technology and tell us what the future holds for AR and VR. He has brought with him some experience, some projects that are happening right now, and will give us an overview of how we are already pushing the boundaries of shared human experience today. Kai is a researcher in wearable computing, AR, and VR, and, really cool, he is also a founding member of the Japanese Superhuman Sports Society. Thanks. Is this on? And without further ado, Kai Kunze. Enjoy.

Is the mic on? Yeah? Okay. Thanks a lot for the great introduction, and thanks a lot also for waking up so early in the morning to come and join the talk. It's always a special occasion for me to talk at the Chaos Communication Congress, and I hope that for the next 30 minutes or so I won't be wasting your time.
So the talk will take us from superhuman sports to amplifying human senses. As the introduction already said, we are in year zero of consumer-available VR devices, so the Oculus Rift, the Vive, the Sony PlayStation VR system, and the cheaper variants like Google Cardboard, and we are on the brink of getting consumer-level AR; even my grandparents have tried the HoloLens. I always like to include pictures of my grandparents because they are awesome: they try out any type of technology I give them and they provide valuable feedback.

However, I have to start with a big disclaimer, and I will disappoint you, especially regarding the great introduction I got for the talk: please don't trust me, especially about predictions. I have been working in the wearable computing field for 15 years now, and around 2004 I thought that today we would all run around like this, and I see a lot of you wearing your head-mounted displays and carrying your one-handed keyboards... nobody, damn it. So I'm really, really bad with predictions, and that was me at the time: a one-handed keyboard and the QBIC, a belt-integrated computer from ETH. You have to remember that at the time we didn't have smartphones, so we had to carry our computing somewhere else. And, seriously, a couple of years back I also thought Google Glass and this type of thing would take off. So don't trust me about predictions, and don't trust me on what's coming next. Still, you might wonder why you got up so early this morning. What I will do today is talk a little bit about a hot topic in Japan right now, and that's superhuman sports.
The idea of superhuman sports is to use technology to enhance human abilities beyond what we usually can do, and because we are in Japan, we of course founded a society: for one and a half years now we have had the Superhuman Sports Society. Disclaimer, as said in the introduction: I'm one of the society's founding members. The idea originally came from Inami sensei, Rekimoto sensei, and Nakamura sensei. I usually don't like name dropping, but if you're interested in human augmentation and human-computer interaction, check out all the works from Inami and Rekimoto and follow them closely, because they are two of my personal heroes. We now have the society and we're trying to expand it internationally, so for a couple of months we have been working with the Sports Engineering Institute at TU Delft. I don't want to bore you longer, but I just want to highlight one person on the advisory board, and that's Robert Riener, because Robert kind of saved 2016 for me. You know, the year started with Bowie and so on, and then Brexit and Trump, but Robert organized the Cybathlon in Zurich. I just wondered, how many of you have heard about the Cybathlon? Oh, okay, a couple, whoa, nice. Has anybody been there? Oh yeah, even better, cool, my kind of audience, awesome. This was an international competition for disabled competitors using bionic assistive technology, exoskeletons and so on, and it was just great, and it's in the spirit of what I want to talk to you about today.

What I want to talk about today, then, is superhuman sports, and I selected these topics: augmenting our body, augmenting the playing field, augmenting training, and sharing. I will show you a couple of examples from researchers in the society. First, going over to augmenting our body: there's a lot of work that is purely mechanical, which is not so interesting for us computer scientists, however it looks cool, so I
included it. One is a company called Skeletonics: they manufacture exoskeletons that are driven entirely by your own muscle movement, so no IT here, but still cool. Another is the Bubble Jumper, an idea that came out of a hackathon we organized at Keio. We started with first sketches and so on, but the important part of our hackathons is: you have to demo, you have to implement, it should be playable. You see the combination of a bubble ball and Skyrunner stilts to make a very safe and nice sport, "the coexistence of safety and force"; I'm not so sure about the safety.

Now, getting over to something a little bit more serious: SpiderVision. This is work from Inami sensei, and it's really about extending the human field of view. It's very, very simple: they use the Oculus Rift DK1 with two cameras, one facing front and one facing back, and they overlay the two video feeds on top of each other, like you see right here. The interesting part is that at the beginning it looks really terrible, but after two or three minutes of using it you can really separate the two images, and you can even read or do other things, so you can be aware of your whole environment. The only trouble, and that's something you see here, is that because you're wearing what is still a VR head-mounted display, there's an offset from the camera to your actual eyes, so it's very hard to interact with your environment. But that's the initial work toward where you want to get.

The next work I want to show you is the Synesthesia Suit, a full-body haptic feedback suit made by Mizuguchi sensei and Minamizawa sensei, two of my colleagues at Keio Media Design. It was first built to promote Rez, a VR game for the PlayStation, and what it gives you, as you see here on the right side, is haptic feedback: 24 vibrators across the whole body, so you can get a sense of touch over your whole body.
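Stepping back to SpiderVision for a moment: the core trick, overlaying the front and back camera feeds, can be sketched in a few lines. This is a minimal illustration, assuming both feeds arrive as same-sized RGB arrays; NumPy stands in here for the live camera pipeline, and the 50/50 blend weight is my assumption, not the actual SpiderVision implementation.

```python
import numpy as np

def blend_views(front: np.ndarray, back: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Overlay the rear-camera frame onto the front-camera frame.

    Mimics the SpiderVision idea: both feeds are shown at once, and after a
    few minutes your brain learns to separate the two images.
    """
    # Weighted sum, equivalent to cv2.addWeighted(front, alpha, back, 1 - alpha, 0)
    mixed = alpha * front.astype(np.float32) + (1.0 - alpha) * back.astype(np.float32)
    return np.clip(mixed, 0, 255).astype(np.uint8)

# In a live setup you would grab `front` and `back` from two cv2.VideoCapture
# devices every frame and send blend_views(front, back) to the headset display.
```

A higher `alpha` weights the forward view more strongly, which might ease the initial adjustment period the talk mentions.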
The idea Mizuguchi sensei is pursuing with the suit is to give you simultaneous visual, auditory, and haptic feedback to put you in some kind of trance state, and if you try Rez with this suit, it's quite an experience: you get visual, auditory, and touch feedback all at the same time.

Then, moving over to augmenting the sports field, here I want to highlight a work from Rekimoto sensei, AquaCave. You see a water tank you can swim in, but the swimmer wears 3D shutter glasses and there is a 3D display around the tank, so you can really dive near the Great Barrier Reef or hold competitions with sharks and similar things.

Then, augmenting training. Here I will show a little bit of older work that some of you may know, but I find it cool enough to still include: galvanic vestibular stimulation, by Maeda and also Inami sensei. What they were doing in 2004 is applying a tiny current behind your ears: you have an anode and a cathode behind the ears, and if you apply this tiny current you're messing with your sense of equilibrium, so you will tend towards the anode. The reason I show this video is that this would be impossible for a researcher in Europe or the US: you see a student wearing the device and the professor with a control unit who can steer the person left and right. I have to say I tried it. It's not safe to try out, we don't know about long-term effects, but it's interesting. It's also user dependent: depending on the user and on your skin, you might need more or less current. It's an interesting thing to think about, especially for sports that require equilibrium.

Then another work. This is from one of my students, Takuro, who was working on peripheral vision glasses, and his first idea was to just
give you notifications in a way that other people don't notice, so I could keep looking into your eyes and you wouldn't realize that I might be getting a notification in my peripheral vision; I can still keep eye contact. So far, if you look at your phone or at your watch, the other person will realize that you're looking there. He started with this idea, and the boring thing is: yes, it works for notifications. You can have up to eight different types of notifications (I don't know why the video stopped, but that's not so important), and the other person cannot see that you're getting them.

However, what he really wants to do, and here we get to the interesting part, is to try to influence movement through your peripheral vision. There's some interesting related research from Furukawa et al., also at the Augmented Human conference, where they used patterns on the floor to guide pedestrians and influence their movements. Takuro wondered whether the same is possible using peripheral vision glasses, and this is the first test; it's partly unpublished, but I wanted to share it with you. In this case he renders lines that move along the periphery while people walk. And guess what happens if you let the lines move faster? Yeah, I heard it: people walk slower. So it's already possible to use this to influence speed, and the next thing he wants to try is to influence direction as well.

The next topic, then, is augmenting watching and cheering. Here I want to show AffectiveWear: glasses that can detect your facial expressions in a very cheap and simple way. That's Masai, and you see the faces he makes; he wears his own self-made glasses. How this works is that he uses photo reflective sensors in the glasses frame and measures the distance from the frame to the skin.
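As an illustration of how such distance readings could become expression labels, here is a toy nearest-centroid classifier over per-user-normalized sensor vectors. The sensor count, the numbers, and the function names are invented stand-ins, not Masai's actual implementation; the three functions loosely mirror the normalization, training, and recognition phases described in the talk.

```python
import math

# Toy photoreflective pipeline: each "reading" is a list of frame-to-skin
# distance values, one per sensor. Values are invented for illustration.

def normalize(reading, baseline):
    """Normalization phase: express each sensor relative to the neutral face."""
    return [r - b for r, b in zip(reading, baseline)]

def train(samples):
    """Training phase: average the normalized readings per expression label."""
    centroids = {}
    for label, readings in samples.items():
        centroids[label] = [sum(col) / len(readings) for col in zip(*readings)]
    return centroids

def recognize(reading, centroids):
    """Recognition phase: pick the expression whose centroid is closest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(reading, centroids[label]))
```

For smiling, the sensor above the cheekbone reads a shorter distance as the cheek moves up, so the "smile" centroid separates cleanly from "neutral", which is why, as the talk says, one photo reflector plus a reference is already enough for smile detection.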
With this, you can tell per user quite reliably what type of expression a person is making. You see, I like the training face. Here you see the photo reflective sensors and the distance measurements between the sensors and the skin; for a start he has to calibrate the sensors once. And here you see the principle: for example, when you smile, your cheeks move up, so it's easy to recognize that you're smiling, and even laughter. Eight facial expressions are recognizable, and I just like Masai making faces. That's the normalization phase, then the training phase, and then the recognition phase. It's user dependent, but it's very cheap, and if you want to build this, smiling especially is easy: you just need one photo reflector above the cheekbone and maybe one for reference, and then it's easy to implement.

Of course, what we can do with this is put it into a VR headset, and then your character in VR can have the same facial expressions. You can think about remote participants in an event, showing the mood of the remote participants. It's not playing, yeah, but I think you get the idea; I don't know why the video stopped. More interestingly, you could also use this for design: if you want to design a very, very scary dungeon, you can detect how people (now it's playing, yeah) are reacting to certain elements in your VR environment and then make the right adjustments. You could also wear it throughout the day and get an overview of your daily activities: how much smiling versus frowning you are doing, related to your activities. That's interesting, and it's in line with work from Rekimoto sensei, who did the Happiness Counter. Here the idea is that he uses the smile as an interaction modality, so in the morning when you wake up and
your alarm clock rings, you don't have to push a button, you have to smile at your alarm clock. That's a little bit scary, or creepy. And while we're at it, this next one is a little bit of a joke: if we recognize that you don't smile enough, we can also make you smile. The smile here is actuated through electrodes. As I said, this is a bit of a joke; I hope nobody will build these types of technologies in the future. But the interesting part is that my students were playing with EMS, and we realized that most of the nerves controlling our face come over from the side of the face, so if you put electrodes on different spots there, you can activate different muscles. The students wanted to get smiling working, but when they first tried it on me, my eye was blinking, among other things; you can also trigger eye blinks. This is a little bit on the joke side, as I said, but if you want references for more serious electrical muscle stimulation work, I refer you to Max Pfeiffer and Pedro Lopes. They also have a kit out, with URLs if you want to build your own EMS setup, and the source code for all of their papers is out there on Bitbucket and GitHub. And this is also just cool: cruise control for pedestrians, and Muscle-plotter, using your body as output for a computer.

However, now getting back to superhuman sports: the idea of the Superhuman Sports Society is to work towards the Tokyo 2020 Olympics, and we want to create an augmented game culture in Japan. So not only inventing new sports and using them to pursue amplifying human senses, but also creating the culture around these sports. We're organizing workshops; this is an example from YCAM from the beginning of this year. You have a lot of
devices, and you get together researchers, artists, designers, but also local people who are just interested in participating. We do one day of ideation, trying out the different types of technologies and research outputs; this is, for example, a system from one of the researchers where you have a view of all four players at once, I think. Anyway, at the end we try to design these games together. These are some more pictures from the workshops, and the idea is, first of all, as I said before, to create proofs of concept by making and playing augmented sports, so not just ideation and sketches, but real demonstrations: demo or die. Also getting fit ourselves, which so far hasn't really worked for me, but maybe in the future. And building up this community of augmented game creators.

Now you might wonder about the scientific impact. It's a lot of fun to do this, but why am I focusing on it? If you heard my previous talk two years ago, you might know that I'm interested in recognizing and improving cognitive activities: concentration, attention, comprehension, and I focus a lot on eyewear computing; that's my old talk. A little bit of background about myself: I was working with JINS, a Japanese glasses company, on sensing glasses that have an accelerometer and also some low-level eye tracking in them. And since December I have some good news: I have support from the Japanese government, specifically JST, to sponsor a collective open eyewear platform, and I'm also looking for collaborators to work on this, because I think it's crucial to start now. I don't want to end up in the situation we have today with our mobile phones, where most people effectively have two providers, two big companies that tell you what you are able to run on your phones. And for eyewear, you know,
believe it or not, given that my track record predicting the future is not so good, I think eyewear is the next stepping stone, the next thing, and I really want an open eyewear platform out there. So that's the background on myself, and for me superhuman sports is a great testing ground for these types of technologies, for not only doing sensing but really trying to amplify human senses. I have to hurry a bit because I'm behind time and I still want to show a demo.

The idea here is that digital sensors are, in some respects, already better than human senses: you can have higher frame rates, you can have a broader spectrum, so you could see infrared and so on. What is lacking on our side is the interface to them. So I really wonder: can we create new and amplified senses based on digital technologies? Staying with the digital camera analogy, I want to give you a small example at the end of the talk, and that's squint-to-zoom. If I were standing here and there's a sign in the back that I want to read but can't with my normal eyesight, I will squint. Wouldn't it be nice if I could wear some kind of glasses that, on recognizing that I'm trying to read the sign and squinting, would just zoom it in, without me even realizing that I'm using technology? Unfortunately, I wanted to show you a demo on the HoloLens, I have one here, but I didn't manage to get it working on the HoloLens, and I'm happy that I have a student, George, who could set up a demo on a laptop that gives you a feeling for how this works. Can you switch? Yeah. So this is unfortunately just on the laptop, but you can get the idea. I'm using a Tobii eye tracker to track my eyes, I have a camera and simple OpenCV code to track my face, and if I now go ahead and squint, it zooms in. I can also show you this on a document.
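The behavior of such a demo can be sketched as a tiny per-frame control loop. This is only an illustrative sketch: the threshold, step size, and the "openness" measure (in George's version, derived from the eyebrow-nose relation and eye size via OpenCV) are assumptions of mine, not the actual demo code.

```python
# Squint-to-zoom loop sketch. Assumed inputs per frame: an "openness" value
# from a face tracker (smaller while squinting) and, in the real demo, a
# gaze point from the Tobii tracker to center the magnified region on.

SQUINT_THRESHOLD = 0.25   # openness below this counts as squinting (made up)
ZOOM_STEP = 0.1           # zoom change per frame (made up)
MAX_ZOOM, MIN_ZOOM = 4.0, 1.0

def update_zoom(zoom: float, openness: float) -> float:
    """One tick: zoom in while squinting, ease back out otherwise."""
    if openness < SQUINT_THRESHOLD:
        return min(zoom + ZOOM_STEP, MAX_ZOOM)
    return max(zoom - ZOOM_STEP, MIN_ZOOM)
```

Each frame, the current magnification would be applied to the region around the gaze point and rendered back to the display; ramping the zoom gradually rather than toggling it avoids jarring jumps when the squint detection flickers.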
Squint, and it zooms in; stop squinting, and it stops. That's actually a very, very simple type of interaction. I had hoped to show it to you on an AR system, but that's maybe something for next year. Here again, as I said, is the desktop eye tracker, and I can show you a little bit of how it works: this is just an OpenCV system. I think George uses the eyebrows, their relation to the nose, and the size of my eyes, so if I squint, this relation changes and it zooms in, and if I stop, it zooms out. This should give you a small idea of what I'm trying to create: technology that's so easy to use that you don't even realize you're using it, similar to analog glasses, and not only for vision but for hearing and the other senses as well.

This brings me to the end of my talk. I just have some obligatory thank-you slides: these are all the people I'm working with who contributed to the research you've been seeing, and I have a special thank-you slide for the students who actually did the work, especially George, who made the demo you just saw, but also Masai for AffectiveWear, and the rest of the crew. So now I think there's time for a couple of questions, remarks, or violent dissent. Thanks a lot for your attention.

Wow, really cool stuff, that was so cool. You have a chance to talk to Kai right now, to ask him questions. There are four microphones in the room, so find yourself behind one of those if you have a question. The internet is also always able to ask questions, if they are awake by now. Does the internet have anything? Yes, one question from the internet: is the code for eye tracking available? Which one? So for eye tracking, I can recommend Pupil Labs. The eye tracking code right now just uses the Tobii EyeX, so that's Tobii proprietary; they have an API, and it's just a desktop eye tracker. However, if you
want to have a mobile eye tracker, I would recommend Pupil Labs. That's an open-source eye tracker; they just keep the latest revision of their hardware closed-source, and they have all of their code up on GitHub, so it's very, very easy to adjust, and they also have mounts for VR headsets, so you can easily use it: you can just build in your own two cameras, that's easy, and use their code. If you have budget constraints, the Tobii EyeX is okay, but I don't like Tobii or SMI so much, because they use closed-source APIs and it's hard to get to the actual images. You want, for example, pupil dilation, because that gives you information about stress and other things, and that's not so easy to get.

Great. Actually, let's go to microphone number six, please. So I have just a quick question about how open you are to collaborating with different universities from around the world, to use your technology in research. I work in the affective sciences department at the University of Geneva, and we actually use a lot of motion tracking; this eye tracking would be super interesting for affective science research, and especially facial expressions as well, we use those a lot, so AffectiveWear would also be interesting. You've discussed using it for opening up possibilities in the real world and enhancing the general population, but how open are you to using it for research as well? Very open. I mean, that's also the reason why I give this talk. Now I finally get a little bit of money from the Japan side to fund this, and I really want to find people who want to collaborate on it. And especially seeing how Silicon Valley now also invests in eyewear, and I think the next thing we will see is eye trackers in VR and so on, and how closed those systems are, I would really like to have this collaboration. So just try to catch me afterwards, or just send me a
mail; I would love to keep in touch. Great, thank you, and microphone number two, please. Well, thank you, amazing talk. How important do you think the combination will be of external signals, like squinting or looking at something, and internal signals, like directly measuring brain waves or an EKG? Do you think it is important to combine those? It is important to combine those. I actually started looking into a lot of EEG signals when I started this work, but it's very hard to get through the skull, so EEG and brain-computer interfaces will, I think, take a while to work, and you also have the trouble that the signal processing is relatively hard. Because I'm lazy, I started with the eyes, and the eyes already give you a lot of the information you would want to get from the brain. I like to think about combinations; for some things we are already working with EEG too, and I think it's crucial to find the right type of sensing so that it works throughout everyday life. One example was the JINS glasses, and now with the JST project what we are really trying to do is find simple sensors that give you some information about your brain activity. So I think it's really a combination of multiple sensors, and, as I said, the eye is very, very interesting. Yeah, thanks. We have time for one more brief question, number four, please. When I was wearing AR glasses, my brain got the distance to objects wrong, so I couldn't grasp anything; it just wasn't where I expected it. Is there any chance this problem will go away? This depends highly on the lens, so there's still this problem; VR also has this trouble with depth perception, and I think you will always have this issue. I found AR, so the HoloLens, quite good for this, but there the problem is also that you cannot grasp something because the perceived distance to the object is off. So I'm still wondering; I haven't seen something that's
really, you know, graspable in VR, where you have the right distances. I think either your brain has to get used to it, which I think it does after a while, so it gets the distance estimates right, or we have to employ some other tricks; what exactly you can do for that, I'm not so sure. Great, and with that we're at the end of the talk. Thank you so much, Kai. Please help me thank Kai.