So, I will start immediately by introducing Ben Senior to you. He's an independent researcher and inventor, and he's got a topic I'm really looking forward to at Congress: hacking how we see, and especially hacking how the brain works. Because it's not really about the eyes, we could maybe just use other eyes, but our brain is so versatile that I'm very much looking forward to hearing what I can do, and what other people can do, about that. So that's your applause for Ben Senior.

I'd just like to start with some thank-yous. Thank you to everybody that's put this event on, because it's been as inspiring as every year, and I'm very grateful that I can be here to share some ideas with you, and I'm very grateful that you're here. So I'm going to talk about hacking how we see, and I'd like to dedicate this first of all to my son Arthur; if it wasn't for him, this project wouldn't have started. I'd also like to dedicate it to the many volunteers, many of whom I can't show here, who've given their energy and their time and their inspiration; that's why this project's continued.

So this is all about lazy eye. You probably know a few people with lazy eye, where one eye falls inwards or outwards. It might always be falling to one side, or it might only happen sometimes, when you're tired or ill, like it does for me. It might actually switch sides: sometimes it's the opposite eye that falls to one side, sometimes it's always the same eye. So I'd just like to know, who here in the audience has a lazy eye or might have a lazy eye? Wow. Okay, brilliant.

Before we can talk about what we're hacking, we need to first of all understand what it is we're going to hack. So what does it mean to see? It sounds pretty obvious: we see with our eyes. But we don't really. We see through our eyes, in our mind, and that's something I want to bring home. The crazy thing is that your brain actually starts in the back of your eyeball.
It's the only place where your brain is outside the protective enclosure of your skull. So your eye is more than just this movable window onto the world; it's actually a spacesuit, protecting you from viruses and bacteria and contamination. And the crazy thing is that there are a hundred times more brain cells in the back of the eye than there are axons, channels for taking that information back into the rest of the brain. So what's happening in there? It isn't just picking up light levels like a CCD chip and passing them back to some processor. Those channels are very specific. Some of the channels fire when receptors that are specialized for different wavelengths of light, under different light conditions, fire. Some of them fire when groups of those receptors detect areas of contrast: some receptors detect light and some detect dark, and where there's a contrast, there's an edge, and then they fire. Others detect the movement of light across receptors in particular directions, and then those channels fire. So what you end up with is this stream of structured, meaningful fragments tearing down the optic nerve at the equivalent of 10 megabits per second towards the visual cortex, where it's processed at the back of the brain.

When these fragments arrive in the brain, they're completely context-free, and somehow your brain has to put them in the context of what you've been seeing, in the context of what you know and can recognize. It needs to combine them with your other body senses to make sense of the environment and what you're doing (am I moving, or is the world moving?), and then all this has to come together so that you become consciously aware of the things you want to place your attention on, so the loop can be completed and your eyeballs can track the thing that you're interested in. So this is a simple schematic of some of the basic parts that make up the visual cortex.
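The contrast-detecting channels described a moment ago behave roughly like center-surround filters. Here is a toy sketch of that idea; all the sizes and weights are invented for illustration, not taken from the talk or from real retinal physiology.

```python
import numpy as np

# Toy 1-D "retinal channel": an on-center/off-surround filter.
# The excitatory center weights sum to +1 and the inhibitory
# surround weights sum to -1, so the whole kernel sums to zero:
# uniform light produces no output, while an edge (light meeting
# dark) makes the unit fire.
kernel = np.array([-0.25, -0.25, 1/3, 1/3, 1/3, -0.25, -0.25])

def channel_response(signal):
    # Slide the filter across the 1-D "retina".
    return np.convolve(signal, kernel, mode="same")

scene = np.array([0.0] * 10 + [1.0] * 10)  # dark region meeting a light region
response = channel_response(scene)
# response is ~0 deep inside either uniform region
# and non-zero only around the edge at index 10
```

This is the sense in which the channels are "very specific": the unit stays silent on flat illumination and only fires where structure, an edge, exists.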
All of these parts are, effectively, literally, neural networks; they are neural network classifiers, right? And these are operating subconsciously; they're pre-processing those fragments, and at some point later, past this subconscious stage, you can become consciously aware of things. These are some of the specific jobs that these units are doing, and they're interacting with each other; they're negotiating. The outputs of these classifiers are flying backwards and forwards and round about, and they're sifting a search space: they're trying to find the probable. What is it that you're probably looking at, given this crazy mess of signals coming in?

These are all just words; can you experience this? Yes, you can. We've seen in this congress and other congresses how neural network classifiers can be attacked: you can subtly manipulate the inputs so that the classifier makes mistakes, makes the wrong classification. Optical illusions are exactly this. They are adversarial attacks, except they're operating directly on your brain; they're altering your perception of what's probably there. And you can do this at many, many levels of abstraction; there are so many kinds of optical illusions, but I'd just like to show you a few to make the point. Take a look at the center of one of those circles, and the other ones probably start rotating. Except they're not rotating; the image is completely fixed. It's fooling the classifier in your brain for detecting rotation.
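The adversarial-attack analogy can be made concrete with a minimal sketch against an artificial classifier. The weights, input, and step size below are all invented for illustration (this has nothing to do with the talk's demos): a small, deliberate nudge to the input flips the classification even though the input barely changes.

```python
import numpy as np

# A hand-made "trained" linear classifier (weights invented for this sketch).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def classify(x):
    # Class 1 if the score is positive, class 0 otherwise.
    return 1 if x @ w + b > 0 else 0

x = np.array([0.9, 0.2, 0.3])   # confidently classified as class 1

# Adversarial step: move each input component slightly *against*
# the classifier's score (i.e. along the sign of the weights),
# in the style of the fast gradient sign method.
eps = 0.3
x_adv = x - eps * np.sign(w)
# No component moved by more than 0.3, yet the label flips to 0.
```

An optical illusion plays the same trick on the classifiers in your visual cortex: a small, carefully structured change to the input produces a confidently wrong output.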
This is a really famous picture from the 1890s. The artist has deliberately made it ambiguous: you can classify it as a rabbit, or you can classify it as a duck. Which is pretty interesting, but what's much more interesting is that you cannot see it as both at the same time. Try as hard as you like; you might get quite fast at switching, but at some point your brain makes a binary decision: this is a rabbit, or it is a duck. And finally, I'd like you to pay close attention to the top of the mountains beyond the lake. There's no lake there; that's a white wall at the bottom of a garden. So is what you perceive even influenced by your conscious thought? Very much so, in fact. Misdirection and suggestion, the stuff that magicians work with on a daily basis, and leaders of the free world.

So let's get back to talking about what it means to see with two eyes. We each have two eyes, and each eye sees a flat image of the world, and our brain brings these two images together. In primates, and quite a lot of the carnivores, it's evolved that the eyes are at the front of the skull, looking forwards, and this gives the most overlap possible between each eye, so both eyes are seeing a lot of the same stuff, and this gives us the ability to accurately estimate distances, and that's pretty important for primates, right? And why does this work? Well, on one level, physically, it works because the eyes are separated horizontally; each eye has a slightly different view onto the world. So these are two pictures taken 10 centimeters apart, and at the bottom I've overlaid them, and you can see that things in the foreground have this offset, this parallax. Now if you, in a virtual reality headset or a stereoscope, were to look with the left eye at the left image and the right eye at the right image, you wouldn't see that offset. What you would see are branches popping out towards you. Your brain, your visual cortex, is removing the parallax and presenting it to you as a feeling for
depth. And this is important, this is very relevant to lazy eye: this is something that you learn in the first weeks and months of your life. When you see a baby in the crib playing around with a teddy bear or something, it starts off with both eyes all over the shop, and it learns to look at one thing at a time with both eyes. And the brain knows that it's seeing one thing with both eyes at once, because when it's looking at the thing it's touching, that parallax has disappeared. At the same time, the brain is putting together a feeling for distance. It can feel the sensory inputs from the eye muscles, so it can feel the angle that the eyes are at, and it can feel the thing that you're touching, or, when the child's a bit older, the amount of time it takes to travel across the room to reach something. It can put those two things together and give you a feeling for distance. So when you look at something and your eyes come together: ah yeah, that feels about that far away; I know how far away that is. That's how it learns to develop a feeling for distance.

And this becomes parallelized. If you look past an object, so if you each hold up a finger and look at me, for example, you'll notice, you'll perceive, that you've got two fingers, doubled up. As you're walking around, or running through the jungle being chased by a tiger, your brain is seeing, in parallel, these different amounts of parallax for all the things around you, and giving you this sense, this feeling, for depth and the space that you're in.

So where's this happening? It's happening really, really early in the visual processing system. The inputs from the eyes are routed to this area at the back, the V1 cortex, which is not exclusively where they're routed to, but it is where most of the processing begins, and that's where you find these things called ocular dominance columns. So what's an ocular dominance column? Well, these stripes represent the
inputs, a million plus from each eye: the dark stripes represent the inputs from one eye, and the light stripes the inputs from the other. And if we take a look inside those stripes, at a one by one by two millimeter chunk overlapping two stripes, you'll see something like this. This is called a hypercolumn, and it's a crazy thing you find in the brain. It's a multi-dimensional classifier; in this case, in the visual system, it's classifying orientation, and color at the front. In different parts of the brain it also classifies sound, it classifies touch. What's happening is that the inputs from the left and the right eye, millions of times over, are going into one column from, for example, the left eye, and another column from the right eye, and the further you go from that point down the front face of that hypercolumn, the more work is being done comparing these matched inputs from both eyes, seeing what's similar and what's different. And that is tangibly where the outputs from this classifier are saying "these things are the same" or "these things are different"; that's where parallax is detected.

So where does lazy eye come from? A lot of things can go wrong during gestation. Genes can fail to express properly, there can be toxins in the environment, there can be mutations, the mother might be malnourished or might receive an injury, etc., etc. And the eye is a very sensitive organ; a lot of things can go wrong. Corneas can get damaged, lenses can be misshapen, there can be too much fluid in the eye or too little, the eyeball can be misshapen, the optic nerve can be damaged, the skull might be the wrong shape so the eyes are too high, too low, or off to the side. And at the same time, babies in the womb are developing three to five hundred thousand neurons per minute. So a lot of stuff can go wrong, but many of these causes have the same outcome, and the outcome is that the input from one eye is radically different to the input from the other
eye. What that would normally mean is that you get this situation where the brain can't bring these two images together anymore; there's not enough similarity, it can't figure out where the overlap points are, so you see double, like when you're really drunk. And from an evolutionary perspective, confused monkeys are lunch. So evolution has selected for a plan B, and that's called amblyopia: if the brain detects that it can't fuse both images together, it suppresses one of the eyes. It inhibits the eye so that you only see with one, so you see clearly, which means that you've lost your depth perception, and generally you've lost part of your field of view.

And where does that happen? Well, there are a lot of competing theories; it's not really known. Some of the most recent hypotheses are that there are concrete conflict-detecting cells that look at matched regions of the retina, and if the inputs are too different, they go: whoa, we're not looking at the same thing, are we? Time to switch off. And they probably inhibit the lateral geniculate nucleus, which is this part I've ringed, which is even before V1; it's right where the optic nerve inputs come together. So the eye is functioning, these electrical signals are arriving at the LGN, and then they're being inhibited, so they only carry on in a very weakened state.

The next consequence is often strabismus. Lazy eye has these two components: the suppression, and then the "lazy eye" part, strabismus, which is when the eye falls to one side. It's also not really clear why that happens. It might be because the eyeball is always in a tug of war between muscles on both sides, so that when it's positioned it doesn't just flop about, it's held in tension, and maybe one group of muscles is just stronger than the other, so the eye gets pulled outwards or inwards. Or maybe it's because the brain really would like to have a signal that's as different as possible, so that it's easier to filter out. The
problem is, excuse me, that evolution didn't know about eye doctors. Nowadays you can have eye doctors that can, you know, shoot freaking lasers at the cornea and reshape it, and take out lenses with cataracts, and all sorts of magic, but evolution didn't put a reset switch at the back of the head, so you can't reboot the system. The trouble is, even once you fix the original physical problem, you've still got one eye pointing in the wrong direction, and it's turned off. So now the problem is: how do you undo the original evolutionary solution to the first problem?

I'd just like to talk quickly about some of the consequences, the personal consequences, for people with strabismus. This lack of depth perception means that there are going to be more falls and stumbles and accidents, which is stressful. It generally means they're less refined in using their bodies, which is stressful. It means that people who lose part of their view field aren't always so aware, in social situations, of who's saying what, so you've got to position yourself carefully, and it's another stressor. It's a stressor if it makes you feel ugly. It's a stressor if you're talking to people and you can see they can't tell if you're looking at them or at someone else. Stress, stress, stress; it's not so easy. Sociologically, we're talking about seven million people a year, about five percent of all people, born at some kind of disadvantage which might not always be necessary; perhaps we can do something about this. And because it's across the board, and 74 percent of the world live on less than $10 a day, most people aren't going to have very many options.

Neurologically, what are the consequences? Well, you were born with all of these structures to fuse the images from the eyes together, and you were born with the structures in your brain to coordinate eye movements so they both look in the same direction, and you were born with the ability to perceive depth. But this
topographic disorganization means that stuff wasn't connected together properly in those first phases of life. You didn't learn to put this stuff together; it's an immature system. And in the worst case, if one side of these ocular dominance columns is constantly deprived of input, those neural connections decay over time. You can see here, instead of stripes, you have these thin, isolated islands, and that can lead to blindness.

So what happens now? You might come across, if you have this, a vision therapist. This is Marco, one of our volunteers. They are like speech therapists, who can recover somebody's voice after they've had a stroke, after they've had brain damage, or physiotherapists, who can do the same with limbs. But you're very unlikely to find one of these people: there are around two thousand in the entire world, a thousand of them in America, and they can easily charge north of four hundred dollars an hour. So unless you're wealthy and you've got good insurance, you're far more likely, a hundred times more likely, to come across ophthalmologists, and ophthalmologists, for like the last hundred years, have been pushing patching and surgery.

Patching is this idea that you cover the strong eye to make the weak eye stronger, and this is a really outdated metaphor. It's a misleading metaphor, and I think it's a counterproductive metaphor. That's not to say it doesn't have value, the patching I mean, because patching means that at least the ocular dominance column for the suppressed eye is forced into activity, and that prevents it from decaying. On the other hand, if you overdo it and patch too much, the brain says: well, okay, look, I just want to use one eye; what are you doing to me? You want me to use that eye? Fine. And then it suppresses the other eye and switches over. And even if that doesn't happen, you're only training this monocular ability, seeing with one eye at a time; you're not training this coordination between the
ocular dominance columns, where all this other stuff comes in, the parallax, the depth perception. So some kids get lucky: some kids, through patching a few hours a day, somehow catch the curve and are able to rehabilitate their monocular vision, but it isn't the majority.

The remaining option, as far as ophthalmology is concerned, is surgery: to cut and shorten and stick back together your eye muscles, so that the resting position of your eye is more or less in the middle of your eye socket. This is cosmetic, and that's not to say it's without value; it can improve the quality of life, it can make you feel better about yourself. But it doesn't rebalance the brain; it doesn't mean that you're going to recover binocular vision. But it might be an opportunity, because with the eyes roughly straightened, you can at least look through the lenses of even the cheapest VR headsets, the ones used with the cheapest of phones. These headsets do something very useful: it's less about the virtual reality, and more about the ability to very precisely target each eye and control what we present to it, at a very low cost.

So what if, after the physical problem's been solved, we could hack past that lockdown? We know that these classifiers in the brain, these structures, these ocular dominance columns, learn and adapt through training; we know that they're flexible and plastic. So can we build environments that can subconsciously reboot that binocular vision? Now we get to what we've been doing. These are very basic experiments, and that means they are very basic hypotheses, because they're based on anecdotal observations. That's all they are, anecdotal observations, because these are interactions we've had during development. Every time I sit down with a new person who has lazy eye, we discover new phenomena; it means that four or five things need to be changed on the system. Nothing stays constant at the
moment; it's a constant learning process. So, let's get falsifying. The main, overriding hypothesis is that this does something; we want to falsify the claim that it doesn't do anything. So the first hypothesis: a participant cannot use both eyes simultaneously. We usually test all of our participants beforehand with classic physical techniques, with a vision therapist, to confirm that they can only see with one eye at a time. Then, with the headset on, first of all we want to confirm this suppression. The image at the top is what the person will see in the headset: the left eye sees arrows pointing to the left, the right eye sees arrows pointing to the right. In a normal situation your brain goes, "okay, I'm looking at the same thing, let's try and overlay", and what you perceive is this crazy overlapping mess of triangles pointing in opposite directions. If you show this to a person with regular lazy eye, I've discovered, not alternating strabismus, which changes from eye to eye, but regular, fixed on one eye, what they see is arrows just pointing to the right, or arrows just pointing to the left, because there's an instantaneous suppression.

So, sub-hypothesis: a participant cannot deactivate suppression to see with both eyes. We discovered really early, by chance, that we can suppress the suppression. How do we do that? We have blinkers, on the left side for the left eye, for example, so the right eye sees arrows at the top and the left eye sees nothing there, and the left eye sees arrows at the bottom and the right eye sees nothing there. And because these regions aren't detecting conflict, suddenly all of our participants have seen arrows pointing right and left at the same time. So they're actually using both eyes at the same time, which is weird and shouldn't be possible. But that's not so useful, I think, because we want to get to the point where people are using both eyes to look at the same things. So what we want to achieve is to have both ocular dominance columns receiving an
equal signal, a signal of equal intensity. So the next sub-hypothesis, which we want to falsify, is that a participant cannot use both eyes with suppression active. In the normal state, the signals from both eyes have the same intensity; this is frequency-modulated electrical signaling. The left eye sees an arrow to the left, the right eye an arrow to the right, and in the brain it's perceived as a six-pointed star. If one eye is suppressed, the LGN inhibits that signal, so less signal arrives at the brain, and the brain perceives an arrow pointing to the right only. So how can we match up the stimuli arriving at V1? What we can do is just drop the intensity of the signal from the non-suppressed eye, and that's what we do. The person just tilts their head up or down, and that changes the brightness ratio, the luminance ratio, between the eyes. And again, in all of the cases of regular strabismus or regular amblyopia, there has always been a point where the person suddenly sees arrows overlapping and pointing in opposite directions, which means we've brute-forced our way past their suppression. And that's a ratio we can keep and reuse in other scenes, in other environments. So I think that's falsified: we've shown that we can deactivate suppression, and we've shown that we can overcome it.

We've also had this insight: why did the initial physical tests always show that the person could only use one eye at a time? I think the tests are confounded. Physical tests take place in a room, and there's always a backdrop, there are walls and a ceiling and a floor, and that provides enough input that this conflict is triggered and the eye is always suppressed. Inside a VR headset it's a different kettle of fish. Just worth mentioning, I think.

So the next sub-hypothesis. Right, we've broken through the suppression; they can use both
eyes simultaneously. The participant cannot fuse the input from both eyes whilst being amblyopic and strabismic. Well, we've overcome the suppression; how do we overcome the strabismus? One eye is misaligned. In a VR headset, we can just rotate the entire universe for the misaligned eye. This way, the same object, no matter where the eye is, appears at the same position on the retina, and as far as the brain's concerned, both eyes are looking straight ahead. Fantastic. And this is what we do. The person has the headset on, and we have these two circles. Actually, we have different versions of these environments for kids and for adults and for different situations, but this is sort of our classic at the moment. One circle has cyan at the top, one circle has yellow at the bottom. The circle for the eye that's straight is fixed in local space, so as you move around it stays where it is in view, and the other circle is fixed in global space. So all they have to do is position their head until the two circles overlap, and hold, and that allows us to measure extremely accurately precisely what their misalignment angle is.

And we also get this crazy effect: when the two circles are apart, there's the cyan, there's the yellow, and when they come together, we know if that person is fusing, because they see impossible colors. The brain sees cyan and black at the same position, and yellow and black at the same position, and it doesn't know what to do with it, and it starts to shimmer. It's really incredible; impossible colors are awesome. And we hear this little gasp, and then we know. So I think we've falsified that, at least for some people; in fact, I think it's better than just some people. We can achieve basic fusion, which is a limited form of binocularity. Biocularity is just using two eyes; binocularity is using two eyes together. And this seems to be quite stable and precise.

So, great: we've overcome the suppression, we can compensate for the eye misalignment. Can we
push further into those ocular dominance columns? Can we start to perceive depth? Let's falsify the hypothesis that we can't. This is one of several environments for depth perception, and it's based on a classic test where you have this diamond background and four circles, and we make use of this parallax effect: we just offset the circle for each eye. This is the way all depth perception in virtual reality works: you offset an object in each eye, so you get this parallax and your brain perceives depth, and it seems like the circle with the offset is coming out of the screen towards you. Then we randomly pick a circle, and each time we pick a circle we reduce the offset, so the depth from the background appears to be getting less and less, and they just have to look right or left or up or down to indicate which one of the circles is popping out of the screen.

And what we found is that, despite our initial testing showing that almost everybody had no depth perception, most participants can see the first one or two. That showed us that standard tests produce a lot of false negatives: partly, I think, because of this conflict issue, the conflict cells being activated by the background of the room that they're in, but also because the initial offsets in standard tests are just too ambitious, quite frankly. We also found, for the few people who have been able to come back, because they live in Leipzig, and try this a couple of times, that they've been able to make quite rapid progress, from just seeing the first one or two, to the first four or five, to the first seven or eight, which implies to me that maybe we really are activating something, that something's happening in the background here.

So, we've broken through the suppression, we've compensated for the eye misalignment, we've begun to push up into those ocular dominance columns, to do not just fusion but also seeing
differences, and detecting parallax, and getting a feeling for depth. Then the person takes off the headset, the eyes are still looking in the wrong direction, their brain suppresses everything, and we're back to square one. What was the point? So this is where the really novel and innovative stuff comes in, and this has been the hardest part: can we actually get somebody's eyes to straighten up, and maybe even to stay straight? We had many, many false starts, many ideas that ran into the sand, until this simple hypothesis came together: this is a subconscious process, so let's treat it as a subconscious process, and probably the desire to maintain fusion is going to override this habituation for suppression.

So what does that mean? We did this: we use the front-facing camera of the phone, and it beams what it's seeing in to each eye, where we take into account the misalignment and the luminance ratios. So we can break through suppression, and the person can generally fuse both of those images, and they can walk around using their eyes as if they were both straight. Okay, that's cool. Then what we do is very, very gradually straighten up the image in the misaligned eye, very, very slowly. And nobody we've tried this with so far has been consciously aware of it, even when it's pointed out to them, even when I ask them: can you feel this, can you sense this? It happens completely subconsciously. They maintain fusion, because they can still see the yellow and the cyan bars, the yellow in the left eye and the cyan in the right eye, and there's no doubling up. They maintain fusion, and the thing straightens up, and straightens up, and straightens up, and at some point the eyes have straightened up. It's like: what was that? Huh? Yeah. I think this is a big deal too.

There are lots of other things we're going to do here, but we already do this while you're watching videos, so I think this could be great for kids. Your kid spends five minutes a day
watching their favorite cartoon; why not 3D videos? And we can integrate this with games; I've got a list as long as my arm of things I want to build to improve robustness, so that this really trains that system. But we know this can work. So, in conclusion: I think EyeSkills isn't doing nothing. It's having an effect, and that's not a bad thing to build on. The question is how effective it is, and for whom. Because everybody with a lazy eye, since there are so many different causes and it's a neurological thing, seems to have very subtly different symptoms and very subtly different perceptions of the world. So we need to start breaking it down into different symptoms and seeing what works best for which groups of people. And the big question, with practice, because we haven't had a kit that was ready to take home yet, but we're nearly there now, is: might the day come when the eyes are straight and the brain is totally comfortable and familiar with this, and you take off the headset and it just goes: yeah, fine, this is cool; I don't need to suppress anything; I'm not in conflict anymore; I've got used to this? That's the goal.

In 2019 our emphasis is going to be on trying to really validate some of this with an internal study. We have a university department in Gießen that wants to get involved and do a study, but we're looking for more participants, we're looking for other researchers, we're looking for more people who want to be involved.

And this leads me to the bigger picture: can we build something to really accelerate progress in this field? It's been a hundred years that people have been looking at this. So I've had this dream for a couple of years: if this was open-source software, which it is, wouldn't it be cool if it became the go-to platform for research? Because researchers would have more freedom to rebuild the system, change the system, add new components, without worrying about commercial licenses. They'd have the
flexibility to use each other's components off the shelf. Very importantly, they'd have better repeatability: instead of just describing what they did in a paper, they can share a build. "This is what we used." And that means the same build can be shared between multiple groups, perhaps cross-disciplinary groups, the developmental psychologists, the ophthalmologists, the neuroscientists, each of whom bring different skills and different insights to the table. And I think longevity is also important: if it's open-source, it doesn't necessarily die if a company goes bust.

This takes me to the even bigger idea behind this. I think this could be, and should be, one of, or the, biggest citizen science experiments in the world. I think one of the things that's really held back progress is the small cohort sizes, and the costs involved in getting people to come into the department, and that you only see them once every few months, and maybe they were having a bad day, they were hungry or tired, and it goes particularly well or particularly badly. What if hundreds of thousands of people were using this at home? And what if we could use classic gamification and classic game-testing techniques to A/B test different environments, to see which work most effectively for different groups of people with different kinds of symptoms? Let's use gamification and data science for good instead of evil. And that means the professionals can focus very tightly on people with specific symptoms, and with all of their experience and their expensive machines that we don't have, fMRI scanners and all the rest of it, they can take the environments that have the most positive impacts, or even the most negative impacts, and try to study why. They'd have a better means of accessing those behaviors.

So I'll just quickly talk a little bit about the framework, because the app I showed you, the system I showed you, was really to help us figure out what needs to be in the
framework. The framework is still very immature, but it's heading in the right direction. The app is very simple to modify, to put in new environments or take environments out. This is perhaps the most important part: there's a camera rig inside there which I call the lazy camera rig, because you can take any Unity game, you just rip out the virtual reality camera and you plop in this camera, and it knows about eye misalignment and luminance ratios and eye straightening, and it knows how to talk to the right data objects to store this information over time so that you can see rates of progress. And the camera has these micro-controllers on it, which are extensible, and they let a practitioner manipulate in real time what the person's seeing: adding cues so you can tell if the person's seeing with both eyes, or adding conflict, or changing the luminance ratios manually, or swapping assets from high-contrast to low-contrast assets, and so on. And that works because we have this real-time remote inspection API over WebSockets, so any number of practitioners can see what's happening inside the VR headset of the participant and manipulate that environment. They can take control of it, they can change its parameters, they can do all these things remotely, which also means you're not bound to people coming into your clinic; you're not even bound to people being in the same country. And finally, to keep the costs down, we've built this gesture control system which doesn't rely on your typical VR UI inputs with reticles and menus and things, because first of all you can't tell what the eyes are doing, and secondly you don't want to interfere with what people are seeing. So it's all based on just moving the head, holding the head still, and looking up and down. Again, the goal is to keep this as cheap as possible, so that all somebody needs is the phone they've already got and, you know, an eight-dollar headset with reasonably
large lenses that they can put their phone into. So next year: out into the world, that's the goal. I'm hoping that this kicks off a little revolution. I'm hoping that it brings together the ophthalmologists and the vision therapists and the neuroscientists. I'm hoping that we manage to get this to spread across the world, and I'm hoping that we can build an organization that isn't going to just run out of steam and energy and time and money; I'm hoping we can build something that's going to last. And most importantly, none of this is going to work if we can't deliver it at a price that people can actually afford. So to do this, this is where I beg: we need more people with more skills. We've got ideas and designs for headsets that have two outward-facing cameras, for example, so you have a stereoscopic view of the world, so you can also walk around with the headset on and perceive real depth. We need inward-facing cameras that do eye tracking, but we need them for a couple of dollars, not a thousand dollars. I've got some ideas, but I need people with skills: maker skills, legal advice, because we are going to be stepping on a few toes with this, neuroscientists, graphic designers, game developers, growth hackers, accountants, you name it. If you have any spare time, if this is interesting to you, if you want to get involved, please take a look at eyeskills.org; there's a little form there under "volunteer", and you can tell us a bit about what you can do. And finally, I'd just like to say thank you to some of the organizations that have helped us get this far, and I'd particularly like to single out the Prototype Fund, because they gave us money. They are brilliant: they fund over 40 open source projects a year, and they give you money and they get out of the way and they let you get on with it. So take a look at them. And that's all I've got for you, thank you very much.

Thank you so much, Ben. I assume you have plenty of questions, and you're already lining up in front of the microphones, so I'll take
first microphone two, and I'll check for the signal angel. So right now I'm focusing on trying to make this something that people can just use at home, every day. I'm totally open for new ideas, but I think every day it should go through this kind of calibration process, because that lets us see whether there have been improvements, and in what way, compared to before: so for example, figuring out what the current suppression ratio is and the eye misalignment, simple as that. So in that slide that you saw, just to keep this quick, you go through one step at a time, and each step unlocks the next step until you get to seeing straight, so it should be a very user-friendly process. And I think there'll have to be another version which has all the bells and whistles and all the environments we took out of this one, for use in a clinic, where you can do everything possible with it. Does that answer the question? Awesome, then we take microphone two again. Sorry, can you say that again? The performance? I'm not quite sure I understand. Is the performance degraded if you apply these techniques, is a game slower than before? Oh, I see: does the performance degrade, is a game slower than before? No, it shouldn't do, but I would love to know, so get in touch, that's right, try it out. I'll take microphone one now. Hi, over here to the right. My question would be: you said that people often get surgery after they had a condition like that. What is the condition afterwards? Are the eyes fixed in place, or how much does it actually solve the problem, or create a new problem? It doesn't solve the problem, except if you define the problem as being a cosmetic problem. You can do eye surgery up to six times before the muscles are shot and you can't do it anymore, and I think fairly often you have to do it multiple times because the eye keeps relaxing. I suspect that's actually also a feedback process: the brain would like to have the eye falling to one side to make it easier to filter out those
signals. So effectively it's shortening the muscles or something? That's right, it's just shortening the muscle, so the resting position is dragged back more or less into the center. And I think it's partly to do with the fact that most ophthalmologists really believe that beyond a very young age there's just no hope anymore, and that has been pretty conclusively disproven by the neuroscientists: the brain stays plastic and you can learn new things, just as in the speech therapy case or the physiotherapy case. And of course, it's just what the institution has developed to do; it's developed to do eye surgery, that's what they do, but that doesn't mean it's the right thing to do. But I hope we can use it, so we can add these techniques onto the surgery so that it becomes more than just cosmetic: it becomes the gateway to actually recovering vision with both eyes.

So I'll take microphone two now, and quickly get feedback from the signal angel please, whether there are questions. No? Okay, so microphone two please. Okay, I have two questions. First: I have alternating strabismus, so I wondered how you're able to work with that, because actually I can switch like instantly. Sure. I know from those tests that usually, if you give me two pictures, I'm always like: left, I see that; right, I see that. Sure, thank you. And secondly, I always have a hard time explaining to people what it means to lack depth perception, because I have my whole visual field but I keep on bouncing into things, and I'm always using this: hey, I don't have stereo vision, that's the reason. And then I always try to be like: okay, so just close one eye, and now do you see a difference? And some people see it and some don't. Do you have a test for people, to explain what it means to lack stereo vision? Well, first of all, I'm incredibly excited to sit down with you, because you're now the third person with alternating strabismus. Yesterday I sat down with Fabian in the audience, who
has alternating strabismus, and before that, Cliff, and alternating strabismus appears to be a whole different game, a whole different thing, and it seems to be where we can make the quickest progress. Well, I'm, you know, don't make me make promises or get any more hopes up or anything, right? I mean, all we can do, the only person that can help you is your own brain, and we don't know, we don't have any definitive proof yet; that's what we're working towards. But I promise you this: you will have a very interesting experience. Do you have some tricks to tell stereo or binocular people how it is to see only monocular? I think, in your case, because you have something that's called panorama vision, where, because of the alternating nature, your brain knows that the eyes are looking at different positions and it kind of puts them together, like two monitors, and extends your field of view to some extent, probably with a portion in the middle where there's some suppression. And that's something that is very useful if you're a truck driver, but it's not necessarily something, I think, that people with regular vision can experience; it's too far out. We could perhaps create a picture, no, we could produce it: we could take the input of the camera and we could map it to this panorama-like view with a dead spot in the middle or something. We could do something that might give people a feeling for that. Yeah, right, thanks, stay in touch. So, microphone one please. Yes, thanks a lot for the talk. I was wondering, you mentioned that you have the program that adjusts the position of the eye inside the headset. What happens when you remove the headset after the alignment? So, what happens: this is really fresh, so we haven't had people able to take this home and practice it for any length of time, but what typically happens is the headset's removed and the eyes are together, and they
stay together for a little while until, you know, old habits reassert themselves. The question is what happens after longer exposure to it, and all of the other techniques that I want to build in to increase robustness, to increase the ability of the person to hold their eyes straight and to feel comfortable with them being straight and not go back into this suppressed state. That is 2019. Thank you. Great, microphone two please. Thank you for this. During your experiments, if I understand it correctly, you are not able to work against the underlying cause of this, whatever is wrong with the eye. Do you already have some experience of whether someone had an operation to reduce the impact of whatever caused this, and how did it affect your experiments? Yeah, this can only work if those original problems, if it's possible to tackle them, have been tackled, and that's generally something as simple as the person wearing glasses, because the most common cause is that one eye is stronger or weaker than the other. Beyond that, again, I think the only way to discover these things in practice is to make it available to a great number of people and to have them explain very carefully what their history has been, so we can begin to piece together where this stuff can work and where it probably isn't going to work. All open for data gathering, basically. Thank you. Microphone one please. I have two short questions. The first one is: do you think this app would be helpful for somebody who already had the surgery at a very young age, would it still help? And the other one is: would you think it's more useful to start with the app before surgery, or just wait until somebody is a bit older and then use the app, because surgery is usually at a very young age? So I'm kind of loath to say things that are too concrete before we've actually done trials and whatnot, but I think it's definitely worth trying, yeah, definitely. I don't see why not; surgery is only adjusting the eye
position, and that's all it's doing, so why not? The surgery is useful at the moment because it means you can see through the lenses of the headset. If you've got extreme strabismus, you look past the lens, so you're just not seeing enough of the screen, you can't do anything, and in that sense the surgery is very useful. Thank you. And, adding to that question, would you need another headset that has the lenses at other positions then, like adjustable headsets? Well, I went to Seoul and I talked to LG about some ideas for a full field-of-view headset, and in that week Samsung issued a patent, which is actually a very good patent in the sense that they're obviously thinking about this, and it does pretty much everything right. They have flexible OLED displays, and they've been thinking about how to organize the lenses in such a way that, you know, in a headset now, you look straight ahead, oh, there's something over there, oh, I'm moving my whole head; you need to be able to move the eyes fully in every direction, that's what full field of view is. And yeah, they have this patent, they've figured out a lot of the problems that need to be solved, that's obvious. So I hope in a few years, maybe, if VR doesn't die a death, Samsung will bring out this headset and we can abuse it for our own purposes. Nice. So we have a few minutes left and two questions on the microphones, and then probably a thousand questions afterwards directly to you. So I take microphone one first. Did you test your hypothesis that the usual tests fail because of the background environment? Because you could easily add environment and background to your VR. Well, obviously, this is still a volunteer project, so we just have super limited resources, energy, time; you know, the disparity between what we want to do and what we can do is just enormous. If you were interested in getting involved, or somebody wanted to answer that question, please join us. Thank you. So, microphone two please.
So, I think using the Unity SDK is a good way, because there are a bunch of game developers that are building their VR content using it, but have you considered using WebVR? Because it's basically how you can unite all the different platforms, the VR headsets, and there is a standardization process from the W3C. So if you go down a layer of the stack, then you're going to reach a much wider audience, and with a ton of content, all the videos and everything. We had a look and a think about that, but I decided to be pragmatic. Unity is the main environment that most game developers use, and it means that it's easy, because it's not just about the language or the compatibility, it's also about the assets that are out there, the libraries that are out there, the skills that are out there, the size of the community. And to make it as accessible and pragmatic and reliable as possible, I've gone with Unity, because it also supports all the headsets. Time will tell if that was a good decision or not, but it's a good suggestion, thank you. So that's also your first contributor from this group, maybe others as well, already volunteering. Thanks again for this impressive demonstration here, and thank you all for coming to the first talk on the last day of congress. Thanks again.