All right, yeah, so hi, my name is Kenneth. If you want to follow me on Twitter, you have my handle there; it's spelled with "oh" and not "ho" like everyone tries to use. So that's me. Next slide. I work at a company called Intel, and normally you wouldn't associate us with browsers, but we actually work a lot on browsers, and I myself have been involved with WebKit since around 2006. I used to work for Nokia, where we did mobile browsers; I worked on the team that did the first WebKit2-based mobile browser, before Apple, so that was pretty cool at the time. Since then I've moved to mostly working on Chrome, and on standards, mostly at the W3C, but lately also a bit on JavaScript itself. Next slide. So, our team at Intel: there are multiple teams working on the web, but I'm in the Web Platform team, so basically it's our responsibility to work on the platform itself. There are different reasons for that, but we're trying to keep the web open and relevant, because if the web is there but no one cares about it, it's not relevant any longer and it doesn't really matter. So we're investing in the web, and we do that by doing feature implementations and working on specifications. We also work on the browsers themselves to add these features and, of course, optimize them so everything runs well on our hardware. We also track what's going on on the web, identify new trends, and see where we can participate and maybe enable something, say WebVR on a Chromebook. Any way we can help bring the web forward is what we're concentrating on. Next slide. Okay, so today we're going to talk about sensors. You might be wondering what sensors are good for. Well, you can group sensors into different categories. There are motion sensors; these are for sensing speed, direction, and the state of rest, so you could imagine using these as a game
controller, for instance for a maze game, for tracking a head-mounted display in VR and AR use cases, for indoor mapping, or for photo and video stabilization. Next slide. Okay, there are also environmental sensors. This could be, for instance, a barometer, which can be used to figure out the altitude; this actually exists in some phones. You also have a magnetometer, which senses the magnetic field next to the device, so you can use that, for instance, as a switch. This is what the first versions of Google Cardboard were using: just moving a magnet acted as a button on your headset, and that actually worked as a button, using only a magnetometer. Or you have a light sensor, so that at night, for instance, a map can turn the screen dark. Next slide. So the starting point, when you start looking at this, is called Generic Sensors. There's already support for sensors on the web: we already had geolocation, and there were ways to get acceleration or device orientation. But all of these sensors worked differently; they had different APIs. They were kind of high level, so if you really wanted to do something a bit more low level, a bit more customized, it was really difficult. They were based on old API designs, whereas today we have ES2015-plus, we have promises, and so on. Even worse than that, some of the features just worked differently on iOS and Android; you had to say, okay, now this is 90 degrees off because I'm on iOS, so let me correct for that. That was a terrible developer experience. Or, say, on one platform it's using a magnetometer and on the other it isn't. Apart from that, it was not designed with security in mind. Next slide. So if you look at these motion sensors, the spec covering device orientation and motion events had a lot of issues. It
was based on regular DOM events, so there are some limitations there. You could not configure them; for instance, if you wanted to say, I don't really need 60 frames per second, I don't want to drain the battery, I just need 20 or 10, you couldn't configure that. And because of these high-level concepts, and because it was underspecified, different platforms implemented the features differently. There are also no timestamps, which is really important if you're going to do manual sensor fusion, and the events are really difficult to extend. Furthermore, there were issues that simply could not be fixed, and the spec ended up being unmaintained. Next slide. If you look at the landscape, there are more and more sensors in different devices: your phone, even your laptops, have more and more sensors, and we have smart devices all around us as well. Many of these, especially what we call IoT devices, often just are sensors, or at least they have sensors in them. With new standards like Web Bluetooth we can actually talk to these devices, so it would be nice to have one API that looks the same whether the sensors are remote or on your device itself. We also saw that people use sensors from JavaScript elsewhere; for instance, there's this project called Johnny-Five that allows you to create IoT experiences in JavaScript. So in a lot of places we see sensors. Next slide. So the idea was basically to create a new, really good API that we really thought about, to try to solve all these issues, but also to focus on simplicity and familiarity, so that if you know how to work with one sensor, you can probably also work with another sensor. The API is designed for the web first and foremost, but we also want to keep other use cases in mind, like, as I said, Node.js, or other places where people are using JavaScript today. So we don't want to tie things directly into the DOM, like saying
it has to live on navigator or something, because that might not exist in Node.js. And of course we want to build this with security in mind. But most important, we want to make it really, really easy to expose new sensors quickly, because creating a whole new spec and having to write all these details again and again takes a lot of work, and you end up making a lot of mistakes. So the way we went about this was to create one spec that defines everything that's shared, and then you create derivative specs that just refer to that one and add only the things you care about for that specific sensor. That has been very important. Next slide. This is from the spec itself: the goal of the Generic Sensor API is to promote consistency across sensor APIs, enable advanced use cases thanks to performant low-level APIs, and increase the pace at which new sensors can be exposed to the web by simplifying the specification and implementation processes. I think that's really what we've done. Next slide. So this is the basic API. Usually in specs we use Web IDL, but I assumed that a lot of people are not that used to reading Web IDL, so I've tried to show it in TypeScript here. We have this abstract interface called Sensor, which inherits from EventTarget. It has a few properties: activated, whether the sensor is actually running; hasReading, whether it has a reading; and timestamp, which is a high-resolution timestamp. Then it has three different event handlers; you can attach them directly, like onreading equals something, but you can also add an event listener for reading, activate, or error. And then you have a start and a stop method. That's basically it. When you create such a sensor, you can give it options, as I'll show on the next slide; that's why we have a shared options dictionary here, SensorOptions, for the frequency. Next slide.
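This isn't on the slides, but as a rough sketch of the shape just described: a minimal mock of the Sensor interface (the class body and its synthetic single reading are illustrative assumptions, not the spec's actual implementation) could look something like this.

```javascript
// Minimal sketch of the described Sensor shape: an EventTarget with
// activated/hasReading/timestamp, start()/stop(), and "activate"/
// "reading"/"error" events. The synthetic reading is for illustration
// only; a real implementation talks to platform sensors.
class Sensor extends EventTarget {
  constructor(options = {}) {
    super();
    this.frequency = options.frequency ?? 60; // Hz, from SensorOptions
    this.activated = false;
    this.hasReading = false;
    this.timestamp = null;
  }
  start() {
    if (this.activated) return;
    this.activated = true;
    this.dispatchEvent(new Event('activate'));
    // A real implementation would poll the platform sensor at
    // `frequency`; here we just deliver one synthetic reading.
    this.timestamp = performance.now();
    this.hasReading = true;
    this.dispatchEvent(new Event('reading'));
  }
  stop() {
    this.activated = false;
    this.hasReading = false;
  }
}

// Usage mirrors the real API:
const sensor = new Sensor({ frequency: 10 });
sensor.addEventListener('reading', () => {
  console.log('reading at', sensor.timestamp);
});
sensor.start();
```

With the real API you would construct, say, an `Accelerometer` the same way and read its values inside the `reading` event listener.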
Question from the audience: "Could you go over that last part again?" Oh yeah, sure. So as I said, when you create a sensor instance, you can give it options, like the frequency; that's the SensorOptions you see here. It's not shown on this slide; it will be shown on the next slide, if you can change the slide. So here you see one example: the absolute and relative orientation sensors. If you look at one of them, you see that the constructor takes an options argument, which is just SensorOptions extended with a few extra things. This is an orientation sensor, so you can get a quaternion back, which is really useful for orientations, and if you need to populate a rotation matrix, because you're working with, say, CSS, you have this method called populateMatrix. So you see that basically the only thing this spec does is extend the base with a few methods. Next slide. You can group sensors into categories. We have the low-level sensors, like the accelerometer, the gyroscope, and the ambient light sensor. Then you have derived sensors, like the linear acceleration sensor, which is derived from accelerometer readings. And then there are sensors which actually fuse data from multiple sensors, such as the orientation sensor, and you can have different types of those as well. Next slide. The life cycle is very simple. You might wonder, why do you need to instantiate a sensor and then call start? Well, the whole idea is that you shouldn't be able to fingerprint the system depending on what sensors it has. This means that you can always create a sensor, whether that sensor exists on your system or not; you only figure out whether it works when you call start. And this also depends on the options you give the sensor. For instance, you say, well, I want this sensor,
but it needs to give me a frequency of 2000, and if that's not possible, it will just fail. So you have different states internally: idle, activating, and activated. When you construct the sensor, it's in the idle state. When you call start, it goes to activating, and maybe it fails and goes back to idle; if it doesn't fail, it goes to activated, and then if there's an error, or you call stop, it goes back to idle. Next slide. There are also a lot of security threats around these sensors. You could do location tracking, or eavesdropping: just having a cell phone next to a laptop, people have shown that you can use the accelerometer readings to figure out what someone is typing, because you can measure all the different vibrations, and you know that E is the most common letter, and after a while, using a dictionary, you can actually reconstruct exactly what people are typing. Of course there's also device fingerprinting and other ways to identify users. So we're trying to handle these threats. Next slide. Some of the ways we're doing this: we require a secure context, so HTTPS; it only works in a top-level browsing context, at least for now, though we might extend that if we find secure ways of doing so; if the page loses focus, the sensors are stopped, so we look at visibility states; and there's also integration with the Permissions API. The good thing is that this specification has actually been reviewed by security experts from Google, Intel, and the W3C Technical Advisory Group. Next slide. So let's look at the motion sensors. There are basically three common low-level sensors: the accelerometer, the gyroscope, and the magnetometer. The magnetometer is not really a motion sensor, but it's often used together with motion sensors, which is why I group it here. Next slide. The accelerometer measures acceleration, and it normally does that along three axes. It's what is called an inertial frame sensor, which means that if the device, say your phone, is in free fall, the acceleration in the falling direction will actually be zero. It also means that if the phone is lying on a table, there will be a reading in the upward direction of about the gravity, 9.8 meters per second squared. This is kind of interesting; it's why you can isolate the gravity from an accelerometer. Next slide. Accelerometers are usually not that useful by themselves, because the raw data is really hard to use, so people often build derived sensors, or do sensor fusion together with other sensors. For instance, you can get linear acceleration if you can isolate the gravity
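The gravity isolation being alluded to is commonly done with a simple exponential low-pass filter: gravity changes slowly, so a low-pass over the raw accelerometer readings converges toward the gravity component, and subtracting it leaves the linear acceleration. A sketch of that idea (the ALPHA value and the sample data are illustrative assumptions, not from the talk):

```javascript
// Gravity isolation via an exponential low-pass filter. The filter
// estimate tracks the slow-moving gravity component of the raw
// accelerometer reading; the remainder is linear acceleration.
const ALPHA = 0.8; // assumed smoothing factor (0..1)

function makeGravityFilter() {
  let gravity = { x: 0, y: 0, z: 0 };
  return function linearAcceleration(reading) {
    gravity = {
      x: ALPHA * gravity.x + (1 - ALPHA) * reading.x,
      y: ALPHA * gravity.y + (1 - ALPHA) * reading.y,
      z: ALPHA * gravity.z + (1 - ALPHA) * reading.z,
    };
    return {
      x: reading.x - gravity.x,
      y: reading.y - gravity.y,
      z: reading.z - gravity.z,
    };
  };
}

// A phone at rest on a table: the raw z reading stays at ~9.8 m/s².
const filter = makeGravityFilter();
let linear;
for (let i = 0; i < 200; i++) {
  linear = filter({ x: 0, y: 0, z: 9.8 });
}
// The gravity estimate has converged to ~9.8, so the linear
// acceleration approaches zero.
console.log(linear.z.toFixed(3)); // prints 0.000
```

This is essentially what a derived linear acceleration sensor does for you, which is why the spec exposes it directly instead of making every page reimplement the filtering.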