But not in a pervy way. Chris will be disappointed. Anyway, we're going to be talking about pointers and interactions, a topic which is very close to my heart. Sounds like a dating website, doesn't it? So I'm going to start by introducing the panel. We've got over here the lovely Patrick Lauke. I always get that name wrong. Have I got that name wrong? Patrick Lauke? That'll do. Jorik Tangelder at the end there. Jorik is the author of HammerJS, a library that I used very recently — it's wonderful for dealing with touch events, and now pointer events as well more recently. He builds mobile apps for a living and loves to experiment with new techniques. Patrick, I didn't tell you what you actually did. You used to work for Opera, didn't you, being a dev rel guy. He's now an accessibility consultant and a member of the Pointer Events Working Group and the W3C Touch Events Community Group. We have Pete Smart over here. Pete is very much UX focused. He's done a lot of incredible work on ViziCities recently with his partner — I was going to say "with partner", I didn't know what to say. Not partner. Rob Hawkes. Incredible 3D visualisation; you should absolutely check it out. He's also the author of 50 Problems in 50 Days, which describes my last 50 days organising this portion of the event. Finally, we have the wonderful Steve Workman. I've known Steve for an awful long time — an incredible web developer, and a very insightful young man, I find. Next, we're going to pass over to Rick Byers. Want to introduce me first? No, I don't need it. Rick Byers is from Google. He's a wonderful man. He's going to be talking about pointer interactions. Thank you very much, Rick.

Thanks. Let me just get my slides displaying here. Sorry about this. Okay. Yeah, we did it on purpose, sorry. All right. I'm here to talk about pointer interactions. To me, this is about how we evolve input on the web. First of all, even just looking backwards and saying: how do we catch up to native mobile? I think there are a lot of ways that input is better on native today that we really need to catch up on on the web. But also looking forward and saying: what are the things that are coming, and how can we prepare the web to make sure we're ready for all sorts of new ways of interacting with content and applications? I'm going to talk about this from my biased view on the Chrome team — the ways in which we classify the problems and the priorities we're thinking about — so that you all can tell me I'm stupid and that I should re-prioritise.

First and foremost — this probably comes as no surprise — our top priority on our team is performance. If your site doesn't respond well to input, it doesn't stick to your finger. If things don't respond immediately when you touch them, or even on a laptop if things are janky when you're scrolling, that's a real problem for user engagement. So this has got to be the top priority. But we've also got a big problem with richness. Today it's possible to build really rich interactive user interfaces, and frankly it's easier to do that on other platforms. We've got some problems on the web where certain types of UI are really hard to do well without re-implementing all sorts of browser features yourself. Last but not least, we've also got a problem with rationality. This is just the idea of the pit of success — the idea that the obvious thing should ideally be the correct thing most of the time.
You don't want a million different foot guns, where every time you try to do something that seems obvious you shoot yourself in the foot and something breaks. Like I said, I think we're not there on the web for any of these things yet. I think we've got to fix all of this. But what's really worse is that there are trade-offs between them, and we keep dancing around. For example, over the last couple of years, with all the focus on mobile, I think we've really put a lot of emphasis on performance, and I think we've lost some richness and rationality as a result by not focusing on them at the same time. So I'm going to talk about a big problem for each of these areas: performance, richness, and rationality.

The first one's been talked about to death, so I'm not going to spend too long on it, but touch latency is definitely too high. On a touch device, we're trying to present a physical illusion that you're manipulating a physical object. As soon as the latency goes up, so it's not sticking to your finger — or if it's variable latency, if you've got latency jank — it destroys that illusion, and that's a real problem. It really causes engagement to collapse. There have been a ton of improvements here over the last several years. Probably the biggest architectural change to the platform, even though it's not exposed to apps, is that most browsers now scroll on a completely separate thread, to insulate scrolling from whatever else is happening in the browser. There's been a ton of talk about the 300 millisecond click delay — we've talked that to death — but the one thing I'm most excited about here is that we now have a standard way of turning off the click delay on individual parts of the page without turning off anything else: you just turn off double-tap zoom. We've got a new CSS property called touch-action. That's shipping in Chrome 35, and Firefox is going to have it soon. touch-action does much more than this, but this is one of the things I'm excited that it lets you do.

We've continuously been making tooling better, and I think we still have a long way to go. On my team, we've been working, for example, on trying to accurately correlate input to painting. It's not good enough to just look at your timeline view and say, oh, I'm getting stuff at 60 frames a second, if that painting is happening a second behind when the input came in. We've done a lot of work to plumb latency tracking data down through Chrome, all the way from the input events coming in from the kernel down to when we tell the GPU, hey, we want to display this. I'm hoping that eventually we'll get that exposed in a more friendly way through DevTools. For now, it's there in Chrome's tracing if anyone wants to play with it.

In terms of richness, this is where I'm starting to focus a lot more. I think we've neglected this space for a long time. As we've built all these fast paths, we've taken control away from developers in the name of performance. Frankly, I think when we look at it really closely, it's not necessary. There are ways we can give developers control but still maintain high performance, or at least high performance by default. This comes back to the extensible web manifesto that I think has been mentioned before.
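For reference, a minimal sketch of the touch-action opt-out Rick describes a moment earlier — the element id here is hypothetical, and at the time of this panel it needed IE 10+ or the then-shipping Chrome release:

```ts
// "manipulation" keeps panning and pinch-zoom working but opts this element
// out of double-tap zoom, which is what lets the browser drop the 300ms
// click delay there without a library.
const buyButton = document.getElementById("buy-now"); // hypothetical id

if (buyButton) {
  // Same effect as the CSS rule:  #buy-now { touch-action: manipulation; }
  buyButton.style.touchAction = "manipulation";
}
```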
As a community, we're never going to evolve the web into the platform we want it to be — one that competes with all the other platforms — if we keep spending years focusing on these high-level features and trying to get the browsers to agree on them. Instead, I think we really need to focus on the core primitives: what's the kernel that all browsers are going to expose, so that we can enable jQuery and different frameworks and all of you to innovate like crazy and try new things, without having to wait for the long drawn-out standards process. What really bothers me here: scrolling is so critical to this experience, but if you want to customise it — there's a great library called iScroll, for example, that customises scrolling in various ways — you've got to reimplement yourself everything the browser was already doing, and you're prevented from doing it the way the browser does. You don't have access to this separate thread I was telling you about; everything's going to be blocked on your main thread, which is going to make it jankier. Depending on the browser, you may not have access to the precise physical pixel locations that the browser has, because on mobile devices one CSS pixel is more than one hardware pixel, and in many cases only the browser knows exactly where the input or the scroll offsets are. And you might not have velocity information as accurate as the browser's. I think we need to fix all these low-level primitives so that somebody like iScroll can go and build scrolling that's just as good as what the browser does, with some additional features.

Even more immediately than that, there's a whole bunch of really popular UI effects, like pull-to-refresh, where we've made these trade-offs in the browsers and said scrolling performance über alles — whoops. That's right, I guess that means I'm talking too long on this slide. Scrolling performance über alles means that we've taken control away from you. You can't say: I want to scroll, but now I want to switch to drawing my own little custom effect in place of the one the scroll normally has, and I want to keep that in lock step with the scroll that's happening, and I want you to tell me when the user lifts their finger so I can change that effect. Similarly, Microsoft has the snap points feature in IE, which is a great feature, but we should be talking about how we enable the web so that people can innovate on features like that without always having to come to the browser — without every new UI feature having to be added at the top level of the browser.

One of the things that really bothers me here: when we say scrolling happens on another thread, it's a problem for rationality, because you're not used to having to reason about multi-threaded behaviour in your UI. But it's also a problem for richness and the kinds of UIs you want to present. In a mobile app you never see checkerboarding, right? Why does a web page always have to have checkerboarding? Maybe for your app you might want to make a trade-off and say: you know what, I would rather have scroll jank than checkerboarding. Or: I would rather have perfect parallax, where things update exactly with the scroll position, than have the scroll run asynchronously to those things. I think it's been naive of us as browser vendors to say: we know the best default for all scenarios, we're going to trade off performance and richness for you, and not give you any say in the matter.
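To make the iScroll point concrete, here's a deliberately minimal sketch of main-thread custom scrolling with touch events. The element id is hypothetical, and fling physics, overscroll and velocity tracking — the hard parts Rick lists — are all omitted:

```ts
// Reimplementing scrolling by hand: everything runs on the main thread,
// so we get none of the browser's off-thread scrolling guarantees.
const pane = document.getElementById("scroll-pane") as HTMLElement;
let startY = 0;
let offsetY = 0;

pane.addEventListener("touchstart", (e) => {
  startY = e.touches[0].clientY - offsetY;
  e.preventDefault(); // take over from native scrolling entirely
});

pane.addEventListener("touchmove", (e) => {
  offsetY = e.touches[0].clientY - startY;
  // Any jank on the main thread is now visible scroll jank.
  requestAnimationFrame(() => {
    pane.style.transform = `translateY(${offsetY}px)`;
  });
  e.preventDefault();
});
```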
I think we've really got to start giving control of that to developers, including how exactly touch should behave while scrolling. We're making a big change to this in Chrome 35 — I won't go into the details now, but you can read all about it. As soon as you start scrolling, we stop sending you events entirely, and that's terrible for richness: it means you can't build UIs like pull-to-refresh.

Okay, now in terms of rationality. One of the topics near and dear to my heart that people bring up a lot is: why should we have to write different UIs for different types of input? We should be able to make it easy to have a single set of APIs that works for all sorts of different types of input. IE has an API called pointer events that we've heard about. We've been working on standardising it for a long time, and we should be talking about whether this is the API that we want the entire web to move to. And in doing things like that, when we're exposing these low-level input APIs to web developers, how do we make sure they're thinking at a high enough level of abstraction that their sites are still accessible — that they're also working for screen readers and all sorts of assistive technology?

And then, looking further out, there are all these exciting types of input on the horizon. Even old-school stuff like directional pads on TVs the web doesn't handle very well. But Microsoft has Perceptive Pixel, these gigantic touch screens where you can have multiple people touching them at once and track 100 different touch points — how should the web interact with that? Or what if you've got multiple users each with their own Wii remote? Voice input, depth cameras, Google Glass heads-up displays, touch screens like the Galaxy S4 that can tell when your finger is above the screen, haptics — what if your touch screen can start to give you physical feedback? What worries me is that if we block all of these new technologies on "let's wait until we have a consensus, let's wait until we can all agree in standards bodies how this should be exposed to the web", we've got a chicken-and-egg problem. We need to be talking about what's next, and how we prepare the web so that we can innovate without having to have consensus first. Part of this is just discussion — that's what Edge is all about, and I'm looking forward to this discussion — but there are also plenty of places online where we should be discussing this. We've just spun up the Touch Events Community Group, which is a group within the W3C for people that have questions or issues with touch events and want to see touch events get improved. That's it.

Wonderful. Thank you very much. I thought I was going to hit you over the head with that bottle. I'm really sorry for that introduction. I should explain myself: the reason I didn't give you a very good introduction is because I wrote it by hand, and when I got up here I couldn't read it. Rick is an engineer working on touch screens and pointer input, a member of the Pointer Events Working Group and the Touch Events Community Group. Just to clear that one up. So first up we have Andrew Betts with a question. Is Andrew there? Web apps that use touch gestures can have problems with iframes swallowing touch events. Is the web developer hamstrung by their inability to exert complete control over user interaction in ways that native app developers can?
I guess we do give over a lot of power — to third-party websites in iframes, and to the browser. What do people think about giving over that control? Should developers have more control? Start with maybe Patrick.

It's a difficult one. Developers will always want more control — be it with CSS stuff, "I want to control exactly how my users will experience the content I'm creating" — and the same with inputs as well. So output and input. There is an argument to be made that at certain points we should limit what the developer can do to mess up the conventions of the platform, for instance. You don't want every single website to behave in a slightly different way, with the user having to relearn it, just because the developer thought "ah, this is really cool, I can do scroll-jacking" — but now extending that to touch gestures and everything else as well. There should be some kind of convention. But on the other hand — the topic we touched on at the start (we have a lot of these cheap laughs, I promise) — we do want to innovate. So we can't at the same time say — I'll say "we", even though I'm not in that fold any more — that we as browser developers know what's the best interaction model. Developers will want to experiment with new things. So it's trying to strike the balance. But there will be situations, probably for security reasons, where you don't want websites that completely take over the interface and start showing things where the user thinks they're doing one thing but it's actually doing something else in the background. It's a tough balancing act to make, I would say.

Jorik, you're developing Hammer. Do you want to give more control to developers? How does that work?

I would like to see a lower-level API, so that when a new device comes up, like the Myo or the Wii remotes, I just get that information in my browser. Then I can build my own gestures, and I don't have to wait four years or something until it arrives in the browser for everyone.

I'm obviously a huge fan of giving developers more control, but the scenario mentioned in this question was about iframes, and we actually — I think it's fixed in all our stable releases now, and has been for quite a long while — the touch events API would tell you what other fingers were currently down. So it was actually possible for an ad or something running in an iframe, if you touched on it, to receive information about where your other fingers were, and that was a potential privacy leak and a security concern. And security is the thing — I'm saying we shouldn't focus as much on performance, that we should maintain our performance but give the developer more control — but security trumps all of that.

I suppose if the users don't have confidence, you lose the unique strength of the web. Well, you took the opinion there that with gestures we should be giving developers more control — you said that in the talk as well, you want to be giving lower-level APIs. What are your thoughts in terms of UX around web design and web development? Is there a benefit to be had from gestures which are universal across an operating system, across a device? Is there a reason why developers shouldn't go and fiddle around with interactions?

Interesting. The conversations that I have with developers often go like this: guys, it would be great if we could come up with this really innovative new feature. Why don't we try scroll-jacking, for example?
Why don't we go into that realm and see what's possible with it, how we can surprise the user and get them to reconsider the experience they're looking at? The common response that I get is to arrive at the natural default of the browser: we don't want to confuse people. After many arguments, I would probably side with the developers on this particular front — although I want to innovate, I want to create experiences which people find themselves immersed in, surprised by and excited by, I think there must be conventions which exist across different platforms and across different browsers, and which are therefore expected. Because not everyone wants to be surprised; not everyone wants that kind of immersive experience. Most importantly, people don't want to be confused, and I think that is the bedrock, really.

I think one of the reasons developers respond in that way is that all too often they can't do the one little thing they want to do without re-implementing the fling physics or something. To me, this represents a fundamental lack of layering in our platform. On many other platforms — I know on Android, for example — you can replace little classes or hook into the process without having to re-implement everything yourself, so you get the native look and feel but can also customise it slightly. I think we've failed at that on the web.

I think one good example where there have potentially been some improvements — just from some little personal experiments, maybe — is... I don't know if you've experienced this, but you're scrolling through a web page on your phone and you suddenly get to a full-screen Google map, and suddenly you're no longer scrolling the page, you're inside the map. That's an area we've tried to innovate in: something which allows you to declare when you would like to interact with the map, and when you are still scrolling. I think there should be space to allow developers and designers to innovate in those areas and try to break away from the iframe suction of doom that you fall into. But there's got to be a limit; I think convention is also really important.

You're quite a good mixed bag of dev and design. What are your thoughts?

With all our touch events going into iframes, one of my concerns is: if I were to make, say, a web component, as talked about today — if I was that web component's developer, I would obviously want all the touches to be sucked into what I was doing, because I'd say, OK, I really know what this component needs to do, so I should have control at this point; why are you trying to do the rest of the thing? And the more this goes on — with web components and with other areas, advertising of course, iframes; maps is a great example — the more important it's going to be to actually have that kind of override. So that when you as a developer do know better — and quite often you do know better — you can override it, and you can actually make a difference to the web application and improve the user experience.
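A sketch of the map hand-off Pete describes, assuming the Google Maps JavaScript API and an existing map instance; the mode-switch triggers (a tap, a snap point, an overlay button) are left to the page:

```ts
declare const map: any; // an existing google.maps.Map (types omitted for brevity)

function releaseMapToPage() {
  // While the user is just scrolling past: don't let the map eat gestures.
  map.setOptions({ draggable: false, scrollwheel: false });
}

function giveMapTheGestures() {
  // The user explicitly entered the map: hand the gestures back to it.
  map.setOptions({ draggable: true, scrollwheel: true });
}
```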
Andrew, if you have a mic — has anyone at the front row got any thoughts on this? Because you look like you were doing something else there, Andrew. Pay attention.

It's kind of touching on your point: a lot of the time, the interface the developer is building knows best. To give a tangible example, like the Google Maps one: we actually did the exact same thing. There's a responsive website, there's a big old map, but once you went into one layer and snapped downwards, we actually changed the Google Map so it's no longer draggable — the interface gets changed, it's now double-tap to zoom. But you can only really do that at the developer level. I don't think you can really do that automatically in the browser — there are too many assumptions, because there could be other pages where I don't want that to happen, for whatever reason; maybe we'd want an immersive experience. There's a need for the developer to be able to make decisions about that.

Absolutely — you have to design around what you're doing. I did the Met Office Weather Observations website, and the mobile view for that. Most of that is a big Google Map, and the really important thing on the different browser sizes was actually giving you space for your fingers down the side of the map, so that you could still interact with the map but could also still get out — because otherwise you're literally stuck.

Next question: we've got Andres Brovins at the front here — a nice simple microphone there. This is a question about gestures. Is there a case for custom gestures that users would be unfamiliar with, rather than standardising a set of gestures that are semantically well understood? Your library is a lot about gestures — what do you think?

I guess you should be able to write custom gestures, but it also makes sense to just use the system default gestures, because the user is expecting that a swipe acts the same on this page as on the other page. But if you want to write a custom gesture — a swipe with... I don't know — you should still be able to handle those things.

It's really hard writing gestures; it's really complex. I've not done it so much with touch, but definitely with things like Kinect — trying to understand what someone's doing and how it follows through. It's especially complicated to do, maybe not for the uninitiated. Is that your experience?
It can be complex. What we often find is that the UX is actually harder than the engineering. We actually had a bunch of touch gestures when we first introduced the Chromebook Pixel, and we put a bunch of work into the engineering, but users didn't know about them: they were hard to discover, it was hard to train people to use them, and ultimately we said we should stick with the simple things. That said, I think it's essential. We can't innovate — we're never going to come up with the new interaction modes — unless we give people the power. There's going to be some next viral app with some cool thing you do in your game to make it do something different, and that's going to become the standard.

It's an interesting point, but are developers necessarily the best people to make decisions about gestures? Generally they're probably not — I don't mean to offend anyone, and I am a developer myself — but I think it's very difficult to write gestures which are well understood across the system.

It's definitely more of a UX question, I would say, because from the technology point of view — not pointer events as such, but specifically the Microsoft implementation — there is a separate part that Microsoft has in IE, not specced at the W3C, which is all about gestures: how to actually describe programmatically what a gesture is, picking out, say, two fingers that you've put on the screen and then following any changes between them, the angle, etc. So technically I think it's not an issue: we can, and we do — with libraries like Hammer — write our own gesture code. It really is more a case of: are there standards? I mean, Luke W has documented a lot of the standard gestures you get on a variety of platforms. There are similarities, but also quite fundamental differences in some cases if you go from iOS to Android, for instance. Pinch-zoom, that kind of stuff — nowadays we know about it, it's become common knowledge, but when it was first introduced nobody had the understanding that it was there. So UX-wise there's the argument of: how do you teach your users that there are gestures? How do you hint at gestures without doing a big "before you can use the app, here is a ten-minute tutorial on how you move, how you shoot, how you go into your inventory" kind of thing? It's more of a softer issue, I would say. From the technical point of view, gestures are here: we can create gestures, we can hijack pointers, we can hijack finger movements. But "should we?" is usually the more fundamental question.

Steve, I know you're quite involved in the London web standards meetups and groups. Do you think we need more standardisation in this whole world of gestures? There's an awful lot of proprietary technology in this — is there maybe room for more collaboration?

I think so, definitely. There's a lot of work going on in the different standards organisations, in different routes through this. Obviously Microsoft's implementation is one; another is from Apple, who we haven't talked about enough yet today, who are doing this spec called — I think it's... we talked about it last night — IndieUI, which is not a gesture thing specifically but relates to system-wide commands. So if you were to do an undo, it would trigger an undo kind of action — a named undo action — throughout the web as well, and that kind of thing then also expands into gestures. So Apple is going a completely different way about this from Microsoft, and in the end these people need to talk to each other and we need to standardise something like this, otherwise we're going to get two completely different implementations that are probably going to be incompatible with each other.
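As a concrete version of Patrick's "we can, and we do, write our own gesture code with libraries like Hammer": a sketch against the Hammer.js v1-era API of the time, with a stub declaration so it stands alone — normally Hammer arrives via a script tag:

```ts
// Minimal stand-in declaration for the Hammer.js v1 global.
declare function Hammer(el: Element): {
  on(gesture: string, handler: (ev: any) => void): void;
};

const area = document.getElementById("toucharea")!; // hypothetical element

// Built-in recognisers give you the conventional gestures...
Hammer(area).on("swipe", (ev) => console.log("swiped", ev.gesture.direction));

// ...while the raw drag stream is what you'd build a genuinely custom
// gesture on top of.
Hammer(area).on("drag", (ev) =>
  console.log(ev.gesture.deltaX, ev.gesture.deltaY)
);
```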
But is there a case for custom gestures? I would say definitely yes — especially when you stop considering traditional point-and-click interfaces and touch interfaces, and start thinking about the new input devices coming onto the market at the moment: things like the Myo, which reads the electrical signals in your arm, or things like the Leap Motion, where you're interacting in 3D space. I think these give us fantastic opportunities to start exploring custom gestures. Some of the work I've been doing with the very talented Rob Hawkes — who should probably be sat here rather than me — is looking at how to do gestures in 3D space, and you can't simply re-create touch gestures in those spaces. We're trying to create 3D cities, to recreate real life: how you extrapolate out a city, how you break it down and see its different parts. So we're talking about gestures where you're able to pull things apart — like being able to pull apart a watch, that famous exploded diagram we all see. I think there's a real case, especially as we start to look at new forms of input, for gestures which are more suited to them, rather than replicating the two-dimensional gestures we have now.

Give developers power; standardise more; talk more.

I'm Shabba — don't have a second name. My question is about browsing the web on devices like Google Glass or the Moto 360, the watch. It's a very different experience, as they don't have the traditional methods of input. How do we develop or adapt our web pages to better support such devices? Would a device-information API be a good idea?

You were talking about new devices.

I was talking about new devices. I'd come at that question from a high level — I'm sure these guys can talk about the technicalities — but I would probably start with what for me is the most important thing: the user need, the task that that particular person is trying to achieve on the device they're using. And we might not even know the task, for example — so do we need to be agnostic? I think we also need to consider, with Glass for example, that the display for information is very, very small, and for input we're looking at non-traditional forms — we're talking about voice, in that particular case. So if we're talking about browsing the web on something like Google Glass, I think those three things come into play.

I don't know about the UX side — I need someone like Pete to tell me what the UX should be — but what worries me is: how do we make sure people can experiment with these things? We can't wait for some "Glass events" W3C specification. Maybe we can do this with low-level APIs: if we can address the security issues, maybe we can give some limited raw USB access to pages. Or maybe we need something like IndieUI events — just a higher-level semantic, to be able to say someone is manipulating this object by rotating it and scaling it. It can be completely different from how they do it on a touch screen, and the browser, and even the page, doesn't need to know. There's also this chicken-and-egg problem: you've got to design for compatibility. Most websites out there are not going to be designed
for your new watch, so you've got to come up with something that does a really good job for existing content but still allows some incremental extension, so you can do a little bit better on sites that really are designed for your special type of input.

I agree. We've obviously talked about this over the last day or so, and on the pointer events mailing list while we're working on the standard. On the one hand, it's really good for a developer that we're now working on these low-level APIs — I want access to: is it a finger or a stylus, or is it three fingers, and what are the X and Y coordinates? It's really great that I can do that as a developer. But in most cases, if I'm developing an application, I don't really care how the user is touching the touch screen — with a nose or a finger or any other body part or any other device, or if they're using voice. What I really want is the intent: the user intends to activate this button, the user intends to get more information, or to manipulate something. So even though pointer events are a step in the right direction, I think they might already be a little step too late, because now we already have touch and stylus and mouse and everything else, and there are all these new interfaces. Really, we should be looking more at high-level stuff. IndieUI is probably a good step in the right direction. It's been a bit sidelined because it falls under the banner of accessibility — "so it's just for blind people and stuff, so we're not going to care about it". I would say: have a look at the IndieUI stuff. It actually abstracts a lot of these things. In most cases — unless you're trying to create something with a custom gesture, or something that really takes advantage of what can only be done with, say, a Leap Motion — if you really just care that the user wants to open this document, or manipulate this thing, or re-sort this table, you want the intent; you don't want the actual raw bits and bobs of which key code was pressed. Because you're just going to end up in a situation later on, as a new device comes out, where you have to reinvent or re-implement a whole new model — which is what we've seen with touch events. It was a great idea, but all of a sudden something new comes along, and — this was a really wise decision from Microsoft, I would say — instead of saying "OK, we're going to have stylus touch and whatever-else touch, Kinect touch", they decided to abstract at least anything that's pointer-like. But it should really go a step further: it should have included keyboard, in my opinion, and just be more device-agnostic in general. IndieUI is probably not perfect, but it does move towards that more idealised goal of looking at intent rather than raw gubbins.

So we've got a question — I think Chris had his hand up.

Simulation makes a lot of sense as well. For example, I've written a few things for Leap Motion, and instead of doing my own handlers I just fired a click event. So if somebody wants to build a new UI that takes this input, all they have to do is listen for click events — and click events are great because they're also keyboard events. Instead of reinventing and putting more and more event handlers on our interfaces for all the possible things, just firing or generating an event that is already listened for is a simpler way in than having another library around it.

Absolutely. Focus, blur and click are probably the ones that, if you want to do something today that will still work in a few years' time with whatever other device is out there, are high-level enough and abstracted enough from the actual type of input that they will keep working.
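Chris's Leap Motion trick, sketched — the gesture recognition itself is assumed; only the dispatch of the synthetic click is shown:

```ts
// Translate a novel input (Leap Motion, Kinect, whatever) into a plain
// click, so every existing click and keyboard handler keeps working.
function activate(target: Element): void {
  target.dispatchEvent(
    new MouseEvent("click", { bubbles: true, cancelable: true })
  );
}

// e.g. once your device code decides the user "pushed" at screen (x, y):
// const hit = document.elementFromPoint(x, y);
// if (hit) activate(hit);
```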
The reason that things like touch events — and even pointer events — have to fire mouse compatibility events, mouseover, mouseout and everything else, is mainly because I think we shot ourselves in the foot back when we started inventing these very device-specific event handlers. That's why now, just to make sure the web as it has been written — the 99% of the web that's already out there — actually works on any new device, any new input needs to fire these compatibility events at the UA level. Which is a shame, and we don't want to end up perpetuating it, so that in five years' time, when we're using our Minority Report, wave-until-your-arms-fall-off kind of interactions, they suddenly have to emulate pointer events, which then need to emulate mouse events, just to keep cascading back to the old technologies.

Just quick questions and quick comments — sorry for the run. It's kind of a question rather than a comment. I might be a caveman, but pointer events and touch events, I get: they apply to PCs and smartphones that have browsers in them, quite often made by the companies that put the browser on the device. This watch can get a connection to the internet, but there are no standards on that device. And the Leap Motion — to get it to talk to a web page requires something in the middle; to get it to talk to anything on the web requires something in the middle. I might be missing this, but what devices are actually talking web-compatible languages? If we're talking about these pointer events, or any kind of standardised event, for these Minority Report-type interactions — where are the devices? TVs aren't getting there; TVs aren't going to fire pointer events. Kick me if I'm dominating this.

We'll get on to this later, I think. We'll go to a new question — we've got Matt Andrews; there's a very similar question coming up to yours. Sorry, laughing at him, not with him. This one is from Patrick Lauke: touch events are a very simple mechanism for touch, whereas pointer events are a far more detailed abstraction. Should a browser vendor ideally implement one or the other, offer both separately, or some kind of combination? As it's my own question, I'm not going to answer it. What do you think?

There must be a problem for browser vendors, because you've got to support touch events — that's what everyone knows, and it ties so easily into what already happens with clicks — and we can't stop supporting something we've already supported. And it's not even as easy as saying, well, on this page we'll decide one or the other, because you've got to make sure there's a transition path for libraries that operate within an existing document. Google Maps, for example: if we said the page is either getting pointer events or touch events, then Google Maps could never use pointer events — the whole page would be broken. So there's this difficult transition path. I think Firefox is planning on supporting both touch events and pointer events; I'm really optimistic about that, and I'm hopeful that at some point we'll do that in Chrome as well. We really want to make sure that we only support APIs that really last — lasting APIs that eventually all browsers are going to support.
And as long as the jury's still out on whether or not all browsers are going to support it, we want to be careful to make sure we're not introducing something new that's largely redundant and ends up not standing the test of time. So we're really looking for feedback from developers, to make sure it's something developers really want and intend to use for a long time — and then we'll support both. But we can never get rid of the stuff we've supported previously.

That's probably exactly my problem. So — yell.com, big website — if we break something, it's going to break hard. If I were to try to convince my boss and say, OK, I want to implement pointer interactions, he's going to say: that sounds great — which browsers is it supported on? And I'll go: IE 10 and 11... and I'll probably stop there, because right now that's all that supports it. Chrome will get there, which is great; Firefox will get there, which is great; and Apple — Safari, which is, let's say, about 25% of our user base — probably isn't going to implement this spec. So if I'm going to spend what would take me a couple of weeks completely tearing out our interaction model and putting pointer events in instead — pointer events with all the polyfills needed to make it work on Safari — that's a massive amount of engineering effort for something which probably isn't going to give me any business benefit whatsoever, except that it might get rid of the 300 millisecond click delay, and that's going away by other means anyway. So from an implementation standpoint, and a purely business standpoint, actually selling pointer events is quite difficult — unless you're making a Windows 8 app, in which case go for it. And if you are making Windows 8 apps, speak to me at the end of the event.

You're dealing with hand gestures — you must be dealing with this problem all the time. When you implemented pointer events, how was having to deal with both at the same time?

Yeah, it's really... when you use touch, it sends touch and mouse events next to each other, and sometimes in a different order in other browsers, so you can't really find out whether it's a touch or a mouse. And then you have pointer events, but they only work in the latest Internet Explorer versions. It's hard.

I was looking at the triage of all the different events that fire when a touch happens on the screen — I mean, you must live this stuff. It has to be in there because of legacy behaviours and so forth, and it's extraordinarily complex. How do you work around that sort of complexity? Whose head is it all in?

First of all, we've got to do better at documenting it — I think there's a question coming up about this. The touch events standard didn't really document any of this stuff; it retroactively tried to capture the majority of what the existing implementations had done. And you know, when I realised two years ago that the touch events standard didn't have anyone from Google on it, I realised there was a problem there: it was all retroactively trying to document what had been done, whereas the real point of standards is to get the rationality in there from the start. So hopefully we're doing a better job with pointer events: we've defined it pretty precisely, and then Patrick came along and said, let me give you an example and show the exact things you should expect. I think that's the way forward.
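The dual-path reality Steve and Jorik describe looks roughly like this — the element id and handler are hypothetical, and IE 10 additionally needed the vendor-prefixed MSPointerDown:

```ts
const widget = document.getElementById("widget")!;

function onDown(e: Event): void {
  // start the interaction
}

if (window.PointerEvent) {
  // One model for mouse, touch and stylus.
  widget.addEventListener("pointerdown", onDown);
} else if ("ontouchstart" in window) {
  widget.addEventListener("touchstart", onDown);
  widget.addEventListener("mousedown", onDown); // touch laptops fire both
} else {
  widget.addEventListener("mousedown", onDown);
}
```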
We've got to do some of this stuff for compatibility, but hopefully we can rely on layers on top to make it simpler. So hopefully your UI framework will say: on this framework, you've just got to deal with the events it generates, and you don't need to worry about different browsers and their old compatibility modes and what the differences are between IE and Safari. Hopefully we can smooth a lot of this over with frameworks.

I think a lot of the complexity we get nowadays comes from the fact that we made bad design decisions in the past — mouseover, mouseout and everything else, as mentioned already — which is the reason touch events need to fire mouse compatibility events. We as developers were told years ago that to do a hover effect you just do an onmouseover, and we've all been taking that and cargo-culting it around. Now that content is there, it's set in stone, and these new devices need to work with it. If we could start fresh we'd change it, but the reality is we're living with that legacy content while moving towards a multi-input world. I mean, already now we've got devices — a few years ago it was unheard of to think you might have a laptop that also has a touch screen; touch surely meant it was a mobile, or then a tablet. Now the fact that you get a touch event means there is a touch screen; it doesn't say anything further. And that's the fire alarm test —

There is a fire alarm test, which we couldn't avoid today.

[Announcement: Attention. The public address and fire alarm systems are about to be tested.]

Okay, do we have your attention?

[Announcement: ...will sound first, followed by the evacuation message.]

Right, okay. Should we just run around and panic?

[Announcement: Do not take any further action.]

No action. Why? If a fire breaks out while there is a test... no one start a fire. We didn't start the fire — it was always burning.

[Announcement: Attention. An incident has been reported within the building. I repeat...]

It hasn't been! Right, it's like Space Invaders. This is good, though.

[Announcement: Attention please, we have an emergency... an emergency...]

Double-plus good. It's not just an incident any more — this is escalating.

[Announcement: The test is now complete.]

Yes! That last bit at the end, Martin — that last bit at the end was brilliant: "if you have problems hearing this, please alert someone". Accessibility — we'll talk about that next. I can't remember where I was. We are in a multi-input world.

If you can't remember, let's just go to a new topic. Danny Croft, do you have a topic? Because we have no idea where we are now. Are pointer events over-complicating touch interactions?

No.

All right. I'd like to echo: no.

We've got seven minutes of this, guys.

No, no — I'm aware of that. And it was probably quite a good idea to really, really tell the people who run the fire alarms: please don't do that. Anyway. But I mean, are they over-complicating? There is an argument, from a developer's point of view — playing devil's advocate here, obviously, not as a real developer — that it is quite a complex thing. With touch events we kind of papered over the cracks: we needed touch, we kind of got it in there. With pointer events, I think we took a step back and looked at what all the different types of interactions are that could realistically happen on a device. But are we making too much of it, though? Because it's all good on paper, this abstraction, but is it too complicated an abstraction?

I think if you think of devices strictly, like you've got
tablets and you've got laptops, then maybe you could make an argument that pointer events are over-complicating it. But Microsoft believed those aren't the only two kinds of devices we're going to have — and we believe that at Google as well — and that we need to be prepared for devices that are hybrids, where there's a continuum between them. And the complexity of having touch events and mouse events on those kinds of devices is really high. You probably don't realise the extent to which touch events implicitly capture to where you start touching, while mouse events don't capture at all, and the implication that has when you remove a DOM element that's in your event chain somewhere — there are very subtle bugs that result from these differences. Pointer events unifies it all and says: there's one model; you don't need to worry about any of these old things, just target this new model.

Absolutely. The complication comes when you try to add things on top of pointer events. If, say, we look at TVs, trying to add the D-pad into it — which it currently doesn't do — then you're trying to fire two models, and that's where you get into much more complicated stuff. If you're still thinking about the polyfills and everything, HammerJS solves pretty much all of that for you. It's just when you're adding more and more things on top — and, as we've touched on already, if you're trying to add voice on top as well — you're trying to have a UI that reacts to probably far too many things, and that's when it gets overly complicated.

Well, I think the nice thing with pointer events is: no, it doesn't over-complicate — it actually simplifies things in most situations. Because already with touch events, if you wanted to separate things properly, you'd have to listen for the normal mouse events, which work in a certain way, and then also listen for touch events, which work in a slightly different way — because Apple couldn't be arsed to just do something a bit standardised and had to invent their own little thing, then not standardise it, and then threaten people with patents about it. And... yeah, okay, anybody want to follow that? Whereas with pointer events, a kind of sanity returns: it extends mouse events, so any code you already wrote for mouse just works in 90% of cases. And again, as a developer, if you don't really care how the user activated a button — whether it's with a finger or a stylus or a mouse, or, on an Xbox One, whether they used the joypad or the Kinect kind of thing or even voice — I believe it would just fire the same event that says this button was activated. So it unifies it. However, if you want to know explicitly that this was caused by a finger or a stylus or a mouse, there's a way of reading that attribute off the event that's passed to you — that's what's been extended. So it gives you the best of both worlds: you can write, to a certain extent, completely input-agnostic code and just forget about it — and hopefully, if new types of devices come along that also use pointer events, they will just work, rather than "I've only got mouse and touch, now I need to add X". And if you do want to do something very specific, it still allows you to do that. So I think it's a good compromise. It's not perfect — as I said, keyboard, for instance, you still have to handle separately, or just go for the high-level events again, focus, blur and click — but it's at least a step in the right direction, and it's more sane than inventing something similar to touch events but now just for stylus.
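Patrick's "best of both worlds" point, sketched — PointerEvent extends MouseEvent, so the coordinates your mouse code used keep working, and pointerType is there only when you genuinely need the device; the element and helper below are made up:

```ts
function movePlayhead(x: number, y: number): void {
  // position something at (x, y) -- the same code the mouse version used
}

document.getElementById("scrubber")!.addEventListener("pointerdown", (e) => {
  movePlayhead(e.clientX, e.clientY); // inherited from MouseEvent

  if (e.pointerType === "touch") {
    // fingers are less precise than a mouse or stylus; widen the target here
  }
});
```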
So we have a question from the floor. Apple's touch API has a really, really nice property: the DOM element that received the touchstart will always see the touchend event. Pointer events seem to have regressed to the previous mousedown/mouseup situation, where it's very easy to miss the mouse-up if the pointer ends up outside the element. Is there a plan to have an easier, less complicated way for pointer events to catch the up event?

First of all, the property you're talking about in touch events is subtle — as you alluded to, I think the way you worded it is actually incorrect. It's not true that the element that received the touchstart will receive the touchend. The element that you touched on receives the touchstart, but you might have an ancestor that's actually listening with the event handler and receives the event — and then, if the DOM tree underneath that ancestor gets moved, the ancestor will never see the touchend, and it'll be surprised. I've seen this bug in practice, and people are really surprised by it. It's one of the disadvantages of this implicit capture model: the programmer hasn't told the browser to capture the events; the browser just says, well, you started touching here, so clearly that's the element that wants it — even though it's really your handler further up that cares about the event stream. So pointer events has an explicit API called setPointerCapture that lets you say: if what you want is to track the finger no matter what's underneath it, you call setPointerCapture and tell it, here's the element that I want to receive all the events for as long as this finger is down. So it is one more step, but at least it's explicit, and it builds on top of the exact same model we're used to with mouse. You still get pointerenter and pointerleave events, so you can still tell when the user dragged their finger off — you just have to remember to watch for leaving as well as ending.
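Rick's explicit-capture pattern, sketched with a hypothetical slider thumb:

```ts
// setPointerCapture routes every further event for that pointer to the
// element you name, even if the finger wanders off it or the DOM
// underneath changes -- the explicit version of touch events' implicit
// capture.
const thumb = document.getElementById("slider-thumb")!;

thumb.addEventListener("pointerdown", (e) => {
  thumb.setPointerCapture(e.pointerId); // "send me everything for this finger"
});

thumb.addEventListener("pointermove", (e) => {
  // Delivered here while captured, wherever the pointer actually is.
});

thumb.addEventListener("pointerup", (e) => {
  // Capture is released automatically on pointerup; releasePointerCapture
  // exists if you ever need to let go early.
});
```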
I've got one question — this is from Rooney. You guys have mentioned a couple of these things, so it's a sort of overall question about what might become mainstream. D-pads, TV screens and multiple users interacting on a single screen are all scenarios we currently consider non-traditional. Which of these will become mainstream most quickly, and why do you think that? We always talk about these new devices, these modern Minority Report devices, but they haven't got great market share. Which ones do you think could become mainstream?

If I knew, I think our standards work would be so much easier, wouldn't it? Awkward. I'm allowed to say that now — I don't work for a browser any more.

But it's because we don't know that we have to rely on the community to innovate and experiment and try new things, and see what becomes popular. I really want massive touch screens to be popular; I think there's a real possibility there. When you have 80-inch touch screens, that's really cool — the collaborative web experience.

I mean, Leap Motion — what do you think about Leap Motion? Do you think it will ever become mainstream?

The problem I have with Leap Motion is that it doesn't really solve an issue. If I'm ten centimetres away from the screen, I can just touch it. But with a Kinect you're ten metres away — you're solving a problem.

We've got some waves. I'm going to debate the validity of the Leap Motion: I think it's got a lot of great things that you can't do with touch — depth, for example, and being able to zoom in and out of things — which I think are unique to that particular type of input. What else would you like me to say on that matter?

What other devices do you think can come out? Is there anything which you think is genuinely going to become mainstream? Maybe voice? That's kind of mainstream on native devices.

Voice is getting there, but where voice is actually getting somewhere is the Google Now stuff. If you've seen the Android Wear stuff this week, and if you've ever played with Google Glass, the voice commands are by far the best part of it. Now that's starting to make its way onto native Android hardware through the whole "OK Google" thing — that's starting to work really well — and with the Xbox One as well. It's getting there; it's not quite right yet, but it will get there. So voice input is going to start being important on the web, using the same kinds of APIs underneath. But do you really want to talk to your computer? It's more of a user experience thing.

So, with regards to... actually, I was going to try to interject with what Remy brought up earlier, about where these kinds of devices — the ones that require custom gestures and things like that — actually are. I'm in quite a unique position, because I know exactly where they are. For those of you who aren't aware: it's basically any digital display you see. There are adverts out on the street, on the underground, things like that — that's an industry, digital out-of-home, and I'm experienced with it. There's a huge amount of money going into that, and it's actually a symptom of how capable Chrome is. Chrome — and to a lesser extent the other browsers, but for some reason it's Chrome that's managed to capture it — is the platform that's driving a lot of that. A lot of those displays, if they're interactive at all, are interactive with new gestures — that's where things like HammerJS come in. As a tangible example: when you have a device that you're swiping left to right on — if it's the display on your phone, or any small device, you swipe left to right with one finger; if you have a large touch screen and you walk up to it, you swipe with two fingers. We've actually noticed that people do this — they don't realise they're doing it, but they do, right there, because it's out in the world. And if you're ever doing things with installations and things like kiosks — you have them a lot at trade shows, where you have digital displays — they have games, they're capturing weird interaction events, and they're all powered by the browser, and by the advertising industry. That's today; that's not a weird thing that's coming. I'm not kidding — every single month we have things like that going on, and we're building these things.

We're talking kiosks, though, and installations. That's not mainstream, that's not consumer technology in the web sense, really. I mean, we're using it as consumers, definitely.

That's the thing — the technology's moved into this non-web scenario. But if you take, for example, the cross-track TV ad network — the visual displays you get on the underground — some of those now are actually wifi-enabled: with your phone, you can actually get out and, through the web medium, interact with the display. You're using web technologies, but you're not "on the web" — yet you're now passively interacting with it, using gestures on your phone to control a big display.
With that, I think there's an awful lot to learn — in terms of, we should perhaps talk a lot more.

We should talk a lot more. Let's take over the next session. I think it's really good, actually, that at this point there's now a lot of collaboration happening — in the touch events groups and so forth — and there's a lot to learn. I don't think anyone really knows the answer to a lot of the questions we're asking; some of these we're going to figure out as new devices become mainstream one day.

Never!

So I want to just thank my panel, and thank all of you for listening. Thank you.