Hi, my name is Emily Roberts and I am a Developer Advocate for Chrome OS. Today, I'm going to be discussing input and why it matters to you as an app developer. There is a whole world of device form factors out there: foldables, rollables, convertibles, tablets, and laptop devices, in addition to phones. With all of these form factors come different sizes and shapes of screens. We have a number of talks this year specifically about large screens and layout designs, so check them out. However, I would like to talk about the other 50% of the user interaction equation, the half of I/O that is sometimes neglected. If output is the O, then the I is input.

So, what's the big deal? Shouldn't input just work? Sadly, not always. Many standard UI controls will work as expected automatically for users across devices. But there's a limit to how far the framework can guess how your app should respond. Without proper input support, your app could seem broken, could be no fun to use, or could be inaccessible to users who rely on accessibility tools. And lastly, your app could be missing out on creating compelling, engaging, and differentiating user experiences. So, my call to action today is to think through input when designing your app, right from the start. The reality is that users are already using your app with mice, keyboards, styluses, and more, not just touch. Embracing input as part of the design means your app will be more intuitive and delightful to use.

Okay, let's start simple, with a situation that arises often when an app designed for mobile is run on a Chromebook, or on an Android phone or tablet with a Bluetooth keyboard connected. There is a text box, the user types a bit, the user presses Enter, and nothing happens. When a user's hands are already on a keyboard, they will assume the Enter key will work. It is not intuitive to lift your hands off and poke the laptop screen. And don't forget, some Android-capable Chromebooks do not have touchscreens. The fix is easy and can make a huge difference in a user's app experience. Let's take a look.

Here's the code you need to handle the Enter key press. First of all, notice we are overriding the activity's onKeyUp method. onKeyUp listens for when keyboard keys are released. There is a corresponding onKeyDown for when a key is pressed, but in many cases it is easier to use onKeyUp, because onKeyDown will fire repeatedly while a key is held down. Next, look at the event's key code to see which key was pressed and released; in this case, we're looking for the Enter key. Then, take action on that key code, in this case by calling the sendMessage function. This will probably be the exact same function you would have called if the on-screen button was pressed, so there is no need for new code here. And that's it for triggering app functionality, but you will need to do two more things. First, be sure to indicate to the system that the Enter key was handled by your app by returning true. Likewise, in the event that your app did not handle a given event, pass it up to super to allow the system to take action on it. For more information, check out the Android documentation at developer.android.com.

So, it's 2021. If users have their hands on a keyboard, they will expect basic keyboard functionality: the space bar should work for pausing and playing media, and undo and redo shortcuts should exist where appropriate, like in a text editor.
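Putting that walkthrough together, here's a minimal Kotlin sketch of the pattern. The sendMessage() call is a hypothetical stand-in for whatever your on-screen send button already triggers:

```kotlin
import android.app.Activity
import android.view.KeyEvent

class MainActivity : Activity() {

    override fun onKeyUp(keyCode: Int, event: KeyEvent): Boolean {
        return when (keyCode) {
            // The user released the Enter key: call the same function
            // the on-screen send button would have called.
            KeyEvent.KEYCODE_ENTER -> {
                sendMessage()
                true  // tell the system this key event was handled
            }
            // Anything we don't handle gets passed up to the system.
            else -> super.onKeyUp(keyCode, event)
        }
    }

    private fun sendMessage() { /* hypothetical existing send logic */ }
}
```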
However, I'd like to invite you to think more deeply about your app's input, and about how you could fundamentally reshape the user experience. Here is an example from MWM's EDJing app. Let's talk about what we saw. EDJing works with touch on phones and tablets, which is great for casual users, or even power DJs who may want to use their phone or Chromebook while riding the Metro home to build out their set for later that day. For someone who wants to get deeper into DJing, or who wants to do the music for a party or a wedding, MIDI controllers are a great option. They offer a nice low-latency, tactile feel and make it easy to crossfade, scratch, and apply different effects and filters. And this is all possible with the exact same app as before, on the same device, just now with a MIDI controller attached. Even pro DJs might appreciate the flexibility when they can't carry all of their vinyl records around with them.

And this is my favorite part. A keyboard is essentially a low-latency, tactile input device, right? Kind of like a MIDI controller. EDJing took the time to think about Chromebook users and realized that they often have a keyboard and trackpad already attached. So why not turn those into a built-in DJ controller? They included keyboard mappings for all the major actions and effects, as well as, and this is cool, trackpad-based scratching and crossfading. It's not quite as fun as a full MIDI controller, but it's a lot more portable.

Let's look at how to support these different input methods in your app. MIDI coding can be daunting if you've never worked with it before. Luckily, there are some handy open-source projects in the Android media samples that can help you get started. Check out the MIDI synth project at the link here, and in particular the synth engine module in that project. Also, a note if you use C++: starting with API level 29, so coming soon to Chrome OS, you can also use the NDK MIDI API.

Adding keyboard functionality works just the same as the Enter key handling we looked at before. Here you can see checks for the W, A, and L keys and the corresponding app actions. As demonstrated in the app, trackpad input can add really cool, two-dimensional tactile experiences to apps. Here's a simplified example of some code that takes trackpad input and converts it into a control signal for a record-scratching function. First, look for generic motion events. Then, record the change in the X and Y position of the pointer. Then, calculate the hypotenuse to get the total distance traveled by the pointer. And then, do something with that info; here, we are scratching the record. And that's it! We are going to improve this code later on, so stay tuned for that.
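In Kotlin, that simplified scratching code might look something like this. The scratchRecord() function is a hypothetical stand-in for the actual audio code, and a real app would also check the event source and fall back to super for events it doesn't handle:

```kotlin
import android.app.Activity
import android.view.MotionEvent
import kotlin.math.hypot

class DjActivity : Activity() {

    // Last known pointer position, so we can compute how far it moved.
    private var lastX = 0f
    private var lastY = 0f

    override fun onGenericMotionEvent(event: MotionEvent): Boolean {
        // 1. Record the change in the X and Y position of the pointer.
        val dx = event.x - lastX
        val dy = event.y - lastY
        lastX = event.x
        lastY = event.y

        // 2. Calculate the hypotenuse to get the total distance traveled.
        val distance = hypot(dx, dy)

        // 3. Do something with that info: here, scratch the record.
        scratchRecord(distance)
        return true
    }

    private fun scratchRecord(distance: Float) { /* hypothetical audio code */ }
}
```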
With so many great input options, another issue arises: what happens when users switch between them? Maybe I start playing a game in laptop mode, then move over to the couch and flip it into tablet mode, and then later on tent the device, put it on the coffee table, and attach a game controller. Or another situation, and I was fortunate enough to have taught high school and witnessed this firsthand with my students: some people use their devices very creatively, twisted sideways, using the trackpad and touchscreen at the same time, and then occasionally hitting keyboard keys too. It was quite wonderful to see, but the question is, how does your app handle something like this? The number one thing is: support your users. Let them use whatever input devices they want, whenever they want. This means your app should always respond appropriately to any supported input. If a user has a game controller connected and is clicking the controller, the keyboard, and the touchscreen all at the same time, no problem. Respond to it all as expected.

Things get more complicated, however, when it comes to UI. There is a distinction between input events coming in and what is currently being shown on the screen. For example, some touch-based games might have an on-screen joystick, like this pretend racing game here. If the user is playing with the keyboard and not touch, you probably don't want to use up screen space with that on-screen joystick and could just fade it out. There may also be situations where an app or game has different prompts depending on the input: for example, "press M for nitro" with a keyboard attached, "press X" with a game controller, and "tap here" for a touch interface. Here's a look at a game that handles this well, Dead Cells. When using a keyboard and mouse, the prompts and text show the keystroke and mouse button indicators. For touch input, you can see the on-screen joystick on the left, a touch icon to interact with items and characters in the middle, and on the right, the jump, crouch, and item touch targets. Finally, with a game controller, all the UI elements correspond to the appropriate controller buttons, as the user expects.

Cool, but how do you implement something like this? One approach is what I call a lazy state machine. It is a prioritized state machine that feels lazy because, although input events from all devices always get an immediate response, the UI changes may be slower to transition. Let's get concrete. Here you can see a flow chart for three input states: touch, keyboard, and game controller. Decide on the priority of each input, the numbers one, two, and three here. For the racing game example from before, or with Dead Cells, if the user is using touch at all, they need the joystick and buttons on screen, or else the game might be hard to play. Even if keyboard events are being received, if the user is touching the screen, that on-screen joystick needs to be shown. So the touch state gets the highest priority, number one.

When does the game move out of the touch state? If the keyboard is receiving input but there haven't been any touch events for a while, let's say five seconds, it would be nice to fade out the unused touch joystick and buttons to maximize screen real estate. So keyboard input plus no touch input for five seconds means: transition the state machine to the keyboard input state. Again, the moment there is any touch input, we should immediately move back to the touch state. Likewise, if the game controller is receiving events and neither the touchscreen nor the keyboard has received input for a while, the UI can move into the game controller state. The instant there is any keyboard input, it should move to the keyboard state; if there is any touch input, it should move to the touch state. This five-second lazy delay before transitioning to lower-priority input states prevents the UI from flickering back and forth when someone uses multiple input methods at the same time.

A last observation: in this model, the app is only reacting to events it actually receives. You're not trying to check whether there are keyboards or Bluetooth controllers attached, or in any way trying to guess how the user might be interacting. Trying to do that will not always work, and it will be slower to respond to changes. It is better just to act on real received input events. Okay, that was a pretty quick overview of this concept. For more detail and code, check out our documentation on chromeos.dev.
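To make the state machine concrete, here's a minimal Kotlin sketch of the idea. All the names here are illustrative assumptions, and remember that the game itself should still react to every event immediately; only the UI state is lazy:

```kotlin
import android.os.SystemClock

// UI states, in priority order: touch beats keyboard, keyboard beats controller.
enum class InputState { TOUCH, KEYBOARD, CONTROLLER }

class LazyInputStateMachine(private val onStateChanged: (InputState) -> Unit) {

    private var state = InputState.TOUCH
    private var lastTouchMs = 0L
    private var lastKeyboardMs = 0L
    private val lazyDelayMs = 5_000L  // the five-second "lazy" delay

    fun onTouchEvent() {
        lastTouchMs = SystemClock.uptimeMillis()
        moveTo(InputState.TOUCH)  // touch always wins, immediately
    }

    fun onKeyboardEvent() {
        lastKeyboardMs = SystemClock.uptimeMillis()
        // Only leave the touch UI once touch has been idle for a while.
        if (lastKeyboardMs - lastTouchMs > lazyDelayMs) moveTo(InputState.KEYBOARD)
    }

    fun onControllerEvent() {
        val now = SystemClock.uptimeMillis()
        // Controller UI only when touch and keyboard have both been idle.
        if (now - lastTouchMs > lazyDelayMs && now - lastKeyboardMs > lazyDelayMs) {
            moveTo(InputState.CONTROLLER)
        }
    }

    private fun moveTo(newState: InputState) {
        if (state != newState) {
            state = newState
            onStateChanged(newState)  // e.g. fade the on-screen joystick in or out
        }
    }
}
```

A real game would call onTouchEvent() from its touch handlers, onKeyboardEvent() from onKeyUp, and so on, while the state-change callback swaps prompts and fades controls in and out.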
On the subject of game controllers, Chrome OS supports all the button mappings that Android does, including Xbox, Xbox 360, PS3-, and PS4-based controllers. In addition, we have added mappings for other popular controllers, like 8BitDo and Logitech, to Chrome OS. We then work to get these new mappings ported back to the Android framework so other Android devices can benefit. Actually handling gamepad events in code looks a lot like the keyboard handling code from before; the difference is just the key code. Here's some code looking for the X button and the left arrow on the D-pad. Also, with games you often want to know if a button is being held down, or get that extra bit of responsiveness by not waiting for a button release. In these cases, you might choose to use onKeyDown instead of onKeyUp, as mentioned before. Handling game controller joysticks uses a different method override than buttons and D-pads; for that, check out the game controller documentation on developer.android.com.

An Android feature that really shines on Chromebooks is pointer capture. This is when the mouse cursor is captured by the app, meaning it is no longer visible on the screen, input events go directly to the app, and the cursor won't get stuck on the side of the screen if it goes too far in one direction. One game that demonstrates pointer capture well is Minecraft: Education Edition, which launched on Chromebooks for schools last August. During gameplay, you can see the user is able to direct their point of view with the mouse or trackpad without a visible mouse cursor. If you interact with the building and learning menus, however, the mouse pointer reappears, as you'd expect.

Here's some code showing how to implement pointer capture. You'll recognize our DJ scratching code from before in the middle there. With our previous code, imagine scratching a track and seeing the mouse cursor flying all over the screen, or potentially getting stuck on the edge and throwing off your groove. Pointer capture is perfect for this use case: it hides the cursor, lets its movement be unrestricted, and still gets us the motion events we need. Instead of onGenericMotionEvent like before, we use a captured pointer listener to respond to the motion events. The only difference in the actual scratching code is that the X and Y motion event values are now relative deltas to the last motion event received, instead of absolute screen coordinates, which makes sense since the pointer is no longer bound by the screen. That means there is no need to maintain and subtract the previous values. The rest of the scratching code is the same. When scratching should be triggered, call requestPointerCapture, and when it is finished, call releasePointerCapture. That's it. For more information and samples, check out the Android pointer capture documentation.
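First, going back to the gamepad buttons: a minimal Kotlin sketch, with hypothetical game actions standing in for real ones, might look like this:

```kotlin
import android.app.Activity
import android.view.KeyEvent

class RacingActivity : Activity() {

    override fun onKeyUp(keyCode: Int, event: KeyEvent): Boolean {
        return when (keyCode) {
            // Gamepad X button: same pattern as the keyboard code, new key code.
            KeyEvent.KEYCODE_BUTTON_X -> {
                fireNitro()  // hypothetical game action
                true
            }
            // Left arrow on the D-pad.
            KeyEvent.KEYCODE_DPAD_LEFT -> {
                steerLeft()  // hypothetical game action
                true
            }
            else -> super.onKeyUp(keyCode, event)
        }
    }

    private fun fireNitro() { /* hypothetical */ }
    private fun steerLeft() { /* hypothetical */ }
}
```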
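And here's a sketch of the scratching code reworked for pointer capture, again with scratchRecord() as a hypothetical stand-in for the audio code. Note that pointer capture requires API level 26 or higher, and the window must have focus when capture is requested:

```kotlin
import android.view.View
import kotlin.math.hypot

// Assumes scratchView is the view the virtual turntable lives in.
fun enableScratchCapture(scratchView: View) {
    scratchView.setOnCapturedPointerListener { _, event ->
        // With pointer capture, x and y are already deltas relative to the
        // previous event, so no previous-position bookkeeping is needed.
        scratchRecord(hypot(event.x, event.y))
        true
    }
}

// When scratching should be triggered, capture the pointer;
// when it is finished, release it.
fun startScratching(scratchView: View) = scratchView.requestPointerCapture()
fun stopScratching(scratchView: View) = scratchView.releasePointerCapture()

fun scratchRecord(distance: Float) { /* hypothetical audio code */ }
```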
The last type of input I'd like to talk about is stylus. For drawing and painting apps especially, low-latency stylus input can make for an incredible experience. An app that does a great job with stylus input is Concepts. They have low latency and tons of great features. Here you can see how the user is able to use the stylus for precise control over the drawing, with some really nice brush and nudge effects.

Do you want your app to have low-latency stylus support too? Well, I am really happy to announce that our low latency API is now available in alpha. It has built-in, configurable prediction and supports both CPU- and GPU-based rendering paths. Please check out the API and the demo app available on GitHub, and file any issues or feature requests on the GitHub tracker.

So much input, so little time. That brings us to the end of this talk, and I hope I have convinced you to really think deeply about input when designing your app. For more information on all of these topics and others, such as supporting large screens, getting your app running well on Chrome OS, building web apps, optimizing game performance, developing on a Chromebook, and more, check out chromeos.dev. With that, please enjoy the rest of I/O, and I hope to see you again really soon.