Hello. So I'm Ramak. I'm a PhD student in computer science at Georgia Tech, and I'm going to talk to you about some of the work I've done over the past few years, which is essentially tying together information visualization and touch-based interfaces.

Right off the bat, there are three different scales we can consider when looking at this space: mobile, tablets, and large touch screens. To be more specific, mobile is anything less than seven inches of screen size, tablets are between seven and ten inches, and anything greater than ten inches we're calling a large screen. All of them follow very different design paradigms, not just because of the difference in size, but also because of the different contexts in which they're used. And InfoVis fits into each of these in one way or another.

This, for instance, was one of our explorations for bringing interactive bar charts to mobile devices. The interaction here supports things like realigning the stacked bar chart, but also zooming into the view and panning the bars. On the other end of the spectrum, we have the standard InfoVis technique of Dust & Magnet working on a large touch display. The system supports collaboration, where multiple people can share the same view and interact with the system together. We're not the first to have done information visualization on large touch tables: this is a tool called Cambiera, which does document visualization, also focusing on collaborative aspects, and the author of the tool, Danyel, is actually sitting right here. He's going to be talking to you tomorrow, hopefully not about this.

Having built on all of these different platforms and understood their constraints and challenges, we found them, as expected, to be very different. But the focus of this talk is specifically on the challenges we faced on tablets. Tablets have come a long way in the past couple of years. Even two years ago, you could say their resolution and processing power could not match those of PCs, but that's not really the case today; they've progressed tremendously. That was one of the reasons we started exploring the space of bringing visualization to tablets. I work with my advisor John Stasko and a few other collaborators in doing so.

And this is one of the first explorations we did: a scatterplot visualization technique running on an iPad. The video will demonstrate the different techniques we adopted. It runs about two minutes, but I'll give you the context of all the different features so that we can talk about them afterwards.

So, lots going on in there. Before I delve deeper into the design considerations and decisions that went into it, I want to take a quick detour. Richard Buchanan, who started the CMU School of Design, wrote an essay on design in the digital age, in which he articulated three properties of good design: usefulness, usability, and desirability. I've often used these principles in my own work, and I'd like to use them to contextualize the conversation for the rest of the talk, starting with usefulness. So what is usefulness? It is the clarity of a product's content and purpose.
The goodness or usefulness of a product is evaluated by finding answers to four questions. Who is going to use the system? Why are they going to use it? When and where are they going to use it? And how are they going to get there? These questions have increasingly been brought into the conversation around visualizations as a way to understand the context of the application. Why is this important? Well, once you know the answers to these four questions, you can answer the bigger question: what purpose does the application serve for the user? Once the purpose is identified, we can look at the different tasks or operations the application needs to support.

For instance, when we were building scatterplots, we looked at all the different visualization systems that provided interactive scatterplots, and these are the features all of them had: a grouped list of about 35 different features. But when we went back to thinking about who the user is and why they might be using a scatterplot system on a tablet, we pruned this down to about 30 different operations that were central to what we wanted to support in our system.

The next property is usability, which is the ease of use, or designing for ease of use. Once we have identified the appropriate operations from the previous step, the question is how to support them with different interactions. We often use interaction design principles to impart usability to a tool, but in our experience the domain of information visualization was different in that some of these principles got amplified over others. I'm going to talk about some of those: guessability, learnability, affordances, ergonomics, and exceptions. Given how complex they sound, you would almost wish there were some sort of abbreviation for them. There is, for people who are Scrubs fans. But let's now discuss them individually.

So, guessability. What is guessability? It's exactly what you would guess: the idea that if you're building interactions, you should start from the ones that are familiar to users. And what better way to find these options than to go to the users directly. Some of the early guessability studies worked by presenting users with the starting position and ending position of an operation, for instance sorting, or selection, where you would see all the data points and only one of them selected. The user would then be asked: how would you perform this operation? Repeating this over multiple users generates results such as these. These are user responses from a study on tabletops in particular. The results show that the most common gesture for cutting an object on a large table is to slash on it directly, and to paste an object, to bring it in from outside.

The problem with using these responses is that people tend to only bring out the options they've previously seen or used, so we rarely see novel approaches to an interaction or operation. For instance, in recent years we've seen gestures such as pull-to-refresh, sliding panel layers, and swipe-to-reveal become very popular.
Almost every new app uses them now. But it's fair to assume that before these gestures were commonplace, a guessability study that went to the users would not have brought them out as options. Another issue is that users are very inconsistent in their responses: they would use the same gesture for different operations within a single study.

If this doesn't result in rich enough responses, the other option is to go back to existing systems and UI guidelines and see if those can help. For our work, two systems were very relevant. One of them is TouchWave, by Dominikus Baur and colleagues, which uses multi-touch interactions for manipulating stacked graphs. Dominikus actually talked about a similar topic two years ago, and he's going to talk about a different topic tomorrow. Another relevant system was Kinetica, which uses physics-based interactions with objects: data points are represented as particles that attract and repel each other based on certain properties. These two systems were very relevant, but the problem was that they either used a technique that was not what we were addressing, as in the first case, or they defined their purpose entirely differently from the way we defined ours.

So, coming back, we realized we needed an internal guessability approach: we needed to guess our own interactions. For instance, for the different types of operations we supported, one of which was zoom, we created a list of gestures that we thought were appropriate for the operation. We went ahead and implemented all of these gestures, and then we chose the one that seemed to work best. I think at the end of the day you can build off of principles, but you're your own best user in some sense.

A different part of the same conversation is the learnability of gestures, which is easier to talk about in three parts: complexity, discoverability, and expected action.

The complexity of a gesture you might use in the system depends on four things: the duration, how long it takes a user to perform the action; the distance the user's finger needs to travel; the number of fingers used for the action; and the number of taps required. In no particular order, the more these values increase, the more complex the gesture becomes for the user to perform.

An example we did use in our system was tap-and-pan. This gesture is fairly commonplace, at least in the Android OS; here's somebody using it to zoom in Google Maps with just one finger. You tap once, then tap again, and without lifting your finger you just drag it around. We used the same gesture in our system to select a range of values on the x-axis: the user taps once and, without lifting the second time, drags around. The response we got from people was bad. People just completely hated this gesture. They had a very difficult time performing it right off the bat; we gave them plenty of opportunities to try it, but they weren't familiar with it, so they weren't using it. And even once they did get around to learning it, every next time they tried to perform it themselves, they would simply go back to the basic pan gesture instead of tapping and then panning.
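To make the mechanics of this gesture concrete, here is a minimal sketch of how such a tap-then-pan recognizer might look as a UIGestureRecognizer subclass in Swift. This is an illustration, not our original implementation; the class name and thresholds are made up, and a production version would handle the gesture lifecycle more carefully.

```swift
import UIKit
import UIKit.UIGestureRecognizerSubclass  // exposes the `state` setter to subclasses

/// Recognizes one quick tap followed by a second touch that stays down and drags.
final class TapThenPanGestureRecognizer: UIGestureRecognizer {
    private var tapCount = 0
    private var lastTapTime: TimeInterval = 0
    private var firstTouchStart: CGPoint?
    private let maxTapGap: TimeInterval = 0.35   // max pause between tap and re-touch
    private let tapSlop: CGFloat = 10            // movement allowed within the first tap

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        guard touches.count == 1, let touch = touches.first else {
            state = .failed
            return
        }
        if tapCount == 0 {
            firstTouchStart = touch.location(in: view)
        } else {
            // Second touch-down: begin panning if it came soon enough after the tap.
            state = event.timestamp - lastTapTime <= maxTapGap ? .began : .failed
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        guard let touch = touches.first else { return }
        if state == .began || state == .changed {
            state = .changed   // report drag updates to the target action
        } else if tapCount == 0, let start = firstTouchStart {
            let p = touch.location(in: view)
            if hypot(p.x - start.x, p.y - start.y) > tapSlop {
                state = .failed   // moved too far on the first touch: a plain pan
            }
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        if state == .began || state == .changed {
            state = .ended        // finished the tap-then-drag
        } else if state == .possible {
            tapCount = 1          // first tap completed; wait for the re-touch
            lastTapTime = event.timestamp
        }
    }

    override func reset() {
        tapCount = 0
        lastTapTime = 0
        firstTouchStart = nil
    }
}
```

A handler attached with `addTarget(_:action:)` would then read `location(in:)` on every `.changed` update and map that position to a range on the x-axis.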
So that was a good thing to learn: in the subsequent version we did not use this gesture, and we just went back to the basic panning gesture.

Another common conversation people tend to have is about discoverability: the gestures and interactions you use in your system should be discoverable. Within the context of information visualization, we realized that this is not paramount, which actually goes against the guidelines usually proposed. There are two reasons for this. The first is that the tools you use for visualization on the desktop, such as Tableau and Spotfire, cannot really be called walk-up-and-use systems. You can't expect a user who doesn't know them to walk up and be able to perform the whole set of tasks and operations. This is not really about the usability of the underlying tool; it's more about the domain of information visualization, which is feature-rich and very complex. We believe that a tool built for tablets faces a similar situation: there are enough features in the system that you can't expect users to figure out the interactions for all those features themselves. The second reason, on the same point, is that even consumer applications, for instance Mailbox here, which relies heavily on gestures, have to spend time onboarding users on mobile interfaces, and we believe we have to do the same in our system. So overall, when we were designing the guidelines for our system, we decided it was not key to design with discoverability in mind.

The third and final part of learnability is the expected action. Some gestures are aligned with users' expectations, and in those cases the learning curve is minimal. Examples are tapping, double-tapping, panning, and pinching: users are familiar with these gestures from having used them in other applications. What this means for your system is that the user expects these gestures to do something. So the rule we always follow is to have some sort of response in the system; even if the gesture doesn't change the state of the system, make sure the user realizes the system has understood and accepted it. For instance, in our system we use the double-tap gesture extensively: it is always used to zoom out of the view, irrespective of the type of chart being shown. Another gesture we saw people increasingly try was revealing menus with swipes: at the edges of the screen, they expected to swipe in and have some sort of menu pop in. In this case, we have the filters menu coming in from the right.
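For illustration, here is a minimal Swift sketch of wiring up these two expected actions with standard UIKit recognizers. The controller class and the `zoomOutToFullExtent()` and `revealFiltersPanel()` hooks are hypothetical names, not our actual code.

```swift
import UIKit

final class ChartViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Double-tapping anywhere on the chart always zooms back out.
        let doubleTap = UITapGestureRecognizer(target: self,
                                               action: #selector(handleDoubleTap))
        doubleTap.numberOfTapsRequired = 2
        view.addGestureRecognizer(doubleTap)

        // Swiping in from the right edge reveals the filters menu.
        let edgeSwipe = UIScreenEdgePanGestureRecognizer(target: self,
                                                         action: #selector(handleEdgeSwipe))
        edgeSwipe.edges = .right
        view.addGestureRecognizer(edgeSwipe)
    }

    @objc private func handleDoubleTap(_ sender: UITapGestureRecognizer) {
        zoomOutToFullExtent()   // same response regardless of chart type
    }

    @objc private func handleEdgeSwipe(_ sender: UIScreenEdgePanGestureRecognizer) {
        if sender.state == .began { revealFiltersPanel() }
    }

    private func zoomOutToFullExtent() { /* reset both axis scales */ }
    private func revealFiltersPanel() { /* slide the filters panel in from the right */ }
}
```

Even when a gesture can't do anything in the current state, the handler is a natural place to trigger some visible acknowledgement, per the rule above.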
The third piece I'm quickly going to cover is affordances, which are the cues for guiding interaction, a very well-established concept in HCI. How did we use this in our system? I was showing you a feature where the user could pan on the small rectangle on the right to preview the different attributes in the data. So how does the user know to pan in that area? The solution we used was to add a texture to that particular region, which made it feel draggable. We expected the user to see the distinct color and texture and make the connection that this area does something different; every time they opened the panel, the color and texture would stand out as both a reminder and a prompt.

Another example along similar lines is the same drop-down menu, but on the y-axis at the top left: we placed the preview region on the left-hand side rather than the right-hand side. This naturally afforded using the left hand instead of the right. If users had used the right hand, they would likely have occluded the view far more than with the left.

Ergonomics also raises important concerns in how you go about designing these systems; it can essentially be defined as the efficiency of interacting with the system. This is how we expected the user to use the system: the left hand holds the tablet and the right hand interacts with it, so at no point are both hands available. It's a constraint we put on ourselves, and it effectively decided that we don't support multi-hand gestures on the tablet. And deciding between one-hand and two-hand, or one-finger and two-finger, gestures has a direct effect on how much the user's interactions occlude the underlying visualization. For instance, here's how Apple describes the pinch-to-zoom interaction: the location is not important; as long as it is on the view of the video or image, the view zooms. Here's one way we support the pinch operation: on the axis. To scale the axis, the user has to pinch directly on it; similarly, to select a range on the axis, the user has to drag directly on it. In both situations, you can see that while the user is pinching or dragging, they are occluding the very value they are looking at. In the right image, for instance (I don't know if you can see it clearly), the user wants to select the value 160, but the number 160 is right below their finger, so they can't even see it. Occlusion issues like this are something we found to be very specific to data visualization systems. In this case, we solved it by providing those two numbers at the top, so that while the user is panning they see live feedback of which data values are being selected.
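Here is a minimal sketch, with assumed names throughout, of that occlusion fix: a pan along the x-axis selects a range, and the selected bounds are echoed in a label at the top of the chart, away from the finger.

```swift
import UIKit

final class XAxisSelectionView: UIView {
    var dataRange: ClosedRange<Double> = 0...200   // x-axis domain (illustrative)
    let readoutLabel = UILabel()                   // positioned at the top of the chart
    private var startValue: Double?

    func attachPanRecognizer() {
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan))
        addGestureRecognizer(pan)
    }

    // Convert a horizontal pixel position into a data value on the axis.
    private func value(at x: CGFloat) -> Double {
        guard bounds.width > 0 else { return dataRange.lowerBound }
        let t = Double(max(0, min(1, x / bounds.width)))
        return dataRange.lowerBound + t * (dataRange.upperBound - dataRange.lowerBound)
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        let current = value(at: gesture.location(in: self).x)
        switch gesture.state {
        case .began:
            startValue = current
        case .changed:
            guard let start = startValue else { return }
            // The finger occludes the axis labels, so show the selected
            // bounds in the readout label at the top instead.
            readoutLabel.text = String(format: "%.0f to %.0f",
                                       min(start, current), max(start, current))
        default:
            startValue = nil
        }
    }
}
```

The same pattern applies to pinching on the axis: the rescaled extent can be echoed in the same label while the fingers cover the tick values.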
Finally, exceptions: there's the concept of conflicts versus consistency. As you support multiple types of charts, consistency across those charts, in terms of which gestures are used for which operations, becomes very important. It matters for familiarity, and it matters for ease of learning. However, maintaining this consistency often leads to design conflicts, and I'm going to bring out one example from bar charts. This is again one of the systems I built with Danyel a couple of years ago. Here, panning is used to sort: you pan vertically on the axis and it sorts the bars by value. In the system I showed you previously, we were using this gesture for something else: to select a range of values. That scatterplot uses panning to select a range of values on the x-axis or the y-axis.

So if we now brought a bar chart into the same tool, what do we expect the system to do when the user pans on the y-axis? To maintain consistency, it should not sort; it should select the bars, and that's what it does. But the problem we have then is that sorting requires some other gesture. Sorting by panning on the y-axis was optimal, because the gesture decided which axis to sort on and told us which direction to sort in, so it was very useful in that sense. But now sorting needs another, suboptimal gesture, just because this one gesture is used in a different context across the whole application. These kinds of issues keep coming up when you're expanding your system, and in many cases you have to go back and change design choices you previously made. In this case, we changed the gesture to hold-plus-pan: the user first holds on the axis, which activates the sorting state, and then pans to sort in one direction or the other.

Understandably, this might look like a lot of process, so I'd like to give you a few examples of how these considerations fed back into the design decisions we made. These were the different options I showed you for the zooming interaction. How do they work? Fixed-aspect-ratio zoom works as the user expects: as the user pinches on the view, the whole view zooms while maintaining the scales of both axes consistently. Flexible-axis zoom, on the other hand, zooms the view based on the actual motion of the user, measuring the distance between the user's fingers along the x- and y-axes separately; the two axes are basically independent, but the zooming happens simultaneously on both (a small sketch of this computation follows below). When we built this gesture, it was too complex for users. They would pinch on the view and simply not be able to get the configuration they needed, because when we pinch on images or videos we're not used to pinching in a specific direction; we just pinch, and we expect the scaling to happen on both axes simultaneously. So we went to the third option, an axis-based zoom: if users want to zoom on the x-axis, they pinch on the x-axis, and likewise they pinch on the y-axis to zoom it. You're basically separating the interactions for each axis, but now the interactions don't happen simultaneously on the two axes. Another option was select-plus-zoom, where instead of the pinch gesture the user first selects the region they're interested in and then simply double-taps on it to zoom into that particular view. Another option is what we call automatic zoom: you just double-tap on the view and it automatically finds the zoom state that minimizes occlusion. And finally the zoom lens, which is again one of the features I showed you: pinch on the view and a lens shows up. It's fun, it's interactive, and people don't use it.
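As promised, here is a minimal sketch of the flexible-axis zoom computation; the class and method names are made up. Instead of the single scale factor that UIPinchGestureRecognizer reports, the two touch points are tracked directly, and separate x and y factors are derived from how the horizontal and vertical finger distances change.

```swift
import UIKit

final class FlexibleAxisZoomTracker {
    private var startDX: CGFloat = 1
    private var startDY: CGFloat = 1

    // Call with the two touch locations when the pinch begins.
    func begin(_ a: CGPoint, _ b: CGPoint) {
        startDX = max(abs(a.x - b.x), 1)   // clamp to avoid division by zero
        startDY = max(abs(a.y - b.y), 1)
    }

    // Call on every touch update; returns independent per-axis scale factors.
    func update(_ a: CGPoint, _ b: CGPoint) -> (xScale: CGFloat, yScale: CGFloat) {
        let dx = max(abs(a.x - b.x), 1)
        let dy = max(abs(a.y - b.y), 1)
        return (dx / startDX, dy / startDY)
    }
}
```

The sketch also makes the usability problem visible: any slightly diagonal pinch changes both factors at once, which is exactly the configuration users could not control.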
Finally, desirability. Once you've dealt with the usefulness and the usability, you come to desirability, the last element: adding grace and elegance to your system. Making some interactions fun, interesting, and even quirky actually piques users' interest, and this has a direct effect on the engagement of the user. There's no real formula for achieving it; you have both the visual side and the interactive side to play with. But some of my favorite applications showcase this behavior. One example is Paper by FiftyThree, which provides an array of highly engaging interactions while successfully keeping the complexity of those interactions hidden. Another example is Planetary, an app from a company that no longer exists: again, highly appealing visuals that are responsive to user actions in real time. You might think that adding such a piece would not be difficult. Here's Robert Hodgin, who helped build Planetary, talking about how simple it is: "This particular particle emitter will use a graphic that is hard to see; it's like a crescent glow. The other particle emitter will use something more like a smoke sprite. So you combine the glow and the smoke and the sphere and the coronal ring and the texture, and you've got a fairly nice looking star. For me, it's just that simple." I wish we could use this in a scatterplot application, to be honest, but it's not that simple.

Coming back to earth: among visualization applications, one that we've seen is TouchWave, which is equally engaging, and not only because the author is sitting in the audience. Some of the features we saw, we tried to incorporate in our own system. One of them was the zoom lens I gave you an example of: people found it very interactive and a lot of fun, but as soon as you put them in the context of a task, they just weren't using it, like I said. Another example is the lasso, which lends itself to drawing all kinds of interesting, funny selections. This is actually my girlfriend using it: I spent months building the tool, and I go to her, and all she does is draw fancy artwork. So that helps.

Finally, to wrap up, I'd like to give you two basic takeaways. I attempted to give you a broad overview of the touch-plus-visualization space, with a set of criteria for designing in it. I also tried to discuss this within the framework of usefulness, usability, and desirability, which, if followed in order, really helps: tackle how the system is useful first, then tackle the usability, and finally the desirability. Hope you liked it. Thank you.