Good morning everyone, and welcome to "Eyes Are the Windows to the Soul". Today we'll be sharing how eye tracking technology helps with user testing, even when you're in an agile environment. I'll start off with an explanation of how eye tracking actually works. Then Yvonne from GoBear, the Senior VP of IT and Development there and also one of our business partners, will elaborate further on a case study of how they used eye tracking to generate insights and how they fed those into their own solutions.

So how many of you here have heard of eye tracking before? Wow, that's a lot. Have you used eye tracking before? Okay. Just as an introduction: Objective Experience is a customer and user experience consultancy. We started in Sydney back in 2007 and expanded into Singapore in 2013. Today we are a leader in using eye tracking technologies here in Australia and in Singapore, from using it to understand the solution space in ethnographic-style contextual interviews, to using it for user testing across iterations of your websites, and generally to uncover insights that optimise the experience your users have with your product or brand.

Going into the usability testing formats we offer, there is full usability testing and there is agile. Very often, agile development teams skip real, live user testing because of the perception that it takes a lot of time and effort. There are many user testing alternatives that help with this, such as unmoderated remote testing or guerrilla testing. However, with these methods the physical interaction with real users gets lost, and the designers and the product development team can get lost in translation when interpreting what users are trying to say in the videos that come back from unmoderated remote testing. Very often the method used for remote testing is also the think-aloud protocol, which interferes with actual task completion times; I will go into that a bit later.

At Objective, full usability testing for an evaluative or benchmarking purpose requires many participants, usually up to 12, depending on how many user groups you want to test. The higher the number of participants, the more usability problems you are able to uncover, but the relationship between sample size and the number of usability problems discovered is a curve of diminishing returns, not a straight line. One of the early usability gurus, Jakob Nielsen, suggested that with only five users, about 85% of usability problems can be found. For teams that find full user research too costly and elaborate, a small, agile user research method with more frequent testing suits better, as often as the budget allows. With agile, you want to very quickly validate your product's core workflows and concepts and give your team a sense of where to focus next. It is a repetitive process you can schedule regularly, and your product doesn't even need to be complete in order to test: you can run eye tracking on low-fidelity prototypes.
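To make that five-user figure concrete: it follows from Nielsen's problem-discovery model, where each additional user has a fixed chance of exposing any given problem. A minimal sketch, assuming the commonly cited average discovery rate of about 31% per user (the true rate varies by product and task):

```python
# Nielsen's problem-discovery curve: expected share of usability problems
# found after testing n users, if each user independently exposes a given
# problem with probability lam (~0.31 is the commonly cited average).
def share_found(n_users: int, lam: float = 0.31) -> float:
    return 1.0 - (1.0 - lam) ** n_users

for n in (1, 3, 5, 8, 12):
    print(f"{n:2d} users -> {share_found(n):.0%} of problems found")
# 5 users -> ~84%, in line with the "about 85%" figure quoted in the talk;
# past that point, extra participants mostly re-find known problems.
```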
On the prototype point: we have even done eye tracking on paper prototypes, just wireframes alone, and that helps too. At the end of the day, some testing is better than none at all.

So how exactly does agile user testing work? Here's an overview. There are three stages. The preparation, the kickoff, takes about three to five days, and this can happen simultaneously with your design and development. It includes identifying any user needs or pain points to be addressed in the test, defining the research objectives and the tasks, and then recruiting the actual users. Next is the actual testing, and this is only a single day: you test five people in one day, and there is a workshop at the end of it, or even in between sessions, to identify the key usability issues to fix and to brainstorm solutions right on the spot, within that single day. Then, if it is necessary, a report can be written the next day to document those observations. But within your cycle, the testing alone is usually sufficient for the team to decide which pain points in your prototypes, concepts and products to fix.

Going into a little more detail on the preparation stage: planning and communication are the keys to conducting great agile user research. Early strategising during the previous development cycle also helps. All the information and ideas from the early planning phase should be communicated frequently to the user research team, or, if you only have one researcher, to that researcher, so that any issues can be ironed out quickly and the research can run efficiently. The difference between agile and full user research is that there are fewer tasks to cover. Instead of covering, say, five user flows, you prioritise three main tasks: we break the journey down and prioritise the main tasks so that you look at a small set of things in one go. The selection of tasks depends on the project team's objectives as well as any user needs or pain points identified elsewhere, say from your web analytics. All of this is consolidated into a discussion guide, or test script, that structures the actual testing the next day; there's an illustrative sketch of one at the end of this part.

Then there's the identification of target users. This can come from previous research, for instance your personas, or from your marketing department, or even from your site analytics if they include demographic breakdowns. Then you go out and recruit these people. There are different ways to do it. You can do intercept recruitment if the product you're developing is general enough: go out and grab people off the street, or buy them a cup of coffee and sit with them at Starbucks to run the session. And with eye tracking, the technology is now so compact that you can bring it anywhere. At Objective, we already have a database of people we recruit from, and if there aren't enough matches for the profile in the database, we head out to source these people. So there is always a ready group of people you can test with.
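For illustration, a minimal sketch of how a discussion guide for one agile round might be structured as data; the talk doesn't prescribe a format, and the objectives, profile and task prompts here are all hypothetical:

```python
# A hypothetical, minimal discussion guide / test script for one agile
# round: a few prioritised tasks, each serving a research objective.
discussion_guide = {
    "research_objectives": [
        "Can first-time visitors get and compare travel insurance quotes?",
        "Do users notice that other product categories exist?",
    ],
    "target_profile": {"age": "25-45", "buys_financial_products_online": True},
    "participants": 5,  # one day of testing, per the talk
    "tasks": [
        {"id": 1, "prompt": "Get a quote for a one-week trip to Japan."},
        {"id": 2, "prompt": "Compare two plans and choose one."},
        {"id": 3, "prompt": "Find the credit card comparison on the site."},
    ],
}

for task in discussion_guide["tasks"]:
    print(f"Task {task['id']}: {task['prompt']}")
```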
When all the preparation is done, the testing comes next. Typically it's an hour long, and the moderator, the researcher, elicits insights from the tested users. It is actually mandated within Objective that the product design and development team members sit in and observe all the testing sessions. Why do we make this compulsory? Because we believe the project team members are the experts too, and they can immediately get a sense of what the users actually need and iterate on the spot, or the next day even.

And how do they get immediate insights? That's where eye tracking comes into play. It is used as a way to gain direct insight into how the product is being used and to see what users actually struggle with. Eye tracking allows observers of the testing to see the user's unconscious behaviour in real time, that's the live viewing, and it enables stakeholders to make decisions about solutions to interface problems. Some of you may have used eye tracking before; most of you have not. There is always that misconception that eye tracking is all about heat maps. It's not: there is a way to use eye tracking qualitatively to get at the user insights.

Just very briefly, this is how eye tracking hardware used to be built, and you can see that it was very intrusive, complex and complicated. Most of it was head-mounted, with many components sticking in and out around the face. But right now it's so mobile and lightweight that you barely even see the eye tracker in these pictures, and it's easy to travel around with as well.

So let's consider how we process information. There's the conscious level, which is where most conventional UX research methods operate, like self-reporting during an interview or survey. That relies on human recall, which is subject to a lot of bias, such as social desirability bias: people tell you what they think you want to hear instead of describing their actual behaviour. Even a willing user has to consciously think and phrase sentences around what they know or don't know. Eye tracking, by contrast, taps into the subconscious level and gives valuable insights into how people view, react to and interpret stimuli. And visual behaviour is not something that can be easily controlled, even when people know their eyes are being tracked. The eyes move very fast: they make rapid movements, saccades, lasting roughly 20 to 200 milliseconds, all the time. And once a saccade starts, it cannot be altered mid-flight. The brain does not use continuous feedback to steer it, but rather responds after the fact: if the eye lands or drifts away from the target, it makes a corrective movement to return the gaze to the target.

So these are the usual usability testing methods. There's the think-aloud protocol; there's retrospective think-aloud, where you replay the video of what the person did; and there's retrospective think-aloud with eye tracking, where you replay the video back to the user to get feedback, and with the eye tracking data they can tell you exactly what they were doing at that point in time.
The normal think-aloud protocol has the user being tested juggling "What am I doing? How do I explain it?" on top of trying to complete the task given, and that interferes with their task completion time, as I mentioned before. You don't get a real sense of what they can do in an actual situation. With retrospective think-aloud, they are always trying to recall what they were doing just now. With eye tracking, however, they can immediately tell you, "I looked at this during the test because of...". So the moderator, the researcher, can just flow with the user's actual behaviour, and the user can simply explain why they looked at what they did for each task.

When it comes to eye tracking, the setup is as follows. This is for mobile device testing: the eye tracker is placed at the bottom, and there is a camera at the top that captures an image of the phone. The eye tracking data gathered from the tracker is mapped onto the image captured by the top camera, and this can be viewed in real time.

So what exactly do you see? This is a video of what we did for GoBear. The orange circles are the fixations; as a circle grows, it means the person looked there longer (there's a small sketch of how fixations like these are detected from raw gaze samples right after this part). So if you were testing whether the person looked at this area, the edit details and filter buttons, you can see clearly whether they focused there at all. And from the eye tracking video you also notice whether the difference in the compare button's states, selected or not, reads clearly, which was something that ended up being changed.

For website testing, this is how the setup typically looks: the test user is there, the researcher is here, and this piece by itself is just the eye tracker. And this is how a website eye tracking video looks. This person is just comparing, and you notice that her eyes are looking purely at the prices and not really anywhere else. You know immediately, right from the get-go, what attracted their attention.

You know how I mentioned before that it's compulsory for every project team member, especially the designers and the developers, to sit in and observe the testing sessions? It's for this purpose: either in between sessions or at the end of all of them, the researcher, or the facilitator as we call it, sits together with the project team to discuss on the spot what was seen during the testing, digest what the users commented on, and then creatively think of solutions. That then lines up the results for the next sprint. You can consider this discussion a debrief, and it can also be conducted as a workshop. In Singapore we have facilitation cards, or conversation cards, as well as what is basically a wall that lists out questions to think about, to keep them at the forefront of every project team member's mind even as the testing sessions go on. All of this is to help a wide range of ideas flow for solution creation. The solutions should not come from a single person alone; it's not just the researcher's job to come up with them. It's the whole team, continuously and together, coming up with the solutions.
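As referenced above, a minimal sketch of the velocity-threshold idea (I-VT) that eye tracking software commonly uses to separate fixations from saccades. The sampling rate, threshold and gaze samples are illustrative, and this is not Tobii's actual implementation:

```python
# Velocity-threshold (I-VT) classification, simplified: gaze samples that
# move slower than a threshold are fixation samples; faster ones belong
# to saccades. Real trackers work in degrees of visual angle; this sketch
# stays in raw pixels at an assumed 60 Hz sampling rate.
SAMPLE_RATE_HZ = 60
VELOCITY_THRESHOLD_PX_S = 1000  # illustrative, not a vendor constant

def classify(samples: list[tuple[float, float]]) -> list[str]:
    labels = ["fixation"]  # the first sample has no velocity; assume fixation
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * SAMPLE_RATE_HZ
        labels.append("saccade" if velocity > VELOCITY_THRESHOLD_PX_S else "fixation")
    return labels

# A dwell on one spot, a rapid jump, then a dwell on another spot:
gaze = [(100, 100), (101, 99), (102, 101), (400, 300), (401, 301), (400, 299)]
print(classify(gaze))
# ['fixation', 'fixation', 'fixation', 'saccade', 'fixation', 'fixation']
```

Consecutive fixation samples are then merged into the single fixations you see as the growing circles in the replay.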
Coming back to the debrief: one thing that struck us from this morning's keynote was that collective wisdom outweighs individual insight, and this workshop at the end of all the testing is exactly how that can work. And whether documentation is required or not: if it is, the findings can be written down, probably just as a list of summarised findings, so that the team can bring it with them and have a record of what they have done, and you can trace a timeline of what's been done. When we go back, we always have a separate meeting afterwards to fully digest any recommendations given by the researcher as well as the team, and then decide which parts we should take further.

So, as Lynette said, my name is Yvonne and I work at GoBear. It's a fintech based in Singapore. We have rolled out in several countries in the region, and every step of the way Lynette and her team have been supporting us to make sure we're ready for launching in all the new markets, because there are going to be cultural differences and maybe also legal differences. I'd like to share some of the key findings with you, things we actually changed based on the eye tracking research. Some things we changed on the spot. For example, we were in Malaysia a couple of months ago and saw that a particular word did not click with users, so we changed the wording directly during the sessions and saw an immediate effect for the rest of the day: people actually started noticing it. So some things are very, very small, like wording or maybe even a colour, and those you can change on the spot during the testing day. Other things are a little bigger, so we also want to go ahead and A/B test them.

To us, the whole testing and eye tracking exercise is a research phase, and only when you put those findings and recommendations to use do you find out whether they're actually valuable. So what we do is select the findings from the research and then prioritise. When it's a no-brainer and we know it's going to be a lot better, we just go ahead and do it. If we're not sure, or there are multiple options, we define an A/B test: we take the recommendation, come up with two or three options, and A/B test them first before we actually proceed to execution.

How many of you have heard of GoBear in the first place? Just a brief introduction, because you just saw some videos. It's a comparison website, a metasearch engine where consumers can compare financial products like insurance plans and banking products. For us it's a double challenge, because metasearch is quite new in Asia, so consumers first need to understand that it's a website where they can just compare, not buy anything; that's something we need to explain. It's convenient for them to be able to use it anywhere, anytime, but it is a new kind of product. So we try to separate the introduction of a new product or service from the actual usability.
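As a sketch of how an A/B test like the ones Yvonne describes could be read out once the data is in: one common approach is a two-proportion z-test on conversions per variant. The counts below are made up, and nothing here reflects GoBear's actual tooling or thresholds:

```python
# Two-proportion z-test for an A/B test: did variant B convert
# significantly better than variant A? All numbers are hypothetical.
from math import erf, sqrt

def ab_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

z, p = ab_z_test(conv_a=180, n_a=4000, conv_b=236, n_b=4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05: B's lift is unlikely to be noise
```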
So one of the things we noticed is that when people come onto the home page, they look at the grey dotted area, the form fill; that's what attracts their attention. Which is fine if, in this case, they're looking for a travel insurance comparison, but we also want to tell them that travel insurance isn't the only thing we compare; we have other products too. What happened is they never saw the other tabs. Or maybe "never" is too strong a word, but in a lot of cases they didn't see that we also have other products for them to compare. Changing the wording or changing the colour could have been an option, but we did something a little more drastic in this case. We said: what if we leave it as it is, but add a top menu where we group the different products, insurance or credit cards or loans, for example? And we A/B tested it. Who here speaks Thai? Okay, sorry, this is a Thai example: we tried it with just words, just icons, or a combination of both. What we noticed is not that the clicks or the conversion improved, conversion obviously being our most important KPI, but we did see people clicking through to other products, which was our hypothesis. So we implemented this in Thailand first; it's been running for about a week and a half now, and we can see that people are engaging more with other products on the GoBear platform. So thanks to the insights we got from the usability test, we were able to solve this problem.

A second issue was on the results page. In the desktop version you needed a mouse-over to see the "more details" button. But a lot of people use the mouse in conjunction with their eyes, as a kind of pointer, and we noticed that the hover interrupts them: they're looking at something, and then an overlay blocks the view of whatever they were looking at. Yet we do want them to look at "more details", because that's where all the coverages are written out and where we have the policy details. So we thought about how to solve this without changing too much on the results page, and we came up with a design, again in Thailand, with two buttons: one to go to the provider, and the other to go to more details. The conversion dropped, and that was to be expected, because a lot of users had been clicking that button thinking they would see more information about that specific product. Obviously we had quite a discussion internally when we launched this and saw the conversion drop: did we do the right thing? Sometimes you need to take a bit of a drop in order to grow again afterwards, because this does make sense for the end user. My business development colleagues weren't very happy, but it does make sense for the user: they're getting what they expect, rather than being sent out to the provider without seeing more information on the product. So what we've done now, which actually just went live as we speak this morning, is make the blue "more details" button a little less prominent. We're running an A/B test where we make it a white button with just grey letters, or with just a blue outline.
That's just to see if we can make it a little less obvious for people to click on, and it has already brought us some good results. So these, again, are two examples of things we saw purely through eye tracking. If you asked people what they see here, they would say: "oh, I see, yes, you have different products". That's the difference between the conscious and the subconscious. If you ask them, "did you see there were other products?", they probably would have seen it, but they didn't notice it. So that's where we have to go into more detail and run an A/B test to see the actual results.

The thing Lynette mentioned as well, that the whole team has to be present during the test, is something that's really valuable to us, because the product owner and the designer witnessing the test are brought a lot closer to the end user. On any given day they'll be sitting behind their desks creating great stuff, but they never know what the response is until we do another test. When they're actually there, witnessing the test live, in a different room and sometimes even in a different country, they can connect with the researcher or consultant sitting with the end users and ask a specific follow-up, like: "Hey, I noticed they're looking at this and this. Can you ask this follow-up question?" So it's extremely valuable, not just for the outcome but also for the engagement of everybody within our teams with the end user. If any of you are on the fence about whether you should do it: at least try it, because it will bring a lot of valuable insights and engagement for your whole team.

With that, we want to leave a little more time for Q&A, because we've been talking straight for about 28 minutes now and we'd love to hear some of your thoughts or questions. Do you have a question for Lynette or for me?

[Question about the tracking device: do users wear anything?] I don't have the device with me, but no, they're not wearing anything; for desktop and website testing the setup is completely unobtrusive for the user. The eye tracker is this black bar here, attached below the test screen, and that's the entire tracker. If you're thinking about wearable trackers: there is another type, eye tracking glasses, but those are mainly used for eye tracking in real-world environments, say in a shopping mall or a train station. For mobile it's similar: the stand is just there to prop up the tablet or mobile device at an incline, and it holds a camera at the top. The eye tracker, that same small black bar, sits at the bottom, so the user doesn't wear anything.

[Question: what kind of data does the device provide? Is it just gaze?] It does provide pupil dilation data; that's more for raw data export. It also provides the actual eye tracking videos like this one, so you know where the eyes travelled, the fixations. Other outputs are heat maps and gaze plots; a gaze plot shows exactly how the eye travelled from point 1 to point 2, 3, 4, 5. And other outputs are the actual gaze statistics.
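To make those statistics concrete: given a list of detected fixations, dwell time and fixation count for an area of interest (AOI) reduce to a simple tally. The AOI rectangle and the fixation data below are hypothetical:

```python
# Dwell time and fixation count for one AOI, e.g. a "More details" button.
# Fixations are (x, y, duration_ms); the AOI is (left, top, right, bottom).
# All coordinates and durations are made-up example data.
AOI_BUTTON = (300, 500, 460, 560)

fixations = [(320, 530, 240), (350, 540, 180), (120, 80, 400), (410, 510, 90)]

def aoi_stats(fixes, aoi):
    left, top, right, bottom = aoi
    hits = [d for x, y, d in fixes if left <= x <= right and top <= y <= bottom]
    return {"fixation_count": len(hits), "dwell_ms": sum(hits)}

print(aoi_stats(fixations, AOI_BUTTON))
# {'fixation_count': 3, 'dwell_ms': 510}
```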
So, for example: how long did they look at a particular button (the dwell time), or the number of fixations, how many times they looked at it.

[Question: we all have webcams on our devices; when will this technology be on our mobile phones and tablets?] It is actually coming. The eye tracker company, Tobii, with their Tobii Pro arm, is experimenting now with MSI and Dell laptops to incorporate eye trackers into those laptops. [And do you plan to have it in a mobile application?] It's being thought of, but at this point in time it's a little hard to execute. Modern eye trackers use something called Pupil Centre Corneal Reflection, PCCR. Each of these eye trackers has infrared illuminators: it shoots out infrared light, which reflects off the person's pupil and cornea, and that reflection is tracked by the software. Tobii then uses an algorithm to calculate and map out exactly where the gaze points are, as X and Y coordinates.

[Question: do you have statistics on false positives?] Exact statistics, no, but there is always a calibration process before the actual testing is done. Because everybody's eyes are different, that calibration maps out how accurate the tracking will be for that person, and the algorithm in the software adjusts for it, so the actual output in the videos and other deliverables is accurate. As for how inaccurate it can be: there is usually about one to two degrees of difference. (For a sense of scale, at a typical 60 cm viewing distance one degree of visual angle works out to about 60 cm × tan 1° ≈ 1 cm on the screen.)

[Question: if I want to start doing this kind of testing for my own project, how much will it cost?] If you want to buy the equipment for website and desktop testing, that's the X2-30. The hardware alone costs 7,000, and the Tobii Studio software that comes with it is another 19,000 Singapore dollars. So yes, the software costs more than the hardware. At Objective we also rent the kit out, so you can get the entire set for, I think, about 500 a day or 2,500 for a month. If not, we also offer consultancy services, like the way we partner with GoBear to conduct the eye tracking sessions together.

[Question for Yvonne: how often do you run this research together?] Right now, because GoBear is launching in a lot of countries, our emphasis is on doing the tests after soft launch, so that we can iron out any issues before we go to commercial launch. For each new country, that's the rhythm we're working to. For Singapore, the oldest market, which we launched last year, we'll repeat once every three to six months, or when we have a new product or a specific test to run. From next year, once countries have been established for at least six to twelve months, we'll repeat as we see fit. And we have two-week sprints, which means we change our website every two weeks.
So it wouldn't make sense for us to do it every sprint, because sometimes they're just smaller sprints. But when there's a big change in the UI, or there's a new product, then we would definitely engage. If we're talking about two to three years down the line, when we're thoroughly established and all the products we want are in, then you'd probably want to repeat every quarter at least (well, that's my personal opinion), or whenever you change your product or service. But as Lynette also mentioned, if you want to integrate this simplified testing into your sprints, I would recommend dedicating just one day in your sprint. So if your sprint starts on Monday and your designers finish the first design prototype on Tuesday, run the test on Wednesday, have the team witness it, and they can iterate the design already and have it ready by Thursday or Friday of the following week. That's doable. You have to be very quick, but it's doable. If you limit the tasks, if you limit the things you want tested, then it is very possible to do it every two weeks. We're not there yet; we're not that fast yet, but it would be great to be.

[Every two weeks? Do you know anyone who tests every sprint?] Not to my knowledge. Lynette, do you know anyone who does it in every sprint? There is always that perception that live testing takes too much management and effort. But right now we are shaping agile user testing precisely so that it fits within your cycle. The consultant or researcher basically has to be somewhat embedded within your project team, so they are on the ball every time something new happens; the idea is that it gets written straight into the test script, on the spot, within an hour, and then you can just test it in the next session.

[Question: you mentioned cultural differences, for example between the Thai version of the site and others. What do you normally do differently, and what differences have you observed using this technology?] In the beginning you think: the Thai consumer is not the Malaysian consumer, the Malaysian consumer is not the Vietnamese consumer. And that is very true; they're all different cultures. But if I look at the statistics after a couple of months of being live: we're all human. That's really a fact. Whether they use a mobile device or a desktop, what time they visit, how much time they spend, it's all within about a 10% margin across the region. The cultural differences that do come in are noticeable, though. In Hong Kong, for example, when we did our last testing, they're more advanced online users, so they're faster and more critical. Whereas in the Philippines they're more easygoing; the general infrastructure there is not very good, not very stable, so if each page loads in four or five seconds they're okay with it, because all pages load in four or five seconds. In Hong Kong that's not acceptable, because everything is fast: why would this be slower than that? Wording does make a difference too; the functionality, not as much. In Thailand the language itself is very poetic, so very direct wording doesn't work well with the Thai consumer in general. It has to be a little more explanatory, a little softer,
a little more polite even. Whereas in the US or Europe it can be very direct, very short sentences. Those are the things you notice. But in the end, yes, we are all human; we act pretty much the same.

I think we have time for one more question. Is that right, timing-wise? Okay, all right, we have a couple of questions. [Question: have you done any A/B testing of the improvements you've made, before and after, on the live site?] After we've made the change, we compare the numbers, so there's no longer an A/B test once it's in production. We run the A/B test on production with the original and maybe one, two or three options; that's the actual A/B test. After we've chosen one of the options and put it into production, we can only compare the historical numbers with the current numbers, and we do that as well. We measure absolutely everything, so we always look at the before-and-after effects. But preferably we make the choice during the A/B test, and only when we're sure do we implement it.

I think it also relates to the question the gentleman asked before about false positives. Looking at something for a while grows the fixation mark, but it doesn't mean that it's interesting; it could also be that they're thinking of groceries at that point, just staring at the screen. So it's not 100% accurate, but it is 95% accurate. The eye tracking tells you something about the current design. It doesn't tell you whether the design is good or not, and it doesn't tell you whether there are better options. But it does give you insight into what you have today. It's up to the design team and the UX team to come up with better options based on those insights and statistics. It will not tell you whether it should be green or blue, round or square; it's just going to tell you whether or not they noticed it. One of the biggest questions we always put up as an objective is: will they use it? If there's a new feature, will they use it, which means, will they notice it and will they engage with it? That's all you can really tell. You can't see whether it's the best option; you can only get insight into what it is today.

Yes, the eye tracking tells you what the person is actually looking at, what the visual behaviour is. Which is one of the reasons I mentioned the retrospective think-aloud with eye tracking: we play the video with the eye tracking data back to the user and ask, "Why were you looking at this? Why were you looking at that? Why did you behave in such a way? Why did you click on this button?" That's where you get even deeper insights into the reasons behind the eye tracking video.

Last question? No? Then thank you everyone for your questions and your time, and I hope this brings you insights as well. That's a wrap for us. Thanks very much.