Yeah, that's correct. It doesn't go to hours, but I try to make it challenging so it gets longer, as I just try to see what I have to say so he keeps responding. No, no, I never tried. It would be interesting to have, like, pets, you know — do neuromarketing on the pets and then have them pass it on to their owners. Maybe not. That was yesterday, so this is my way of being tired. Okay, we already have that on screen, so moving on. You do online analytics. Yes, you've used standard solutions and you weren't very happy with what they provided. And this is actually your first WordCamp. Yes. And your first talk. Yes, exactly. And the title of the talk is conversion optimization and online analytics. So please give him a round of applause again. All right, then let's get started. But before we do, I want to thank everyone for still being here — I mean, it's the end of the day. For that reason I will try to make this a bit more interactive, so I won't keep your questions until the end: you can already ask questions in between. I'm afraid I put a lot of graphs in here, so if it gets too dry, please just let me know — when you do these things all day long, they seem so standard that you forget they might not be clear to everyone. Besides that, thanks for the great organization. You said it's my first WordCamp. It is, and I love this atmosphere: everybody's very inclusive and open for conversations, and I like that very much. Okay, enough with the praise. Let's get right into it. So, online analytics. Who of you is doing online analytics? Who's using any of these tools? Okay, that's already quite a few — for the camera, I think that was more than 70%. So who is using which tool? I'm very curious. Google Analytics? All right, that's already plenty. And I have to be honest:
I'm only using it because it's free. What else do we have? Hotjar. One, two, three, four. Some switch back and forth. Okay, what else? I have to be fully honest with you: when I looked this up yesterday — and maybe I should say this to whoever made this graphic: I'm sorry, I did not check the copyrights; I hope it's okay that I'm using it, it's not for my own benefit, it's for sharing information — most of these I actually didn't even know. For example, Webtrends. People here knew Webtrends already? Yeah, I see heads nodding. Okay. I didn't even know what Google Tag Manager was. Woopra? I don't know. Piwik? It's now called Matomo — so I've just been told — an open-source Google Analytics. Very good to know; maybe I should check that out. All right. But basically there are a bunch of analytics tools out there; I think you all know that. This slide says it all, and I'm sure there are even more out there. And the gold standard for analytics, or basically for comparing anything, is A/B testing — that's at least what they say. Who of you is actively doing A/B testing? One. Two. So then, if I may ask: all the other people that raised their hand for using online analytics — what are you using it for, if not for A/B testing? Curiosity. Curiosity about a visitor. What works better. But if you want to know what works better, then you do A/B testing. Okay. So in essence you want to know: who is your visitor? Maybe you write a blog post — who's coming? And what works? Do these blog posts actually lead to online conversions? Okay. But A/B testing, that's at least the gold standard.
That's what I want to talk about today. In essence, you have two versions of one page, you serve each visitor just one of the two versions, and then you compare which of the two versions actually works best, so you can decide which one to keep. That's basically it — nothing complicated. Okay. When it comes to A/B testing, there are a few problems that companies face. Let's start with small companies. One big problem they face is that they don't have a lot of website visitors. If you don't have a lot of visitors and you run an A/B test, you can't really be sure that one version is actually better than the other, simply because you don't have enough data — we get to that in a second. Basing decisions on small sample sizes can be very dangerous; we also get to that in a second. But this is a problem small companies face simply because they don't have a lot of data coming in. Plus, you need full implementations of the working website to actually do A/B testing: you need an entire infrastructure that knows this person sees this version and that person sees the other version. You could also do it in series, but that's just going to cost you a lot more time. Now, large companies also face a big problem, which is that testing can actually lose them a lot of money. Let's say you have a conversion rate of 5% and you're making money, and then you test something else. What if that new variant only gives you a 2.5% conversion rate? Just running the test lost you a lot of cash. This is something they become very afraid of. And the part that matters for big companies is that the infrastructure is very costly. What I mean by that is: take bol.com. They run A/B tests, I can tell you that for sure.
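The serving step — each visitor consistently sees exactly one of the two versions — is usually done with a sticky, deterministic assignment. A minimal sketch in Python; the hashing scheme and the visitor-id format are illustrative assumptions, not something from the talk:

```python
import hashlib

def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
    """Deterministically bucket a visitor: the same id always
    lands in the same variant, with no per-visitor state stored."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Across many visitors the split comes out close to 50/50.
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"visitor-{i}")] += 1
```

In practice the visitor id would come from a cookie, which is also why cleared cookies corrupt the data later in the talk: the same person returns under a fresh id and is bucketed as new.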
But what very often happens is that these A/B tests are run by the marketing department and implemented by the developers, and then you have customer service, who have no idea these things are going on. So some customer sees a version of a page where things are not working, they call customer service, and customer service is thinking: well, that's not what I'm seeing. You very often see that in these large companies. It's not necessarily a problem with testing itself; it's just that the bigger you get, the more important communication becomes. Plus, they have the same problem: it only works with full implementations. And for bol.com, having an exact running copy of bol.com at the same time, with an infrastructure around it that makes sure every visitor sees the right version, is even more costly — if you want to do it right. That's what this is about: not what's merely practical, but what it takes to do it 100% right. Okay, but now coming to your point: how much data do I actually need? This is the critical bit, and this is why, when I started out with these online analytics tools, I kept thinking: these are nice graphs, but what do they actually tell me? Nonetheless, the graphs are very nice, so they keep making you come back. Let's do a quick estimate. Say you have a conversion rate of 5%. Now take the average improvement of an A/B test, which is also about 5%. Assume you have 200 visitors per day coming to your website, and you only have two versions to test: your standard version and maybe an improved version. 100% of your visitors participate in the experiment, so 100 people a day see version A and the other 100 see version B. How long do you think you need to run this test until you have enough data to be sure one version is better than the other?
That's why I just said 5% — that's roughly the average. It can go higher or lower, but let's stick with 5%. Two weeks? Three months? 200 per day, roughly. No — I know this is a lot of information at the end of the day, so bear with me; there are going to be some nice graphs in a bit. Okay, the short answer is: you have to run this thing for 1,216 days, which makes a total of 243,200 visitors. Which is insane. I mean, who wants to do that? But as you already pointed out, this depends on a lot of factors: one, how many visitors you get, and two, how big the improvement is that you will make. And the improvement you don't really know in advance — that's why you're running the test. These calculators are available online, and I really encourage you to use them, because they can help you avoid a few of the mistakes that are coming up. So all I'm trying to say is: be careful. These graphs look nice — I've been a victim of that as well — but they can be misleading. To give you a concrete example: this is an A/B test that has been run for two days. A lot of software packages out there do all these calculations for you, so you don't have to think about it too much. It will say: okay, here is my control version, and here is its conversion rate — you see it, 8.66%. And if you look here, these are the conversions per visitor: 127 visitors have seen this version so far. Based on that, the software says there is no way the variation can be any better than the control. No way at all. But if you look at the amount of data that has come in so far, maybe you should not make that call just yet. And that's the critical bit.
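The 1,216-day figure can be reproduced with the standard two-proportion sample-size formula. A sketch under conventional assumptions — 95% confidence and 80% power; the talk doesn't say which power its calculator used, so the result matches the slide only approximately:

```python
import math

def sample_size_per_variant(p_base, rel_lift, z_alpha=1.96, z_power=0.8416):
    """Visitors needed per variant to detect a relative lift in
    conversion rate (normal approximation, two-sided test)."""
    p_var = p_base * (1 + rel_lift)
    p_bar = (p_base + p_var) / 2
    a = z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
    b = z_power * math.sqrt(p_base * (1 - p_base) + p_var * (1 - p_var))
    return math.ceil(((a + b) / (p_var - p_base)) ** 2)

# 5% baseline, 5% relative improvement, 200 visitors/day across two variants:
n = sample_size_per_variant(0.05, 0.05)
days = math.ceil(2 * n / 200)   # on the order of 1,200 days, ~240,000 visitors
```

The tiny effect size (5% of 5% is a quarter of a percentage point) is what drives the visitor count into the hundreds of thousands.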
Because if you let it run for 10 days, all of a sudden the variation is a winner. This is what I mean by: don't get too excited too early. Try to make a very clear prediction of how much data you need and wait until you have it, or set other criteria in advance. And that's the most important bit: you need to be honest with yourself, because I've been rationalizing things constantly — "no, this one just looks prettier, it's fine" — but it wasn't. Here you can see, with these numbers, many more visitors, and the conversion counts are also higher. If you were to run the same thing through the online tool I use, plugging in these visitor numbers, it would also tell you it's fine: you needed this many people to be 95% certain that variation one is actually the winner. We can talk in much more detail about how long you need to run these things — are there seasonal variations, how does hot weather influence whether people purchase something — but this is the most critical aspect: the amount of data. Yes, please? So the question was whether I have run this test again with variation one as the control and the control as variation one. No, I have not. Are you hinting at two-tailed versus one-tailed testing? No? Then no, I have not done that. But if you were to do that, and the amount of data coming in is right, the result should not deviate much when it comes to the confidence interval. And that's the critical bit — and now I'm using a lot of statistical terms, sorry about that. What I mean to say is: you want to be certain that the version you pick is actually better. You can never know 100%; that's the tricky part about statistics.
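Checking significance yourself instead of trusting the dashboard is a short calculation. A sketch of the usual two-proportion z-test: the 127 visitors and ~8.66% control rate come from the example on the slide, while the variation's 6 conversions out of 130 are made-up numbers purely for illustration:

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference between two conversion rates
    (pooled z-test, normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Two days in: control 11/127 (~8.66%), variation 6/130 (hypothetical).
p_value = two_proportion_p_value(11, 127, 6, 130)
significant = p_value < 0.05   # far too little data to call a winner yet
```

With a couple hundred visitors per arm the p-value stays well above 0.05 even for a sizable observed gap — exactly the "don't make that call just yet" situation.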
You can never know 100%, but there's this magical number of 95% certainty used throughout the academic community: if I'm 95% sure this is going to be better, it's good enough for me. Because I can tell you, if you wanted to be 99% sure, you would need much, much more data than this, and at some point it just isn't feasible anymore — or not even infeasible, just impractical. Okay, but what should you be careful about? Don't let software influence your testing. Are there people in the room that use Google AdWords? Show of hands? Yeah? Or Facebook ads, any type of online ads? What you can do with those systems is say: okay, I have five different versions, and you, beautiful algorithm that I pay for, you decide which one works best and serve that one more frequently. The problem with those systems is that they also rely on the data that comes in, and you can bias them very quickly, very early on. Say you have four versions, and the first version just happened, by pure chance, to outperform the others slightly. Then Google or Facebook starts showing that one to more people, and it becomes a self-fulfilling prophecy. So you have to be a bit careful there. The best thing you can do is say: no, please serve all of these evenly, and then make the decision afterwards yourself. Because only then do you know for sure. That's what I meant by: don't let software influence your testing. Stopping tests too early, as you can see, can be dangerous. An increase of 4 percentage points is massive — sorry, it's not a 4% increase; 4 points on top is something like a 70% relative increase. That's huge. That can be a lot of extra money, a lot of return on investment, for advertising spend you are not increasing. Next one: try not to run tests longer than four weeks. Why is that? Well, on average, cookies clear every four weeks.
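The self-fulfilling-prophecy effect is easy to reproduce in a toy simulation. This is not any ad platform's actual algorithm — just a naive "always feed the current leader" policy over four variants that are, by construction, genuinely identical:

```python
import random

random.seed(7)
TRUE_RATE = 0.05                 # all four variants convert identically
counts = [0] * 4                 # visitors served per variant
convs = [0] * 4                  # conversions per variant

def serve(arm):
    counts[arm] += 1
    if random.random() < TRUE_RATE:
        convs[arm] += 1

for arm in range(4):             # short warm-up: 25 visitors each
    for _ in range(25):
        serve(arm)

for _ in range(10_000):          # greedy: traffic always goes to the leader
    rates = [convs[a] / counts[a] for a in range(4)]
    serve(rates.index(max(rates)))

# Despite identical true rates, an early-lucky variant hoards the traffic.
shares = [c / sum(counts) for c in counts]
```

Serving all variants evenly and deciding afterwards, as the talk recommends, is exactly what breaks this feedback loop.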
Now, I don't know if that statistic is still true, or whether cookie lifetimes have increased since — it came out about two years ago, and it's also just an average. But how does that influence your test? Well, when a visitor sees one version or another, we check whether that person has already participated in the experiment, and the only way to check that is through cookies. If the cookies are cleared, all of a sudden you get a person who is participating in your experiment again while looking like they haven't — and that corrupts your data a little. Plus, do you really want to wait longer than four weeks to get your answer? Yes? Not anymore? Okay, then honestly, I don't know. Also, when you use Google, it's not really an A/B testing platform, so you do it sequentially anyway — or at least you have to. If you use other platforms — I keep forgetting their names; there are some weird names — I don't know how it goes for them. But I would be surprised if GDPR and everything made this easier for the platforms rather than harder. So this is something to keep in mind. I'm not saying don't run longer tests, but be aware that maybe, I don't know, 10% of the people it says you have were already part of the experiment. All right. But what do we hear constantly about A/B testing? "Change the button from red to green and you make money." That's not really what it's about. The whole idea of A/B testing is that you're supposed to understand your customer better — your website visitor, the one who reads your newsletter, the one you want to sign up for your newsletter. That's what it's all about. Call it conversion or persuasion, whatever you want. But in essence — and that's what I really liked about Bridget's talk —
you're not supposed to try some sleazy trick here. You're supposed to try to understand better, with some data in the background, who these people are that you're actually talking to. That's what it's supposed to be about: optimizing communication. That's at least how I like to think about it. So don't think that changing your button to a different color is going to change the world. You're supposed to go after the why — what are the motivators? Now it's going to get a little more entertaining: I have a video for you, from Brain Games. I hope this is going to come with some volume. Please pay close attention. Ah, nobody can hear it? Right, you just have to watch; the sound is not too important. Okay — now we hear nothing, but you just have to watch, in essence. So, sorry about the interruption. Did you notice what happened? Yes? I heard one: he became rich. The head changed. Everything changed. Right. But what had changed? If I just ask you what has changed — the head? You know something changed. Sure — that's already step one, right? That's, whatever it was called, System 1 and System 2. Even when I tell you about it in advance, which I did, this is all the stuff that did change. You know, the first time I saw this, I was very proud of myself because I also noticed the head and the handkerchief. I thought: I got it, no fooling me. And then I saw the table and the chair and I was like: damn it. Those I didn't even see, because you constantly look right there — and still. Okay, why am I showing you this? This is an effect called change blindness. You just can't perceive it all at once. It seems to us that we do, but that's not what's happening. We constantly have to decide what it is we're willing to perceive —
the information we're willing to take in. Only the things we really fixate on are the things we consciously perceive. We can have lots of scientific debates about that later, but stick with me for now. When it comes to eye tracking — basically measuring what people see — we do that because we only see about one degree of our visual field accurately. That's a biological property, and it's why we keep moving our eyes constantly. Just guess: how often do you think we move our eyes per day? Just a number. Twenty million times? Oh man. Fifty thousand? Well, the truth is somewhere in between, isn't it? It's about 150,000 times per day — and that's just a rough number. So the whole idea of eye tracking is that you actually see what people see, so you can understand not only what they see but, most importantly, what they don't see. Because what they don't see, they're not going to act on. It can start as simple as that. So when I started with this whole thing, I thought: okay, cool, eye tracking can tell us a lot. And then I heard about mouse tracking, which has been labeled the poor man's eye tracker, and I thought: awesome, if this thing can tell me just as much about the why, I'll just do that. But when I started to look into it, I saw experiments with very different numbers: one said there's a 20% correlation between eye and mouse, another said 80%. And when I looked into the original experiments, there were some problems. Just to give you an idea — say this is your eye position here, and this is your mouse position. I call this game "guess the correlation". When they both move like this, what's the correlation? 100%. That's right. And what do you think the correlation is when they both move like this? 100%, exactly.
But if it moves like this and it's 100%, or like this and it's also 100% — if the actual correlation were 80%, I would much prefer it to be this scenario rather than this one. But which is it? Correlation doesn't tell me the whole truth. So we looked into this. When it comes to overlap, I have an example here for you: this is the mouse position, and this is where people look. There's no sound here, so that's good — just have a look and tell me what you think. Sorry, say it again? Taking an action — exactly. The gentleman in the audience just said this is scanning a page versus taking an action. And that's what I really wanted to point out: the process of working with your eyes is very different from working with a mouse. One is about perception, the other is about deciding what to act upon from what came in — and there's the black box of the brain in between. But when it came to the correlation, lo and behold, there was one: 22%. And I thought: how is that? Well, okay, data is data; it's 22%. But then we asked: shouldn't it mean, if they were actually nicely related, that when my eye moves, the mouse moves too? That would be perfect — even if they were far apart, they'd still move together. That was not true. So what does that mean? It means we only get this positional correlation because of web page design. And that's it. For me this was a little devastating, because it meant: damn you, mouse tracking — it was misleading after all. So what's the distance between the two? That's the crucial bit as well: how often is the cursor within the field of eye fixation? Because that field is one degree — a certain pixel area you always see sharply. So this is the entire distribution of distances across all positions. The largest distance you can have is 32 centimeters, simply because that was the screen size.
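The "guess the correlation" point can be made concrete: a constant offset leaves the Pearson correlation at a perfect 1.0 no matter how far apart the two signals are. The positions below are invented numbers purely for illustration:

```python
import math

# Hypothetical horizontal positions (cm) over ten time steps:
eye = [2, 5, 9, 14, 20, 27, 31, 28, 22, 15]
mouse = [x - 12 for x in eye]    # cursor trails the gaze by a fixed 12 cm

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(eye, mouse)          # 1.0: "perfectly correlated"
mean_gap = sum(abs(e - m) for e, m in zip(eye, mouse)) / len(eye)  # yet 12 cm apart
```

So a high correlation is compatible with the cursor never once being where the eye is — which is why the distance distribution matters more than the correlation coefficient.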
The red line here is one degree, and the green line is if you double that — let's not be too stingy, let's make it two degrees. And it is still very little: only 3.5% of the time is the mouse actually in the field of view during a fixation, or 10% if we're not being too stingy. So that means correlation doesn't mean overlap. What about exploration — just looking around? We get this, and we get this. So what do you think, which one is which? Let's start with the lower one: do you think the cursor is the lower one or the upper one? The eye tracking is the upper one and the mouse tracking is the lower one. We wanted to check this, because it is possible that you scan the page and the mouse just goes there later — they don't have to be there at the same time. So we looked at it: the eyes explored 35% of the screen in this scenario; the mouse, not so much — only 0.36%. I've just been told to speed up, so I will. Unfortunately, that means there's only a 2.3% overlap — even if you don't require them to be there at the same time, and instead accumulate everything over time and lay the maps on top of each other, it's still only a 2.3% overlap. Which is not much. So can you just add more people to make up for it? That's the nice thing about mouse tracking, right — just add more people, more data. No — simply because of how information works. This is called an entropy analysis: it analyzes how much information there actually is in data and puts a number on it. That's all it does. And it saturates with each participant. With one participant you get a lot of information compared to zero. With a second participant you again gain a lot compared to one. With 99 participants — data, yes; new information, no. That's the crucial bit: any type of information starts to saturate.
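The saturation effect — each extra participant adds data but less and less new information — can be sketched with a toy coverage model rather than a full entropy analysis. The grid size, fixation count, and uniform sampling are all invented assumptions for illustration:

```python
import random

random.seed(42)
GRID_CELLS = 100        # screen divided into a 10 x 10 grid
FIXATIONS = 60          # fixations recorded per participant

seen = set()            # grid cells covered by anyone so far
gains = []              # new cells contributed by each extra participant
for _ in range(20):
    cells = {random.randrange(GRID_CELLS) for _ in range(FIXATIONS)}
    gains.append(len(cells - seen))
    seen |= cells

# gains falls off quickly: the first participant contributes dozens of
# new cells, the twentieth almost none -- more data, hardly more information.
```

The same diminishing-returns shape is what the entropy analysis in the talk quantifies, and why piling on mouse-tracking participants never closes the gap to eye tracking.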
But when it comes to mouse tracking, it doesn't matter how many more participants you add — you won't get to the eye tracking level. That's the crucial bit. Okay, the take-home from this: eye tracking provides roughly 8 times more information than mouse tracking. But is mouse tracking useless? I would argue no, not at all. Just remember what it is, what it can do, and especially what it cannot replace. If you go home with anything, that should be the take-home message. What can be genuinely useful is click tracking: if you set that up nicely with Google Analytics, you'll know which link led to which page. Okay, so what else is there? This is what we did, in line with the earlier talk — I have a very similar background, I also studied cognitive neuroscience — and, as he said, 90% of purchase behavior is driven subconsciously. What we think is nice is that if you measure subconscious effectiveness to select the best design, you can also learn what doesn't work in the design itself — not just that it doesn't work, but why. We do that by measuring how people feel, because emotions are super important and predictive of what people are actually going to do. We also measure heart rate, because sometimes when you watch a comedy, you're not going to sit there stone-faced for two hours — you just don't — and when you're scared, you don't sit there relaxed for two hours either. Your heart tells how you feel. And maybe I should have said this before: we measure all of these things through the webcam. With permission — that's very important. With permission. To give you an example, we did this for a webshop. I can't tell you who it was, because nobody likes to admit they had a crappy webshop. They had a conversion rate of 0.5%, and of course they wanted to improve their conversion rate and their customer journey.
We collected and analyzed data from just 30 people in two days, and we ran two iterations. That's all we did. The landing page — and this is important: always control your baseline — the landing page was bad. It caused emotional contempt; you don't want that. On the product page, the call to action — add to shopping cart — was not seen. And the checkout page was good, if you made it there; but we know not many people made it there. With just those two iterations we improved the visual overview, got rid of the contempt, increased the excitement, and bam: a jump of 10% — and I don't mean a relative 10%, I mean 10 percentage points on top. So the final result was a conversion rate of 10.5%, which of course they were very happy with, and so was I. This is the jump they made. And just so you know, this is the average conversion rate — something I also found online — and as you can see, at 5% you're not too badly off; but if you want to get up here, A/B testing — let's say standard A/B testing — only gets you so far. Okay, I could talk about this for many more hours, but time is short. I'd like to thank you for your attention. If you'd like to read more, you can come visit us. The blog is not online yet — let's say you can't find it in the menu, but it does exist, which is why I put up the address. Do you have any questions? Thank you. You're welcome. Sorry about all these numbers in a very short time. Well, I think the engagement was good. I already see one question, and we have to be quick because closing remarks are in 5 minutes — so short questions. Okay, one question then. So the question was: how many of the things we applied could also have been achieved with best practices? It's not that I have something against best practices per se.
I mean, there are specific basic rules you can use for a visual layout that make sense. But the problem I have with best practices is this: if best practices worked for everyone, everyone would be rich, and that is not the case. So it's not a one-size-fits-all solution. I always say: if there are things obviously wrong with your site and I can already see them, I'll tell you. But a lot of people don't just want opinions, they want data — and that was a bit hard for me to accept at first, but that's okay. So thank you for this. You're welcome. Will you be around tomorrow, if people want to pick your brain? I will do my best. Maybe in the morning I won't be able to make it, but in the afternoon I'll try my best to be around. Otherwise, you can always drop me a line, you can always reach out. Sorry? Well, maybe not — I like to sleep in airplane mode. But you can always call. Actually, maybe that would be better: just call. Thanks again. Thank you.