I tend to walk a lot. I tend to wave my hands about, see if I can actually remember to breathe and talk slowly this time. So my name is Marcus. I'm an engineering manager at RetailMeNot in the United States. And I'm here today because I believe that this group, this population, has generally been left out of a very important conversation in software development. Let me just do a quick show of hands. How many of you know what analytics are? That is about four times as many hands as the first time I did this speech 18 months ago. That is a good sign. How many of you are involved in the analytics at your company? That's about 1,000 times more hands than I saw when I first did this talk 18 months ago, because it was literally zero. So it's a good sign. It's a growing sign. For those of you who raised your hands, I'll be telling you things you already know, maybe a few things you don't. For those of you who didn't raise your hands, it's not too mind-blowing. I just want to let you in on the secret that everyone's been keeping from us, us being QA folk.

So first, I'm going to tell you what I'm going to tell you. That is: what analytics is and how it relates to A/B testing, which is, of course, a big topic of conversation. I'll explain why it matters, why you care, and then give you some examples. And then right at the very end, I'll give you the bonus of actually telling you why it's relevant to Selenium.

So I'm the engineering manager for the CRM team at RetailMeNot. If you have ever used one of our properties, which, I don't know, has anyone heard of RetailMeNot? It's predominantly a United States company. We are trying to expand into other regions, other countries. If you have ever been there, if you ever go there, and if you receive an email or a push notification, you're welcome. It's my team that worked on it.

But before that, I was the test automation architect for RetailMeNot. By my last count, we ran over 90 million tests between 2012 and 2014. It's a fairly low number for a company like Salesforce; for a company of 200 strong like RetailMeNot, it's a pretty decent number. I once calculated that you would have had to be testing since the 19th century to run manually all the tests that we ran in an automated fashion. So I'm kind of proud. I've actually been using WebDriver since literally the day after Simon announced it at GTAC in 2007. So I've been a fan for a really long time. And I'd like to think that at least at one point, I knew what I was doing with it. It's been a while.

Otherwise, I'm a musician. I play, that's me. I'm the sexy one. Playing at a charity concert a couple of weeks ago in Austin, Texas, quite proud of that. That is my oldest son. He plays bass like his old man. He's being clubbed by his drummer, which I think is a fairly commonplace thing. It's interesting, I think, my choice of playing bass guitar, because to me it's a little bit like being in quality assurance. You don't notice them if they're doing their job. You don't notice them until they're not there. And you really notice the hell out of them when they make a mistake. I don't know why I'm drawn to it. Both professions.

So this is how analytics got started. Probably somebody at a cash register in 1858 saying, hey, this guy just bought bread. That's analytics. That's the first time somebody was describing what the user did and trying to track it and record it somehow. A guy shouting to someone in the back room saying, hey, someone bought bread.
So primarily the target audience for this talk is B2C, business to, I guess, consumer. Is that right? So banking sites, shopping sites, that kind of thing. Certainly you can learn from this no matter what, but I came from a world where we sold software to other software companies before I came to RetailMeNot, and in that world you generally don't have too much of the big data analytics stuff. So that's essentially what we're targeting here with this talk. Hopefully it will be relevant to everyone else also. But the key thing is that it's starting to apply to all kinds of platforms. It used to be just web. Now it's apps, it's mobile web, responsive sites, APIs, anything you wanna do.

And you can use it, believe it or not, in ways that are not creepy. Usually people suspect, when they learn that you're gathering intel on user behavior, that you're trying to do all sorts of nefarious things. I believed that for a very long time, until somebody in marketing explained to me that what we're trying to do is not show you crap you don't wanna see, and show you the good things that you do wanna see. That is the whole point of analytics: you wanna have a conversation or you wouldn't be on my site, and I wanna have a conversation so I can make money. Obviously that's the thing we need to do, and I wanna make sure that you are so far down the funnel that when you come to my site, you only see the things that you wanna see. That's what the whole thing is about. This data, these petabytes of data that we're gathering, are trying to tell us stories. And this presentation is about how to make sure you're gathering the right story.

This is what analytics looks like at a micro level, at the level you care about. There's a lot of data packed in there, and those ellipses at the end mean it isn't even complete; I don't think that's even half of the entire URL. You go to any website, there's a decent chance that this stuff will happen. How many of you have ever tested that URL? Okay, that is eight more people than last time I gave this talk. I thought I was the only one.

I'm getting some sort of notification, I think, that this clicker stopped working. The clicker stopped working and now I don't know what happened. You wanna pull some analytics to find out what behavior I displayed? Yeah, seriously, no idea what's going on. I'm trying not to watch while he does this.

All right, so these are various companies that offer analytics, more or less, who cares, but these are people you've probably heard of. This slide had some words on it before. At the top of the funnel is the number of hits you've got. So if someone comes to your website, they've hit it. You may or may not care. Second down the funnel, I really don't know where the words went, is sessions. Somebody's on your site, they display some behavior, they click on a thing or two, and you start to wonder if there's a pattern. And then at the bottom of the funnel is where you really start to care, which is where users start to interact. And that's really what we're trying to figure out: when does a random piece of traffic become a human being? When does it become a user that I care about?

I was at a conference a little while back dealing with my actual job, the job of sending out emails and building relationships with customers. And somebody saw that I was from RetailMeNot. They said, hey, I enjoy going to RetailMeNot, but I don't feel like I'm your customer. This was a long time ago.
And it just sort of hit me right in the face: how do we build relationships with people? How do we make them feel like customers? And the thing is that every time you go to my site, no matter what you've done before, you're still seeing the same stuff. And so we started to ask this question: how can we change that? Well, I don't care about Ugg boots. Marcus Merrill does not care about Ugg boots. I don't care about pencil skirts, I don't care about a lot of different things. I care about, I don't know, televisions. I care about Arduinos, little things like that. When I go to RetailMeNot, I want to see stuff like that. And that's the kind of story we're trying to gather here.

I'm really starting to think maybe my slides were not meant to be. See, if I can just use the slides as a reference for myself, I can actually do most of the talking without imagery. Believe it or not, this is not the worst AV problem I've had in one of these things.

There was going to be karaoke and I was going to win. I was going to sing Mack the Knife by Bobby Darin and I was going to win that karaoke contest. At about four o'clock in the afternoon, they came and said, there is a problem with the site. What happened was, we had shipped an algorithm change earlier in the day that changed the order in which offers are presented to you when you go to a store page. Normally we take our top-performing offers, we take the offers that are being paid for to be displayed as an advertisement; we have all these factors that go in to determine what order the offers are supposed to be in. They thought it was a good idea, they shipped it, it went out to production, and within about eight hours they said we lost, I don't know, $150,000. Why? Well, there's an image here somewhere. I don't know if it's on there. Yeah, right there. The Target offer, $5 off $50. That's the top offer, always a top performer, really, really good offer. It suddenly started appearing 15th on the page instead of first, and it was not just that one offer on that one page, it was everywhere across the whole site. Why did they do it? Because nobody said, let's think about this, let's study it, let's use data to tell us whether this is a good idea. They just pushed a button, they shipped it. I didn't get to go to the karaoke contest. We had to stay there until midnight and fix this thing.

So, my personal tragedy aside, this is sort of illustrating the importance of A/B testing. Who is familiar with A/B testing? I'm expecting to see a decent number. A little bit less than I thought. I've always thought A/B testing and analytics go hand in hand. The fact that they don't is probably a good reason I'm talking about it. A/B testing is a way of saying, I want 90% of the people that are coming to my site today to see the thing they always see, and I'm gonna make 10% of the people come in and see this other thing, a variant. Because I wanna see a side-by-side comparison: what works, what doesn't. That way, I don't have to risk losing $150,000 in an eight-hour period. I could lose $15,000 in an eight-hour period and switch it off, nobody's the wiser.

We had a 404 page once, a long time ago, that just said 404 page not found. Typical thing, sorry. Somebody had the bright idea: hey, that's a big, wide page that says 404 page not found, and then nothing. What if we put offers on that page? We might make money. But we also might piss people off. That was the question.
We didn't know whether or not it would drive down engagement while it drove up traffic and revenue. We knew it was gonna drive up revenue; if there are links there, people are going to click them. But does it also drive bad behavior? So we put offers on the page. A few days later, we saw these little things. The lines are moving kind of up, generally up. The thing we figured out, though, was that at no point was the red line below the blue line. And this is kind of a strange graphic, but it's about offer ordering and stuff like that. Anyway, I don't wanna get into it. Essentially it told us that on every metric possible, engagement was a net lift. It did not decline. So we promoted that; it's the 404 page now. If you fat-finger a URL on retailmenot.com, you see offers and we make money. Which is nice.

Another thing that happened, and this is just six weeks ago. This is me, 20 years in the QA profession, two years as the engineering manager on the CRM team. I should know a thing or two about how this stuff works. But we shipped an A/B test where, in the B variant, there was a field that had not previously been nullable. Somehow we thought nullable was a good idea there. I don't know how we introduced that into the equation. And for the entire two days that we ran this test, the B slice, which got 10% of the emails, had nulls in the offer value somehow getting persisted to our analytics database, which means that for that entire two-day period, we don't know what people clicked on. We have no idea. The data was gone forever. The fact is that that happened in an email program because we haven't got our automation story set yet in email. On the desktop, this would not have happened, because we've had our automation story set for a while. I'll talk more about that later. But the main thing is that what we used to have to go through is a period called the Inquisition, where if you designed an A/B test, the B variant had to go through a set of tests, and you'd sit in a room with a vice president, two directors, several team leads, several QA people, and you would run down a checklist of did you test for this, did you test for this? Basically it's regression testing by committee, including "do the analytics work?" We haven't been able to set this stuff up for email yet, so we still make mistakes on that occasionally.

So this is an example of how you can use analytics to tell more of the stories that you want. So, shopping cart: frequently, when you bought this, we're gonna suggest that you buy that other thing. We've recently started personalizing our emails, so that if you have shown an affinity towards shoes, we're gonna start sending you shoe offers. If you start showing an affinity towards Walmart, we're gonna send you Walmart shoe offers if we get them, the convergence of those different things. And this is all stuff that we were able to gather via analytics.

Finally, I'm gonna get to the part where we fit in. Normally the way this works, in my experience, and I would love it if anyone told me after the talk whether their experience has been better (it probably hasn't been worse), is that marketing will go to the business analytics team and they'll say, we wanna figure out our ROI, we wanna figure out how the ad placements we're doing are performing. We wanna add some strings, some tokens, some way to measure the clicks and the engagement on this thing. So the BA goes to the product manager. The product manager goes to the dev team lead, and they create the user stories.
QA is probably involved at the planning meeting the first time these things come up. The truth is they probably start to glaze over when people start talking about this kind of thing. Maybe not, maybe I should give you more credit. I glazed over, I'll tell you that much. Typically in my company, at least at the beginning, not anymore, this assignment will be given to the newest person on the team. How do you onboard a developer? You give them the analytics stuff. Partially because it's boring and the more experienced developers don't wanna do it, and partially because it is a pretty good way of learning how your business works and how all the code fits together and everything. And generally it gets tested, if at all, by the person who wrote it.

So back to the interactive portion. Raise your hands if you've been given an analytics story to be tested along with a new feature. Okay, that's like five people. Did you know that this is going on in general? Is it something that you believe you could be, or have been, involved with? That's the main thing: let's start talking about this. I don't wanna spoil the last slide, actually it might not work anyway. This is my CEO's very favorite topic. You catch the CEO in the hallway and you start talking about analytics, he is going to listen to anything you have to say.

And the real problem is that, like I explained earlier, sometimes we mess things up in the analytics. Sometimes we make gigantic decisions based on something that's been firing incorrectly for four months and nobody knew about it. This is the kind of thing that happens. I'm not gonna say it happens all the time, but when it happens it's like the bass player, man. You know something is wrong. And the real problem is that for any given feature, for any given page on RetailMeNot, I'm betting we're gonna send 200 separate events based on your behavior. What did you do? What are you doing now? Can we predict what you will do? That kind of thing.

So my pitch today is: let's not get left behind in the big data conversation, in the analytics conversation. Let's make sure this stuff works functionally, and let's get time on the calendar to work on it. That's really the main thing. Once again, all the stuff I'm saying is based on my experience over the last two years working at my company and talking to people who are involved, at least marginally, in the same conversations. If I'm wrong about this stuff, I wanna know about it, but I have yet to be told I'm wrong about any of it in terms of QA's involvement.

Go to any website. You're gonna find these beacons being fired off everywhere. It's interesting to go in there and try to decode what's going on. Normally, if they're smart, they will actually start using UIDs, so they'll have key-value pairs in their URLs that'll say B21 equals 74582, which really just means he typed his name or something like that. It's just UIDs, so you really can't decode these things like you used to be able to. But it's really interesting. You can at least tell when they're going on. You can sort of see what they're doing. Sometimes it comes in the form of an invisible pixel that'll get planted in the top left corner of a page. Nobody would ever see it, but you can see it via a proxy or by examining your network traffic. But yeah, at RetailMeNot, we fire like three of them. We have an internal tool, we have Google Analytics. We used to have a tool called Omniture. We got rid of it, started doing our own stuff.
I could do an entire session on how we structured the architecture behind how we gather the analytics. It's not Selenium related, so I didn't want to go into that stuff. Here's a really interesting example of something that happened. This is actually what introduced me to the topic of analytics. So if you look at that page, imagine that the top offer shown is the 17th offer on the page. The second offer shown is the 18th offer on the page. So there's more up above, I just didn't include it. Then there's a break: unpopular coupons. We leave them on there because people still click on them. Sometimes they still work, sometimes they don't. They're less popular. We figured out that we lose a lot of money when we don't include them. We figured that out via an A/B test. We tried to remove them once, we lost a lot of money, we put them back on, right? So we have a page break, and then we send B21, which is trying to tell you the position of the offer that you clicked on. What's the offer you clicked on? If you click on that first offer under the break, we say you clicked on offer zero, which means it was the one up at the top. That's a bug, isn't it? That's the kind of thing that can get that offer promoted to the top of the page. Hey, everyone's clicking on this offer. They're clicking on the top offer on the page. We should put it at the top. That bug right there is the genesis of this entire talk, because I figured out that QA has a role to play and is being left out. And we automated the ability to test this kind of thing through every single store page algorithm change that we make.

So let's talk about that. Manual testing of analytics sucks. I cannot emphasize that enough. You open up the network traffic analyzer in Chrome. You have to find the URL of the thing that you clicked on. You have to figure out which one it was. You have to pretty-print it with something. You break up the request on the ampersand delimiters. You look at the key-value pairs. It is awful. And we still have to do it to some extent. Then you get to the point where you have to say, well, sometimes stuff works differently on Chrome than Firefox. Sometimes it works differently on Safari. Sometimes it works differently on mobile Safari or the mobile app or any of those things. So multiply all the different ways you need to test these things, and then usually you just want to bury your head in a pillow.

So we use a proxy. How many of you use BrowserMob Proxy or any other proxy to do your kind of testing? All right, so hopefully after this talk you'll go back to your place of work and you'll chase down the person who knows a thing or two about analytics and you'll say, how can I help you? Where have your biggest problems been with data loss in analytics? And then you'll go and you'll find Dave Haeffner's notes on how to use proxies in testing with Selenium.

What we do here is the first order of testing; I'll tell you about the end-to-end process we've implemented later. And how am I doing on time? I'm all right. So what we will do is, I had a code sample ready, but I don't want to touch this thing. I'm not at all comfortable messing with the computer right now. So we have a line of code; we do Java. It says, basically, open up the proxy for reading. Then we perform our Selenium-based actions: open the browser, click on the page, click on an offer, on the offer title. Stop recording, and find in the proxy readout where the request was made that we're looking for.
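Since the code sample didn't make it onto the screen, here is a minimal sketch of the flow being described, using BrowserMob Proxy with Selenium WebDriver in Java. The store URL, CSS selector, beacon host, and the b21 parameter name below are hypothetical stand-ins, not RetailMeNot's actual values:

```java
import java.net.URLDecoder;
import java.util.HashMap;
import java.util.Map;

import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;
import net.lightbody.bmp.core.har.HarEntry;
import org.openqa.selenium.By;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.CapabilityType;

public class OfferClickAnalyticsTest {

    public static void main(String[] args) throws Exception {
        // Open up the proxy for reading and point the browser at it.
        BrowserMobProxyServer proxy = new BrowserMobProxyServer();
        proxy.start(0);
        Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);
        ChromeOptions options = new ChromeOptions();
        options.setCapability(CapabilityType.PROXY, seleniumProxy);
        WebDriver driver = new ChromeDriver(options);

        try {
            // Start recording traffic, then perform the normal Selenium actions.
            proxy.newHar("store-page-offer-click");
            driver.get("https://www.example.com/view/somestore");         // hypothetical store page
            driver.findElements(By.cssSelector(".offer .offer-title"))    // hypothetical selector
                  .get(16)                                                // the 17th offer, counting from zero
                  .click();

            // Stop recording and find the analytics beacon in the proxy readout.
            String beacon = proxy.getHar().getLog().getEntries().stream()
                    .map(HarEntry::getRequest)
                    .map(r -> r.getUrl())
                    .filter(url -> url.contains("analytics.example.com/beacon"))  // hypothetical beacon host
                    .findFirst()
                    .orElseThrow(() -> new AssertionError("No analytics beacon was fired"));

            // Break the query string into key-value pairs and assert on the one we care about.
            Map<String, String> params = queryParams(beacon);
            String position = params.get("b21");   // hypothetical "offer position" parameter
            if (!"16".equals(position)) {
                throw new AssertionError("Expected offer position 16 but beacon said " + position);
            }
        } finally {
            driver.quit();
            proxy.stop();
        }
    }

    // Split the URL on '?', '&' and '=' into a map of decoded key-value pairs.
    private static Map<String, String> queryParams(String url) throws Exception {
        Map<String, String> params = new HashMap<>();
        String query = url.contains("?") ? url.substring(url.indexOf('?') + 1) : "";
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(URLDecoder.decode(kv[0], "UTF-8"),
                       kv.length > 1 ? URLDecoder.decode(kv[1], "UTF-8") : "");
        }
        return params;
    }
}
```

Only the HAR capture and the assertions at the end are new here; everything else is an ordinary Selenium test, which is why this check adds so little runtime.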
And then we use Java libraries to break up the URL into key-value pairs in a map, find the key that we're looking for, and make sure that the value is what we predicted it would be. If we clicked on the 17th offer on the page, make sure it says, well, 16 if you're counting from zero. So then you verify that. And when we do this, we're able to test all of those analytics just by performing a single page load, click, and analysis, and asserting on all of those different values. I've got a test with 30 different assertions, and it was sort of the first one we did. We figured out there's probably a way to do it a little bit more intelligently than just manually enumerating all the assertions, but anyway, this was early iterations. The point is: open a proxy, do the action, close the proxy, and analyze. It doesn't take much longer at all than running a normal Selenium test. And chances are, you could interlace this with the Selenium tests that you already have running, because all you're doing is adding this extra step. You could probably turn it into an annotation or some other event that happens, and say, while you're in there doing the thing you're already doing, start putting all these requests into buckets, break them up, and then see if all of your analytics appeared as you expected. Once again, the text is not showing up. This circle says what I just said.

So here's the next piece that we did, that I really would like to get released into open source, but I guess I've just been lazy, I don't know. Maybe we should do this as part of a hackathon. It's a very simple system. We have a system that we call Penumbra which, if you don't know, is the term for when you're looking at an eclipse of the sun and you see this corona around it; I believe that's called the penumbra. It's everything outside of, but just in the area of, the thing you're interested in. It's a method that we use for black box testing analytics. So what we'll do is we will run our normal Selenium tests. We will compose a blob of JSON that'll say: here's the event name, here's the analytics thing, here's what we expect. And then we'll say, within 30 minutes, otherwise alert merrill@rmn.com. And essentially what happens is we will perform an event and then we'll send that blob of JSON, the expected value, to a web service, an endpoint, that says you should be expecting this JSON blob within 30 minutes. If you don't see it, send a page to merrill@rmn.com. It's just that simple.

The next phase along the process is that the data goes through a series of munging. In our particular case, a web server receives it in the form of something we call the analytics redirector, which does exactly what it sounds like it does. It gathers the analytics, then redirects it to somewhere else. That then throws it to an Amazon Kinesis stream, which then gathers everything up into some other bucket. There's some munging that happens along the way with S3, and this is at massive scale. And then it gets deposited into a final resting place we call the data lake, which you're able to query like a data store. I believe it is Mongo. I could be wrong. I'm sorry, Zach, if I'm wrong. So we're then able to query that. We then have an event listener at the end of this pipeline which is looking for the event that was thrown up earlier to the notification service, and it will try to do matchmaking on all the events that you tagged as Penumbra events.
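Penumbra itself hasn't been open-sourced, so the following is only a rough sketch of what registering one of those expectations might look like from a test. The endpoint path, JSON field names, and alert address are all assumptions made for illustration:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PenumbraExpectation {

    /**
     * Tell a (hypothetical) notification service which analytics event the test is about
     * to trigger, what values it should carry, and how long to wait before paging someone.
     */
    public static void expectEvent(String eventName, String expectedParams,
                                   int slaMinutes, String alertEmail) throws Exception {
        String json = "{"
                + "\"event\":\"" + eventName + "\","
                + "\"expected\":" + expectedParams + ","
                + "\"slaMinutes\":" + slaMinutes + ","
                + "\"alert\":\"" + alertEmail + "\""
                + "}";

        URL url = new URL("https://penumbra.example.com/expectations");  // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        if (conn.getResponseCode() >= 300) {
            throw new IllegalStateException("Failed to register expectation: " + conn.getResponseCode());
        }
    }

    public static void main(String[] args) throws Exception {
        // Register the expectation, then perform the Selenium action that should fire the event.
        expectEvent("store-page-offer-click",
                    "{\"b21\":\"16\",\"store\":\"somestore\"}",   // hypothetical beacon parameters
                    30,
                    "qa-alerts@example.com");
        // ... drive the browser here; if a matching event never reaches the data lake
        // within 30 minutes, the listener on the far end of the pipeline sends the alert.
    }
}
```

The design point from the talk is that the check is asynchronous: the test only registers what should eventually arrive and moves on, and the listener sitting at the end of the real production pipeline does the matchmaking and raises the alert if nothing shows up within the SLA.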
It's almost like a dye trace when you're getting an X-ray. You drink that horrible fluid, it goes through your system, and you're able to use the X-ray to illuminate all the various areas that you're trying to illuminate. And so these are events that you tag going in, and then you have an SLA that says within 15 minutes, or within 30 minutes, this piece of data should show up on the backend system. That way you know everything along the way is working fine. If it didn't work, you get an alert that says: I clicked on this offer at this point, what happened? And so you can start to investigate all the logs along the way.

And this is a system that we were able to put in place a while back because, and I'm sure you can all identify with this, the combinatorial explosion that we experienced when trying to figure out all the different ways that our analytics were being thrown meant that we'd have to have the best machine learning in the world in order to be able to analyze all of this stuff. So we started to do sampling instead. We said, this time we're gonna try Chrome, offer title, coupon offer versus code offer, all these different combinations, and we would start to throw these events at the Penumbra system. And then we would see them out on the other end. I have a dashboard right there, which is what you would get in the end. You'd say, these are all the different ways, these are all the different events that we're throwing. The green means they worked fine, the yellow means they took a little bit longer than they should have, and the red, very few of them, are the ones that actually did not show up. If you notice here, no, not there, here: that was a day that the system died. Not the Penumbra system, the underlying system. This is an in-production, end-to-end test, and that's the day that we had an outage. That is also the day they told me to start throwing slightly less volume at the system, if you'll notice the volume drops after that.

So once again, this is a little bit of a hodgepodge way of saying: you want to get involved in analytics. Your CEO wants you to get involved in analytics. I mean, it's hard to explain in words. I have so many stories of people who would come up after this system was put in place and they're like, wow, we have a QA department? I didn't know. And so it gets you on the map, but it also helps you help your business with the million dollar decisions they're making with A/B testing. And there's this shadow, parallel software development thing around analytics that happens where we're kind of left out of the conversation. You're worried about whether the button is the right shade of green or whether it takes you to a 404. You're worried about whether or not, if somebody enters their name incorrectly, it's gonna be validated properly. This is the stuff that we should be testing right here. If you're in B2C at all, this is where your company can make or break a lot of stuff in a lot of ways. And it's just as buggy as any other kind of software. There's so many moving parts.

So anyway, that is most of what I got. I think we've got a little time for Q&A. Any questions? Anyone? I wanna hear some stories too. Anand? I believe so. Yeah, it should. BrowserMob has, yeah. I mean, you're doing this in your own firewall, browser, environment, area. It should all, yeah, yeah, it should be. So, yeah. Sorry, the test cell, how long does your A/B test usually go? Oh, we do A/B testing. We used to do it for a week. Now we do it for, well, it depends on what kind of test it is.
If it's for an email, we do it for two weeks. If it's for web desktop, we do it for several days. The problem is that in general, we get better offers on Tuesday, because stores are trying to get people in on Tuesdays; people are already shopping on Saturdays. So it varies with the nature of the test as well as the nature of the business. It's very specific to the business, but it can be a few days, it can be a few weeks. Because it adds overhead to the test time. Yeah. Until it is active, you need to make sure that it's working fine. Yeah, right. And how about taking it out? So once you decide that this was working and you want to make this the control cell, you need to take out some of the conditions that you would have put in the code. Probably. In my experience, what we did when we were introducing a new B variant was, 90% of the time, we would not change the analytics. We were just trying to make sure that the analytics weren't broken. So we would run the A/B test through the same test automation analytics process, and if it broke, then we knew that we had to change something. But we've never spent a whole lot of time trying to write automated tests against a variant before it won. Mostly we just do regression testing against it. Okay, thanks. I mean, sometimes the variant is going to break your automated test. It just happens, but in general, it wasn't too big a deal for the analytics portion.

Yeah? Use what? Throttles? Yes, yes, we have one, it's in-house made, it's called Vegematic. It's something that actually diverts traffic in a very measured, metered fashion, and we make sure that we know exactly who's going in and who's not.

Yes. Yeah, absolutely. We use a combination of analytics. So the question was, do we use a tool to measure the success or failure of the A/B test? And we have a whole team of people who do that. They're measuring, through the whole process I explained with the Kinesis stream, there are like six different kinds of data that go across the same kind of pipeline, called Overlord. And we watch all of that, and all of that kind of stuff tells us whether or not we had lift. We can measure it to the point where last night there was an email that said, this test gave us 3,880 clicks, the control was 3,860 clicks. So, like, we know exactly where that is. It's not a third party tool; it's stuff we built in-house. No, it doesn't. It's all internal. Yeah, we have a whole team of people who are sort of switching, turning on and turning off things all the time. We run six, seven tests at once usually, and we're just sort of slicing in, and they somehow keep all of it straight. I don't quite know how.

But yeah, Google Analytics. Have you automated Google Analytics? We have implemented it, yes. We have Google Analytics. In fact, a lot of our automated tests even test to make sure that we're sending the right information to Google Analytics. Our problem as a company, and this is sort of outside the scope of this talk, but our problem as a company was that Google Analytics does sampling. They don't give you 100% of all the events every time. They give you some of them, a sample. It makes sense, but yeah. Sometimes we need to check the data layer variable in the console, whether the correct data is being passed to the variable so that GA will pick it up. So have you ever tested that object, the data layer? Have you automated testing that data layer variable by using Selenium? Some of them.
So some of those are not predictable. So for some of them you have to just basically say, I just wanna know if that one was there. Was it null, was it populated, was it gone? But sometimes, if you can predict it, we would try to predict it and say, we know exactly what this value should be, and we would break up the URL and compare it and make sure. We had a whole abstraction library built around the Google Analytics request, and then we had a whole abstraction library around our own requests. It's a whole different topic, but our whole drive with our automated testing is to try to make it to where two or three people on the core automation team have to do the hard work, but the testers writing the tests have it easy. So we actually wrote our libraries to be able to say, you know, GoogleAnalytics dot isUsernamePresent, that kind of thing. That way they didn't have to do anything at all complicated or know exactly what they were trying to ask. They just wanna know if it's there or not.

Any other questions? Yeah. Grid: this is running not only in Selenium Grid, but in our own special auto-scaling Amazon Web Services grid. The question was, are we running in Selenium Grid? The answer is yes, we are. We run 1,300 tests in 24 minutes. Awesome. So I could do a whole other talk on that. Do you see any invalid results when you run in parallel versus sequential order? I don't understand, sorry. So when you run in the grid, parallel testing, do you see any difference between when you run in sequential order and when you run in parallel? If I understand the question correctly, do we see any difference between running in parallel versus running singly? No, no difference whatsoever. They all run in separate threads. Each browser instance has its own BrowserMob port dedicated to it. So when you're running in the grid, when it reserves a BrowserMob proxy, it says start the proxy, and underneath the covers it will reserve a port, and for the rest of this test, until I shut it down, that's my proxy. Nothing else will go through it. So we've never had any trouble with cross-traffic, cross-pollination, anything like that.

Hi, how do you test external tracking? For example, if you have your RetailMeNot pixel on a Google page and someone searches for, like, a Pizza Hut offer and clicks on it, how do you do A/B testing there, where that user is not logged into RetailMeNot but actually sees the same offer when they come back again? How do you track users? How do you test that? I'd have to... I'm not sure how to... That would be an interesting conversation to have. I know that what we've tested in the past is, we've just tested to make sure analytics are flowing in, and we sort of watch for an amount of traffic, and if it drops a lot, we would say it's bad. But I haven't gotten too much into the weeds on that one. That's interesting, though. It's an interesting question. Thank you. That's the kind of thing that hasn't cost us millions of dollars in the past, so it's not been quite so pressing. So yeah.

Well, for our normal automation... So the question was, analytics sometimes causes performance issues. True story. What we do in our Selenium testing, and it's considered a best practice by the BrowserMob folks, is if we see a request that's going to Omniture, we cancel it. When we're doing normal Selenium testing, that is, because during normal Selenium testing, we're trying to test the functionality of the site.
We don't care about that stuff, and we don't want our tests to be slowed down, so we just kill those requests. So in terms of our testing, yes, it slows us down. In terms of our users, the reason we got rid of Omniture is because it slows them down. We used to have a share on Facebook, share on Twitter for every single offer on the page. We stopped doing that because it was killing our Google rankings because of page performance. So we have a very small amount of patience for that kind of synchronous request. Everything you do on RetailMeNot is asynchronous. When I first got there, the process we were going through was: you would click on an offer, it would make a request to a service to make sure it recorded it, it would wait for a proper "yes, we recorded it" to come back, and then we would load the rest of the page. No. No more. Send it to RabbitMQ, forget about it. It's no longer the page's responsibility. And so for all these analytics things, I think the primary reason we designed our own system around this stuff was so that we could send it to RabbitMQ and never have to think about it again. We have very limited patience for anything else; our business thrives on performance. We die without it. So, yeah.

In the middle of the night, they will be popping up different offers and different data on the website, right? So from the QA perspective, we don't know what will happen tomorrow. Right. So a lot of offers, a lot of different pop-ups will be coming up based on the metrics that the business team got, and they will be pushing a lot of offers, right? How do we ensure, from the QA side, that, okay, we don't know what percentage of traffic is enabled for that offer? Well, we don't try to get that much into the weeds. We just try to make sure that this analytic is trying to tell us something about what the user did. We're only trying to make sure it got recorded correctly. If something gets messed up, I think, along the lines of what you're saying, it would just sort of fail and we'd say, you know, we can't automate this right now. I'm not sure I'm answering the right question, but that's most of it. Thank you. Yeah.

What other open source tools have we used? So mainly we are a Java stack. We are also starting to introduce JavaScript, Node, Nightwatch. We do a lot of stuff with Amazon, so not open source, but AWS. We're Maven, we're Spring, we're kind of the typical old-school Java stack that Bret Pettichord hates. What else? Proxy, BrowserMob Proxy. I'm trying to think of anything else. Most of the stuff we have done around analytics itself is homegrown. The idea was always to make these tools available as open source, but then I got a different job and I didn't do it. So I would love help, anything anyone wants to do.

So I think we're at time, but I want to thank everyone for coming out. I'm glad we were able to talk about this stuff.