A very good afternoon to all of you. My name is Anuj. I'm a part of Intuit AIT, and today I'll be talking about how you go about building a system that can learn from its mistakes and then try to improve on a continuous basis. This was part of the work I did when I was at Freshworks with a couple of other folks. I'll quickly talk about the problem statement, and then we'll start to look into a couple of things.

Customer support on social is, as of today, a given thing: if you're a big B2C brand, you have to have customer support on social. Now, what does that mean? It means that your users are going to reach out to you, they're going to tweet at you, they're going to write on Facebook, and then they will expect a response or support from there. Brands have also adapted to this new culture, and a lot of these brands have opened up handles which are dedicated to support. Here I've given three examples, but for almost every major brand in the world, you'll find a corresponding support handle with things like "support" or "help" in the name. As a matter of fact, platforms like Twitter and Facebook have also recognized this, and they have launched dedicated features to support this ecosystem. What I'm showing here are screenshots from Twitter and Facebook. On Twitter, you can mark a handle as a support handle, and if you do, certain specific tags appear around the handle, such as the hours during which your team is likely to be operating. Likewise, on Facebook you have a badge called the "very responsive" badge: if you are a very responsive brand on Facebook, Facebook is going to give that badge to your page. And most CRMs, as of today, have started to support customer service on social, which means they'll have dedicated workflows whereby you can connect your social handles, respond from there, and go about doing your job.

Now, since it's social, everything is public, and it's a well-known fact that there have been a lot of instances of either very good service or very bad service where people have taken screenshots and those have gone on to become viral. From a brand's perspective, they care about two things when it comes to support on social. One, reply fast. Two, reply well. For the purpose of this talk, we'll focus on "reply fast". However, it is the combination of the two that contributes to the image of a brand; they're not the only factors, but they are a factor in how a brand is perceived.

Now, how do you go about measuring the speed of response? There are two key metrics. One is the average first response time: on average, how much time do you take to send the first reply to a customer? The second is the average response time: on average, how much time do you take to send any reply? Conversations are to and fro, so there is always a first inbound message that you reply to, and that is where the first response time comes from. And brands have taken this to the next level. Here is a screenshot from an airline wearing its SLAs on its sleeve. This is the header image on KLM's Twitter handle, where they're saying that if you write to us right now, we're going to respond in 23 minutes — and this number keeps changing; it gets updated every five minutes. Likewise, Facebook has set standards for when they'll say you've earned the "very responsive" badge.
They say that your response rate has to be 90% or more, meaning you respond to 90% or more of the inbound traffic — and this is for DMs, not for public social posts — and you have to respond within 15 minutes. If you do these two things consistently, they award you that badge I spoke about.

Now, I was a part of two CRMs, one was Airwoot and one was Freshworks, and our product is used by other brands to talk to their customers. Some time back, a lot of our customers started coming back and telling us that their metrics — the average first response time and the average response time — were typically high. "We purchased your product, we set up processes, our team knows the tool. Despite our best efforts, we are not able to bring these numbers into a reasonable range." And the ask for us was: can we look into how we can help these brands, from a product perspective, to reduce these two metrics?

So what we started to do was follow our users, who are basically the customer support teams that sit on our platform and engage with their customers. We started to follow these people, go physically to their sites, sit and shadow them: what exactly do they do, how do they use the product, what are the processes they follow. We also did a lot of this over remote screens, where we kept a constant watch on what exactly was going on and what exactly they do that leads to such high times.

One of the first things we noticed is that the traffic that comes to these channels is not just questions or requests; a lot more comes in. Here are some examples — these are handpicked. Here is a person talking about some CD being released at a Walmart store. Here is a person talking about a city celebrating some particular event. Here is a person explicitly telling Apple, "I have an issue with the product, can you help me?" And here is another example like that. Now, if you look carefully at all four of these, the first question is: if you are the customer support person getting all of this, are you going to respond to all of them? Clearly you would respond to one of them, because there somebody is asking about a product which, as of today, is not available in a store, and he or she would like to know when it will be. There is a genuine intent to engage with the brand and get a query resolved. But for the other three, there's nothing a customer support person can do about them. The person just chose to mention the brand and their handle, but there is no intent to engage with the brand.

So for the purpose of this talk, I'm going to call the first kind of message actionable. An actionable message is something that a customer support agent is required to act upon and respond to. Everything else I'm going to call noise, or spam.

Another observation we made was that an agent will typically not send a lot of replies in an hour — not more than 15 or 20 responses at most. Even though the number of responses sent is low, the average response times continue to be high, which means there is something else going on in the middle. Another thing we looked at was the inbound traffic itself. In the CRM world, every conversation is converted into what is called a ticket, and the idea is to respond to the ticket and close it. What we saw is that at most half of the inbound traffic was being responded to.
In some cases, not more than 5% of the inbound traffic would be replied to. That means a particular handle is getting 100 messages and, of those, only five are being responded to. In other cases it was 40%, and there was a whole spectrum in between. What we also started to see was that if I look at noise versus actionable — if the bar is the total inbound traffic, blue is actionable and red is noise — the ratio of actionable to noise, the signal-to-noise ratio, was very, very low. A large part of the traffic was actually noise. The X-axis here is different brands, grouped by vertical.

Another thing we noticed is that between two messages that were responded to, there were a lot of messages that these agents would simply not respond to. The responses were sporadic across the inbound traffic: if I take the traffic and organize it by time, only a few, far apart, get responded to. When we started to look deeper into it, we realized that most of the agents' time was going into consuming tickets from a queue: they'd open a ticket, see that they should not be responding to it, add some metadata to it, and close it. They'd open another that would again turn out to be noise, and so on. So a lot of time was actually going into finding what should be responded to, rather than actually sending out responses.

Now, this is a classical setting for building a spam filter. You take your data and you model the problem as binary classification: I want to segregate noise from actionable. What are the typical steps? You acquire a data set, and you find some nice features that help you do the task. In this case there were a couple of them, along with the text: if you look at things like handles, hashtags, URLs, and emoticons, you'll find a remarkable difference between the number of these entities that occur in the noisy data as against the actionable data. We went ahead, built a model on top of this, deployed it, and we got a decent number to begin with.

Once the solution was in place, we started to see some interesting things. One, the performance varied dramatically across brands. Every brand is a customer for us — every brand has purchased our product, and their customer support teams sit on the product and work. Now, this is a single model working for all the brands. For some of the brands, the model worked very well, while for some it performed really, really badly. Another thing we noticed is that, as time went by, even for the brands where it was doing decently well, the performance kept degrading over a period of time — and the time here is in weeks. What I've shown is one example, where the precision of this particular model started from some number and kept dipping and dipping. The blue line is what we defined as an acceptable number; anything below that is a red flag for us.

Now let's try to understand why this was the case. What exactly was going on? One, we found that the data was changing its nature, and changing fast. Take the four key features we used — emoticons, URLs, hashtags, and handles — and look at the average number we see in traffic on a week-by-week basis.
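To make that concrete, here is a minimal sketch of how those per-message entity counts and their week-by-week averages might be computed. The regexes, column names, and the use of pandas are illustrative assumptions for this talk, not our production feature pipeline.

    # Sketch only: count the four entity types we used alongside the raw text,
    # and track their weekly averages to watch for drift.
    import re
    import pandas as pd

    HANDLE = re.compile(r"@\w+")
    HASHTAG = re.compile(r"#\w+")
    URL = re.compile(r"https?://\S+")
    EMOTICON = re.compile(r"[:;]-?[)(DPp]")  # crude ASCII-emoticon pattern, an assumption

    def entity_counts(text: str) -> dict:
        """Per-message counts of handles, hashtags, URLs, and emoticons."""
        return {
            "handles": len(HANDLE.findall(text)),
            "hashtags": len(HASHTAG.findall(text)),
            "urls": len(URL.findall(text)),
            "emoticons": len(EMOTICON.findall(text)),
        }

    def weekly_feature_averages(df: pd.DataFrame) -> pd.DataFrame:
        """df is assumed to have 'text' and 'created_at' columns; returns mean counts per week."""
        counts = pd.DataFrame([entity_counts(t) for t in df["text"]])
        counts["week"] = pd.to_datetime(df["created_at"]).dt.to_period("W").values
        return counts.groupby("week").mean()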
And you'll see there are a lot of fluctuations — compare where it starts and where it ends within a couple of weeks. When these features are changing, and changing at a fast rate, that is going to have implications for the performance of your model. What we were basically seeing were non-stationary distributions.

The second observation, which was very interesting, is that the world of customer support on social is not just binary. It's not just black versus white, noise versus actionable; there's a lot that goes on in between, and it's actually a huge spectrum rather than just the two extremes. Let me give you some examples. People come and talk to brands just to say hi, hello, good morning — and surprisingly, there is a huge population that actually does this, across the globe. People have queries like, "Do you have any offers today?" Whether that is actually a support query or not can be debated. People give feedback: "I recently saw an ad from you, which was great." Should a customer support agent really be responding to that? I don't know. Then there are engagement campaigns, where the brand says "post a photograph of yourself with X and tag it with this hashtag", and agents are expected to respond to those — a thank-you or whatever — so there is an engagement component as well. All of this, if you observe carefully, is neither noise nor actionable. It lies somewhere in between the two.

And interestingly, the definition changes from brand to brand. There are brands which would respond to these — for them, this is actionable. There are brands which would never respond to them — for them, it's noise. Which means noise and actionable are merely the two extremes of a spectrum, and the very definition of what is noise and what is actionable is not consistent across brands; it varies from brand to brand. If you think in machine learning terms, what we are saying is that there's no single boundary that separates black from white. The boundary lies in the gray, and every brand has its own boundary. There's no single boundary that's going to do the job for you. And the moment you see this, it's very clear why a single model for all is not going to work. It is doomed to fail.

So what we have reached so far is this: one, a single model for all is not going to work; second, there is a non-stationary nature to the data, and somehow you need to handle that. And Twitter is not the only world where this kind of data shows up. There are a lot of other fields where this keeps appearing — where your data itself is changing its nature, and changing at a fast rate.

So we went back and started to think about how to design a fresh system. One thing was very clear: the only way to solve this problem is to have a model for every brand. Now, one way to do this is to go to every brand, collect their data, build a classifier from it, and put that in production for each one of them. Clearly that's not scalable: whether I will even get the data, and even if I do, the quality of the data, the quantity of the data — there are a lot of issues. So we started to think slightly differently. We said: look at the model. This model makes mistakes. Now, what does that mean?
It's going to take something actionable and put it in spam, or it's going to take something which is noise, or spam, and put it in actionable. From the product's perspective, we know for sure that if a message was marked as actionable and nobody responded to it, we were wrong. And if it was put in spam and maybe after some time somebody came, saw it, and replied to it, then it was actionable. Again, we have a very clear signal. So there is a feedback which is implicitly there in the product. Can we exploit that feedback in some way and see where it takes us? Am I clear with the problem statement, and where we are heading? Okay.

Now, why might this be a nice idea? The intuition was mainly the following. We'll start with a generic definition of where the boundary is for a particular brand. Then, as and when we make mistakes and get this feedback, can we learn a better boundary from there, thereby adapting to that brand's definition of what is noise and what is actionable — without anybody explicitly defining noise and actionable for us? All you have is this signal: whether it was responded to or not. And second, because you are absorbing this feedback, can you handle the variations in the data that are happening underneath?

Now, how do you typically incorporate feedback? The typical idea is that you take a batch of the fresh data on which you made mistakes, add it to your training data, and retrain. In our case, we have close to 45,000 customers using the product. I'm talking about training 45,000 models, or retraining them on a pretty frequent basis. This is going to be computationally heavy, because I'll have to take the entire data and retrain from there each time. Second, the moment you talk about retraining, you risk throwing away whatever was learned previously. It's difficult to mathematically capture what "learning" means here, but you understand what I mean. Instead, what we said was: as and when we make a mistake, we'll learn from there, and then we'll see where we end up.

So with this, I'll come to what worked for us. What we actually ended up building was, instead of one model, two models for every brand: one called global and one called local. The global is common across all the brands; the local is a specific model for every brand. So if I have two brands, i and j, I'll have a global which is common to both i and j, but I'll have a local_i which is meant only for brand i, and a local_j which is meant only for brand j.

Now, how do we go about doing this? The global was trained on a large corpus of data across the brands. We already have the signal, so we already have the labels. We train the whole thing on this, and no short-term updates are made to this model: for a couple of months it stays the way it is, static. The local, on the other hand, is a specific model for each brand, and it has a couple of nice properties. One, it is a fast learner — computationally, the amount of time it takes to train should be very, very small. Two, we make a lot of short-term updates to it. And then we take some judgment from the global and some judgment from the local and try to combine the two.

Now, what kind of a local should we go for? One, it should have the property that as we give it more feedback, it improves.
I'll come to what exactly I mean by the word "improve" — how we went about capturing it — and what are some of the nice properties it must have. One, it should be a fast learner, so the amount of time it takes to train should be very, very small. Two, as we incorporate feedback, it should be successful most of the time. What does that mean? If I present a data point on which it made a mistake and say, "okay, learn from this," and then ask the same question again, it should get it right most of the time. Three, it should not happen that as you feed it a point and update on it, it forgets everything it has learned until now.

So this is the feedback loop that we built. A social media post comes in. The model makes a prediction. Depending on the prediction, the customer support agent consumes it: either they respond to the ticket, or they move the ticket to spam, or they pick something out of spam, bring it back into the flow, and respond from there. The last two cases are where there is feedback: you predicted y_p, but the true label was y_t, and the two did not match. All those cases become candidates for feedback. Clear? Let's go on.

Now, how do you go about incorporating the feedback — the practical side of it? One possibility is that if you're getting a lot of feedback, you collect it, make a mini-batch, and do a mini-batch update. That's a very standard practice across a lot of models built in an industrial setting. But for a lot of our customers, the velocity at which the data was coming was not very high; they don't get a very high volume of inbound traffic. They'll respond to it, and some of those responses will be wrong. So even to collect a couple of feedback points could take us a couple of days, and during all that time the product experience is not great, because from the user's point of view, if you have a feature like this, they're going to say, "I have given you feedback — why don't you learn from it?" So rather than mini-batches, we started to look at tiny batches: just a couple of examples, and see where that goes.

So what we did is we modeled each piece of feedback as a data point presented to the local model in an online setting. Most of machine learning lives in the world of what is called offline learning: you have all the training data available up front, you train once, and whatever the best model is, from there on you just keep predicting. There's another world where your data comes on the fly, in an online, streaming fashion. You don't have the luxury of storing the entire data in one place and training from there; you have to see how much you can learn as it arrives. That's called online learning. If you think about it carefully, the feedback points coming back to the local model can be modeled exactly as data arriving as an incoming stream, in an online fashion. And that is when we started to look at the online learning paradigm. In this paradigm, just to make it clear, the data is modeled as a stream. The model looks at each data point, with whatever features you have built, and makes a prediction.
Post that, you assume that the environment reveals the correct class of the data point, and so your prediction was either right or wrong. If it is right, you don't do anything. If it is wrong, you do some kind of update to the model. Most online machine learning algorithms operate in this paradigm. So that's when we started to look at a couple of online algorithms. There is already a good number of candidates available, along with their known performance on some standard data sets, and we also did some experiments of our own. One of the interesting ones we found is what is called the passive-aggressive algorithm. The idea is, again, very similar: you take a data point, you predict, you get the correct label — this is where the environment reveals it — and you are either right or wrong. Depending on that, there is a loss, and then you update. There are some variants of how exactly you update, but the typical idea is that you take the weights at time t, add an update term to them, and what you get is the weights at time t+1. Clear?

Now, we did some experiments to understand how well this performs in practice. We took a data set of 150k points and arranged it in chronological order. We wanted to test whether feedback really improves the accuracy or not. So first, we trained the model in a batch fashion on two-thirds of the data, which is 100k points, and then we tested it in one single go on the remaining 50k. We got some number — in this case, 75% accuracy. That's the offline batch model.

Then we tested the model in an online fashion. What did we do? We took all the data points from 100,001 up to 150k and went through them one point at a time, sequentially, in the order they were arranged. And every time the model made a mistake, we fed that data point back into the model and did the update I just described. That's the basic idea. What we found is that of the 50k points we tested on, the model made a mistake on 9,028 of them — so for 9,028 points it was wrong: either a zero was predicted as a one, or vice versa. Each of these mistakes was fed back into the model then and there: I look at the i-th point, I'm either right or wrong; if I'm wrong, we update the weights and then look at point i+1, and so on. And we saw that this gave us a far better accuracy — we actually saw a jump.

So let me show what happens. This is the batch number we got: if I test my model in one go, I get a single number. Now, how do we calculate what is called the running accuracy? At the i-th point I make a prediction, I'm either right or wrong, and then I may do an update. Look at points one through i: how many mistakes did you make among them? Once you have that number, you can compute precision and recall from there — on i data points you know how many you got right. That's the running number, and we compute it at every point in time. And what we saw is that this process helps: over a period of time, the running accuracy goes up, and the error, which is just the mirror image of it, goes down.
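As a rough sketch of that evaluation loop — batch-train first, then predict one point at a time and update only on mistakes — here is roughly what it could look like with scikit-learn's PassiveAggressiveClassifier (loss="squared_hinge" corresponds to the PA-II variant). The names X and y, the 100k split, and the C value are assumptions mirroring the setup described above, not the exact code we ran.

    # Sketch only: offline batch training, then "predict, update on mistakes" online.
    from sklearn.linear_model import PassiveAggressiveClassifier

    def online_eval(X, y, split=100_000):
        """Batch-train on the first `split` points, then evaluate the rest online."""
        clf = PassiveAggressiveClassifier(C=0.01, loss="squared_hinge")  # PA-II; low C = gentle updates
        clf.fit(X[:split], y[:split])                 # offline batch training on two-thirds of the data

        mistakes, running_error = 0, []
        for i in range(split, len(y)):
            y_pred = clf.predict(X[i:i + 1])[0]       # one point at a time, in chronological order
            if y_pred != y[i]:
                mistakes += 1
                clf.partial_fit(X[i:i + 1], [y[i]])   # feed the mistake back then and there
            running_error.append(mistakes / (i - split + 1))  # running error rate so far
        return clf, running_error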
Yeah — so you look at the i-th data point and make a prediction. If you are wrong, you consume that i-th data point for the update, and then you go on to point i+1, okay?

Now, what we wanted to check was: is this really a fluke, or is something else going on? How do you know? So we also did a test where we started to feed the opposite. Every time the model was right, I would feed the data point back saying it was wrong — learn from this — and every time it was wrong, we'd say, okay, you're good, let it go. And what we saw — I don't think I put that graph in, so it's not here — is that with time, the accuracy started to dip, and it kept going down. So every time it got something right, I would fool it by saying it was actually wrong, and vice versa, and the performance degraded. Okay.

So, coming back to the original setting: we had a global and we had a local, and I've talked about how we went about building and experimenting with the local. What we do is take a score from both, combine the scores from global and local at runtime, and whatever prediction comes out of that is the final prediction. Clear? And this is what we started to get from the ensemble. The baseline has vanished from this plot, but it is somewhere around 0.75. With time, as we went on, the accuracy started to go up. As always, the red curve is the mirror of that — it's just the error rate. Am I clear on what the problem statement was and how we went about it?

Okay. So for us, feedback is nothing but online training — online machine learning. In online machine learning, you get a data point, you get a label, and you do an update from there. Going back: we were using what is called Passive-Aggressive II. The original paper by Crammer has a couple of variants of this algorithm. The new weights of the model are just the weights you had at time t, plus an update term whose size depends on the loss the model incurred on that point. Yes — passive-aggressive is an algorithm, and it has multiple variants. We actually tried all of them, and what worked far better for us was what is called PA-II. It is available in scikit-learn as well. Sorry — I'm taking questions on the fly here; maybe I'll just finish and then come back to that, okay?

So what we got out of this was, one, the better accuracy that we wanted to achieve. Second, starting from a vague or generic definition of what is noise and what is actionable, we were actually able to achieve personalization: we got to a point where we understood your definition, as a brand, of what these are. The local, PA-II, is computationally very, very fast. The only thing I have missed here is how we even bootstrap the local: when a brand comes on board, how do we start its local model? What we do is take a copy of the global. The global here was a logistic regression — there were reasons for that which I won't go into, in the interest of time — and we basically copied the weights directly from the global into the local. So now the cold start problem is gone.
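To illustrate the shape of that idea, here is a minimal sketch of bootstrapping a brand's local model from the global one and combining the two scores at prediction time. It makes a few assumptions: the weight copy is approximated through the public coef_init/intercept_init arguments of fit() on a small seed batch (X_seed, y_seed), and the combination simply averages the two decision scores, since scikit-learn's PA classifier does not expose probabilities (in practice we combined probabilities).

    # Sketch only: warm-start a per-brand local PA-II model from the global
    # logistic regression, then combine the two linear scores at runtime.
    from sklearn.linear_model import LogisticRegression, PassiveAggressiveClassifier

    def bootstrap_local(global_model: LogisticRegression, X_seed, y_seed):
        """Per-brand local model, initialized from the global model's weights."""
        local = PassiveAggressiveClassifier(C=0.01, loss="squared_hinge")  # PA-II, kept gentle
        local.fit(X_seed, y_seed,
                  coef_init=global_model.coef_,
                  intercept_init=global_model.intercept_)
        return local

    def predict_combined(global_model, local, X):
        """Average the two decision scores; a positive score means 'actionable' (label 1)."""
        score = 0.5 * (global_model.decision_function(X) + local.decision_function(X))
        return (score > 0).astype(int)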
So initially both models are exactly the same, but with time the local will evolve, while the global continues to be the way it is. Now, life in machine learning is not always hunky-dory, and that was the case here as well. There were issues we started to run into. When you are presenting a couple of data points, or a single data point, to a model, the model is likely to overfit on that and form a very biased judgment. All the online algorithms give you knobs with which you can control how aggressively you want them to learn from something; even in passive-aggressive, from Crammer, there are a couple of such parameters. We kept these parameters very low. If you tune them to the higher end, the model will do very well on the data points you just presented, but it might do very badly on the data points it has seen in the past, or at least in the recent past.

Because of this — because your models can get biased, and get biased very quickly — we had to do a lot of engineering to monitor these systems continuously. What we said is: if our system goes below the baseline, which for us was the single static model, then we declare that this local model has gone corrupt, we create a fresh copy of the local, and we start from there again. The moment you do that, you have lost all the learnings of the local; there is no way to retain what it had learned so far. Which means it starts from scratch again, and on the data that comes in right after that step it's going to do badly, but hopefully it will pick up from there. So that's what I mean by resetting the local as and when it becomes biased or corrupt.
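A minimal sketch of that monitoring-and-reset logic might look like the following, assuming we track a running precision per brand and reusing the bootstrap_local() helper from the earlier sketch. The threshold and bookkeeping here are illustrative, not the actual production monitoring we built.

    # Illustrative only: reset a brand's local model when its running precision
    # falls below the static baseline model's precision. bootstrap_local() is
    # the hypothetical helper from the previous sketch.
    def maybe_reset_local(brand_id, locals_by_brand, global_model,
                          running_precision, baseline_precision, X_seed, y_seed):
        """Declare the local 'corrupt' and re-bootstrap it if it drifts below baseline."""
        if running_precision[brand_id] < baseline_precision:
            # All of the local's accumulated learning is lost here; it restarts
            # from a copy of the global, exactly as described above.
            locals_by_brand[brand_id] = bootstrap_local(global_model, X_seed, y_seed)
            running_precision[brand_id] = baseline_precision  # restart the running metric
        return locals_by_brand[brand_id]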
There were a lot of ideas that we wanted to try, but given the constraints we were in, we could not get to all of them. One thing we did observe is that, for this particular problem, having a single global that works for all is not such a great idea. Instead, if you can divide your customers by vertical — because there are idiosyncrasies common to a vertical: you won't see one e-commerce company behaving very differently from another e-commerce company, and you won't see one airline behaving very differently from another airline — if you can build the global at a vertical level, you're likely to get a better judgment from the global model. There were a couple of other online algorithms, but some of them had certain issues, so we could not try them; we had to live with what we had. Another concept we did not handle in its full glory is what is called drift. Drift is when your underlying distribution has changed completely, which means whatever the model has learned until now, it will start to do badly. So one, you need to detect it, and then you need to correct it. That has a lot of engineering challenges, so we did not go down that path. Another idea we wanted to look at was this: right now, every time the model makes a mistake, we incorporate it. Is there a way to say that this piece of feedback is more important than that one — let's not take all of them, just these? How do you even get to the point where you can say one piece of feedback matters more than another?

And that would also require a lot of thinking around the product and product interventions, because at the end of the day you need this in almost real time, if not real time; otherwise there is no point. And if you need it in real time, the judgment has to come from your users, which means you'll have to build product interventions, which your users might not necessarily like.

So these are a couple of references to the work that we used. Most of them are around the online algorithms that we played with and experimented with. Thank you so much for your time. I'll take questions.

Sure. Now, there are various ways to do that. You can take the judgment from both and build an ensemble on top, whereby you treat each judgment as a feature; or you can take the probability predicted by both, add the two and normalize, and build a judgment from there; or you can even go a step back and take the logits coming out of them rather than the probabilities, combine the two logits, and work from there. We actually did a lot of experiments around weighting these judgments and combining them in an ensemble. None of them really worked much better, and to be honest, we wanted to take the simplest approach to begin with. So we said: take the probability from one, take it from the other, add the two, and build the judgment on that combined probability.

So, recommendation systems are built differently. One part is personalization, and typically, if you want personalization, you either predict afresh for each user, or you take whatever comes out as the output and do a filtering on top of it, where the filtering varies from user to user by taking their preferences into account. What we've done here is closer to the second.

"Good talk, awesome talk, thanks — one of the better ones today. One suggestion I had: when you predict, let's say, that something is spam, some percentage of that you can still send to the customer support person, so you get the negative feedback as well."

Exactly. So what we actually did while building this whole system is keep the recall high, which means most messages should end up in the flow. Even if you make that mistake, an agent will come, close it, and move on. But if you make the other mistake — something that should have been responded to was sent to spam and is lying there — for that we built a flow whereby whatever went into spam in the last two hours is reviewed and moved back. But then there is a delay, and there is a cost. Because it is social, you can also have a scenario where the message came from a VIP, somebody who is very important on social, and it is lying there for two hours — what do you do? So we built some other product interventions to filter out such things, but I won't go into those details.

No, we are not retraining from scratch. That goes back to the philosophy of online updates: whatever has to be learned has to be learned from this point. In our case, we went to the extent of taking just a couple of points, five or ten, though that also varied depending on the brand's traffic, because that mattered. So you're not retraining from scratch, you just do an update, and all online algorithms are typically very fast at that step. The computational cost on the fly is very, very low.
"Yeah, so you mentioned that whenever bias is introduced—" Sorry, can you raise your hand? "When you see that bias has been introduced, you kind of reset the model, and because of that, you might get bad results on the next incoming inputs. So did you consider undoing only that biased update?"

Yeah. So we also experimented with taking snapshots of the model and doing a rollback. A rollback is nothing but rolling the model back to the weights it had at a particular point in time. But then the question is: when do you take snapshots? You can't keep taking one every second, so you have to set some kind of criterion — okay, the model is really good at this point in time, let's take a snapshot and save it for later. We did not really get there; a lot of this is also driven by the product and your users — how do you get the signal that it's doing really well right now, so take a snapshot? You got it, thanks.

All right guys, we're out of time. If you have any questions, you can reach out to Anuj offline.