a new way of looking at things that we in the performance team love. So, Dario.

Okay, thank you. Thanks very much, thanks for having me here, I'm really thrilled. What I'm going to do is bring basically two viewpoints: I've been working in academia for the last 15 years or so, and last year I moved to Huawei, so I basically have an industrial viewpoint with a university mindset. Today I'm going to talk about metrics and models for the web; there's a longer subtitle that I'm not going to comment on here, we'll see it all as we go. Of course, this work would not have been possible without a number of people, listed here in alphabetical order. Two are actually in the room: one is Gilles Dubuc from the Wikimedia Foundation, the other is Flavia from Télécom ParisTech. Thanks to them, we can also have a more interesting discussion now.

Just to set out what we're focusing on: I'm not a web developer, so I'm going to have a completely different focus, and since I'm now working for an equipment vendor, it will be a much lower-layer focus. No matter what you're working on, whether you're a browser maker, a CSP (content service provider), an internet service provider or an equipment vendor, what you care about is that the users are happy, right? So quality of experience is a common goal. Of course, if something goes bad, you want to be able to detect it fast; if possible, you want to be able to forecast before things go bad; and if you're good at forecasting, you can also try to prevent things from going bad, so that your users don't churn. So detecting quality of experience degradation is important.

Now, how do you detect quality of experience? How do you define it? Well, typically you need a good idea of whether the users are happy or not, and then you try to correlate that with some of the telemetry. For instance, Boomerang is collecting a lot of telemetry, and you try to correlate that with the user quality of experience. Now, if you take the point of view of an equipment vendor or an internet service provider, you're going to have a somewhat harder time, because you're not in the browser: you don't have all the rich telemetry, and encryption is really going to be painful, because you're only going to see streams of encrypted traffic. Still, you want to do something, because otherwise your users will churn; if the users churn, the equipment vendor will not be able to sell equipment, and so there's a loss of money as well. So it's important to get a handle on the quality of experience.

User quality of experience is affected by a lot of things, including, for instance, the context: where is the user, at work or elsewhere. A pessimistic guy and an old lady probably do not have the same perception of delay. And there are, of course, system influence factors: if you're down inside a building, on the ground floor or two levels below ground, your signal is probably not very good, so you get slow performance. In order to factor in all of this, being engineers, rather than only asking the user, you're going to try to infer these things from the system perspective. The system perspective starts from the lower layer, the network, where you can measure some quality of service indications.
These will in turn affect application performance, the application QoS metrics, like the ones that Boomerang reports, or others, like the telemetry that WebPageTest reports. And these, in turn, influence the way users experience the browsing session. So what you're going to do is measure some of these metrics: from an end-to-end viewpoint, the latency, the bandwidth, the packet loss; or point-to-point, the Wi-Fi quality. Of course, it doesn't make much sense to look at the throughput of a single connection in isolation; you want to put them all together to derive meaningful metrics from a session viewpoint. Session means, for instance, if you're looking at a web application, the page load time, or the speed index that we're going to see later. There are also metrics that correlate measures across multiple sessions: for instance engagement, since users staying on a website for long typically means they're happy with the quality of experience you're serving them. And of course, you can go and directly ask the users how they feel about the service you're giving them: you ask many users, poll around the room, each gives one to five stars, you take the average, and that's your mean opinion score; you can also ask different questions. And if you know the device type, whether it's a cheap phone or a high-end phone, maybe the expectations are different, maybe the phones also render differently.

All of that, of course, is very complex, so today we're going to focus on a subset of it. In particular, we're going to look at the web, since this is the web dev room. We're going to look into performance metrics, page load time, speed index, and try to see how these correlate with the mean opinion score and other user feedback. And we're also going to adopt the viewpoint of the lower-layer carriers, which can only measure some weak signals: they don't see anything above the network layer, because of QUIC, HTTPS and all that kind of encryption. So from the network QoS they either want to learn something about the application QoS, or make a bigger step and go all the way to the quality of experience of the user.

That's basically the agenda for today. We're going to delve into four different aspects: the data collection, the modeling part, the metrics part, and then the methods that allow you to go from the bottom of this picture to the top. If you're in the network, you need to start with your methods and learn the metrics that the browser can easily measure, the metrics that are useful. To do that, you need to couple two things: measurements involving the user, asking users whether they're happy or not, and models that, based on your metrics, are hopefully able to extract this information from automatically collected data.

So today we're going to work top-down, starting with the data collection. For data collection, what you typically do is build up some crowdsourcing campaign. These have a huge cost, and there is no perfect campaign. In the last years we have been doing three different types of things. We've been asking users for a mean opinion score: rate your experience from one to five.
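For reference, the mean opinion score is nothing more than the arithmetic mean of the individual ratings: with N users giving scores s_i on the one-to-five scale,

$$\mathrm{MOS} = \frac{1}{N}\sum_{i=1}^{N} s_i, \qquad s_i \in \{1,\dots,5\}.$$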
We've also been asking users: when do you think the page finished loading, i.e., what is your user-perceived page load time? Or, showing two pages side by side, which page do you think finished first? This gives a bit of an idea of how users perceive the web. And finally, in a live collaboration with Wikipedia, we started asking users whether they are satisfied with the experience they had while browsing Wikipedia.

Of course, there's no perfect solution. For the first dataset, we were doing lab experiments. This means we had a small panel of people, typically volunteers, on the order of 150 people, recruited at universities. So you have a very specific class of population; it will definitely not capture, say, your grandma's behavior. The good side is that we were using real servers and real protocols, and we were able to control the conditions; but the number of web pages is, of course, not fully representative of the internet. Then you can step up by moving to crowdsourcing, for instance Amazon Mechanical Turk, so you can leverage a large pool of people. But there you cannot let them access a web server, so you typically show videos of the web page rendering process, which is not exactly like browsing. You reach a larger audience, but these people are also interested in getting paid for the task, so you need to filter out a lot of people who are just there to make money.

The last thing we did, with Wikipedia, is very interesting, because we are actually polling the users: there are roughly one billion page visits monthly, a tiny fraction of those is polled for performance metrics, and a tiny fraction of that is also polled for feedback, little more than a binary answer about whether they were happy or not. This is good because you poll users on the real service, a service they like and typically use. The downside is that you have huge heterogeneity: off the top of my head, we were polling about 65,000 people, looking at 42,000 different Wikipedia pages, from 3,000 networks and 250 devices, with 45 browsers. With that much heterogeneity, building a single model is not trivial. The reason I'm putting the icon there is that the datasets are available: if there are people doing research on this, well, as we were saying before, sharing tools is important, sharing performance evaluations is important, and sharing the data is even more important, because it allows you to replicate and check whether the reported results are true.

So, now that you've got the data, okay, cool, what do we do? Basically, we want to go from the data to some function f that, given the quantities we are able to measure, our unknowns x, plugged into the formula, is magically able to tell us the user-perceived performance y. Here, typically, people use a single scalar metric, for instance the page load time; the function has been predetermined by an expert, and there are typically two approaches in use.
One is the IQX hypothesis, which uses an exponential model; the other is a logarithmic model, tied to the Weber-Fechner law, a psycho-behavioral model stating that the human response to a stimulus is logarithmic. The latter is used in standards, for instance. What you do is take a lot of measurements, where every point here is an answer from a different user, and then you do a fitting; and here the fitting is one we can be happy with.

Now, there are limits, because there is typically a lot of telemetry produced by browsers, a lot of metrics, and here we are only using a single one. So you can go one step further: instead of picking a single metric you like and a single function you like, even though the fitting seems nice, you can do something machine-learning driven. Basically, you take a vector of input features and have an automated way of selecting the optimal fit of the function by minimizing some error. The trick here is that whenever you select a specific machine learning algorithm, you are implicitly selecting the class of functions it will be able to learn. And you see that, by considering more metrics, you get a slight gain with respect to the typical models we had before. Of course there are different models available; we're not going to delve into the details of that. Let me just say that, to me, there is still room for improvement in going from the features we have to the user experience, but you already get a good, quite high correlation.
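Before moving on to the metrics themselves, here is a minimal sketch of the two expert-driven fittings just mentioned; the sample points and initial parameters are illustrative, not the actual study data:

```python
# Minimal sketch: fit the two classic single-metric QoE models mentioned
# above. The data points below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

plt_s = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])  # page load time (s)
mos   = np.array([4.6, 4.2, 3.7, 3.0, 2.2, 1.5])   # mean opinion scores

def iqx(x, alpha, beta, gamma):
    """IQX hypothesis: QoE decays exponentially with the QoS impairment."""
    return alpha * np.exp(-beta * x) + gamma

def weber_fechner(x, a, b):
    """Weber-Fechner law: perceived quality is logarithmic in the stimulus."""
    return a - b * np.log(x)

p_iqx, _ = curve_fit(iqx, plt_s, mos, p0=(3.5, 0.3, 1.0))
p_wf,  _ = curve_fit(weber_fechner, plt_s, mos)
print("IQX params:", p_iqx, " Weber-Fechner params:", p_wf)
```

Picking one of these closed forms by hand is exactly the "expert-driven" step that the machine-learning approach replaces with automated function selection.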
This brings us to the metrics. What are the metrics we can work on? To be quite clear about everything, here is a very short animation of the web page loading process after you click on a link. You start downloading something, and at some point an event, the Document Object Model (DOM) event, is fired by the browser: at this point you know the structure of the page and you can start putting things in place, so you have a visual progress of the page that increases from zero upward. Then you keep downloading more things until, at some point, all the visible portion of the page, what is typically called above the fold, has been downloaded and shown on the screen; that's the ATF time, and your visual progress keeps increasing. You can represent this visual progress as a function x(t) that grows from zero to one, where one means that everything that needed to be rendered for the page to be visually complete is done. With x(t) you can also do something a little fancier: the integral of the residual of this function is the gray shaded area above the curve, and this gray shaded area above the curve is what Google defined as the speed index; we'll come back to that in a moment. And then you can keep downloading more content that is not immediately visible but will be available when you scroll; when all the content is loaded, that's typically the page load time.

So now we have two types of metrics. One kind is the time-instant metrics: time to first byte, DOM, time to first paint, above-the-fold time, page load time. These are specific points in time, each important to somebody. And then you have something else, the integral form, which looks at the whole area above the curve. Why is this intuitively important? Imagine two realizations of pages that have exactly the same page load time, so they finish at exactly the same moment, but one shows half of the content very fast, while the other shows half of the content much later. In which of the two would you be happier? In the first one. So whenever the area above the curve is smaller, it's better, it's faster. One additional comment: given that you are integrating a dimensionless quantity over time, the area above the curve also has the dimension of time. Physically, if you're an engineer, you'd think of it as having a time unit of measure, and you can see it as a virtual time expressing how fast the rendering process was.

Now, you can define a whole family of metrics like this, and depending on what you plug in as x(t) you get different ones. If you look, for instance, at the differences between the histograms of the rendered frames, the colors on the page, you get the speed index. The RUM speed index measures the area that each of the different objects drawn on the page contributes, compared with the rectangles that should have been drawn at the end. You can use SSIM: the PSI, the perceptual speed index, uses the SSIM metric, which is much more advanced. All of these are very good because they measure visual progress, but there are downsides: for instance, you can only measure them in the browser, and some of them are processing-intensive; if you need to compute SSIM, there's a lot of computation to do. So, some years ago, we proposed using, as a proxy for these more advanced metrics, very simple inputs such as the object index or the byte index: just by looking at the bytes that are arriving, you get a pretty decent idea of whether what is coming to your browser is coming fast or not. We'll see a bit later whether it works. The good side is that you can compute it at layer three, in the network, and it is correlated with the speed index; that doesn't necessarily mean it's a good proxy for quality of experience, so that's a question you need to address. And, though I'm not going into these details, you can also play with, for instance, the cutoff of the integral in order to optimize some of these metrics.
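To make the integral family concrete, here is a minimal sketch computing the "area above the curve" from a sampled progress function; the timestamps and values are illustrative, and plugging a byte- or object-completion ratio in place of the visual progress yields the byte or object index:

```python
# Minimal sketch: speed-index-style metrics as the area above a sampled
# progress curve x(t) in [0, 1]. Sample values are made up.
import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])   # seconds since navigation
x = np.array([0.0, 0.1, 0.5, 0.8, 0.95, 1.0])  # progress: visual, bytes, ...

def integral_index(t, x):
    """Left Riemann sum of (1 - x(t)); result is in the same unit as t."""
    return float(np.sum((1.0 - x[:-1]) * np.diff(t)))

print("index ~", integral_index(t, x), "s")  # smaller means faster
```

Note that the result is a time, the "virtual time" interpretation mentioned above.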
Now, if you are in the browser, or if you are a content service provider, you have a pretty good picture of everything that is happening: per domain, you see all the different objects, and also their type, whether they are images, CSS, whatever, and you can reconstruct this progress curve quite accurately. If instead you are in the dark, if you are an ISP or an equipment vendor, what you see is basically a series of packets coming from different flows, and the only thing you can read is: okay, this is a packet, this is a full-size packet, at the MTU, and this is a smaller one. So what do you make of it, what can you extrapolate from it? Again, I'm not going to go into a lot of detail; rather, I'll show you why this can work. Basically, if you are familiar with machine learning, the idea is that you need to perform some, really quite simple, signal processing to make your input homogeneous.

We are using supervised techniques, which means we need inputs of exactly the same shape. Then, with the different models we use, like extreme gradient boosting (XGBoost), which is an ensemble method based on trees, or a 1D convolutional neural network, what we do is present them with a lot of samples and tell the model: look, this sample had this above-the-fold value. We build another model by providing the same examples along with the page load time, or the speed index, or any metric you are interested in; we provide many samples to train a model, and we test it on previously unseen cases.

To give an intuition of why this should work: here we have the webpage rendering, so this is basically what the user sees. Here is what we see in the browser, where every burst is one object, with one color per domain; actually we're showing only the top three domains and using a single color for all the others, otherwise the picture would really be too colorful. And here is what you see from the network: packets aggregated into 10-millisecond bins, with one color per server IP. When I start it, if I click in the right place, you see that, okay, this is a Chinese webpage, so it starts late; at some point you see the transfers progressing; here there was a big object, and this big object has been split into a lot of packets; same thing here, the green packets correspond to this big object. And you see that these curves are slightly different, but there is some similarity, right? They are not completely different.

And indeed, you can perform this experiment systematically; this was just one example to show you how these things look for real. You run an experiment where you monitor the network, capturing the real encrypted traffic, and you monitor the browser, so you have the ground truth, the above-the-fold time or whatever metric you're interested in; you repeat this process and extract some accuracy numbers. Here is the only accuracy picture I'm going to show. It reports the absolute error in milliseconds: this is the median, this is the 25th percentile, and this is the 75th percentile; so in 75% of the cases your error is going to be lower than this, and in the median case it's going to be this one. And you can see two different approaches here. One is without machine learning; the colors in the earlier picture had a mathematical interpretation, which I didn't want to bring up today, it's not the point, but with an algorithm based on that we can already learn a single function, the byte index: we approximate the application-level byte index measured in the browser with the byte index learned from the network, and that has a six percent error. That is without machine learning, with a very simple online algorithm. On top of that, you can add machine learning to compensate for this error, so you reach a lower error, and you can generalize to any metric: we learn the page load time, the object index, the speed index or RUM speed index, the DOM if you're interested in the DOM, with these kinds of errors.
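Putting the pieces together, here is a minimal sketch of that pipeline: bin the packets of one (encrypted) page-load session into fixed 10 ms intervals so every session becomes a fixed-length vector, then train a supervised regressor against a browser-side ground truth. The names, the bin width, the synthetic data and the scikit-learn model are assumptions for illustration; the actual work used XGBoost and a 1D CNN:

```python
# Minimal sketch: fixed-size features from packet timing/size, then a
# supervised regressor to estimate a browser-side metric (e.g. speed index).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

BIN_MS, N_BINS = 10, 1000  # 10 ms bins covering the first 10 s of the load

def session_to_vector(arrival_ms, size_bytes):
    """Accumulate per-bin byte counts for one page-load session."""
    vec = np.zeros(N_BINS)
    for t, s in zip(arrival_ms, size_bytes):
        b = int(t // BIN_MS)
        if b < N_BINS:
            vec[b] += s
    return vec

rng = np.random.default_rng(0)  # synthetic training sessions for the sketch
X = np.stack([session_to_vector(rng.uniform(0, 10_000, 200),
                                rng.integers(60, 1501, 200))
              for _ in range(100)])
y = rng.uniform(500, 5000, 100)  # ground-truth metric from the browser (ms)

model = GradientBoostingRegressor().fit(X, y)
print(model.predict(X[:3]))  # estimates for sessions seen only as packets
```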
So we tested with Orange, on a number of pages we had never seen before, in a number of networks we had never seen before, and these are the accuracies estimated in those settings; so it's pretty portable. And, okay, not to make an advertisement, but given that the algorithm works, we are also porting it into Huawei products. Now, there is one catch I didn't cover in this talk, for lack of time: we are also able to handle multiple concurrent sessions. If I go back here, you see there are a lot of packets coming from a lot of different flows, and beforehand you need to be able to isolate the flows that belong to the same session. This is something you need in order to apply the machine learning technique, and it is something we have done; we just can't talk about it now for lack of time.

So, that's where we stand; where could we go to get further? I'm going to talk about three ideas. For people familiar with machine learning: unfortunately, in the web QoE domain we are still at expert-driven feature engineering. Basically, somebody defines the speed index; and why define the speed index that way? It seems a very natural and very bright idea, but we have no proof that it is really a proxy for quality of experience. A different approach, and I'm not saying a more explainable one, it's actually less interpretable, would take raw input, raw sensory data from the user, and do what? Learn the features through the learning process itself: the neural network, through backpropagation, creates the features that are most relevant for explaining why the user voted a given score. That's definitely not interpretable, but it's more versatile. The downside is that it requires a lot of samples. Here, what we did was take packets and learn any of these functions; similarly, we could take these inputs and try to learn a function for user happiness directly. Of course, getting the data is difficult, because you want to be as unintrusive as possible: if you need to attach sensors like these, maybe you're affecting the user experience. Another thing: I know people working on happiness recognition through facial expression recognition, but there, if you're happy, it may be because of the content of the message you received or the page you're visiting, and not because the experience of loading that page was good. So it's quite difficult to get the sensory part working.

The second idea: I was talking about a single model, and we did build single models because they're easy to deploy, but of course the World Wide Web is really, really large. For instance, Wikipedia is not image-intensive, while other websites are mostly made of images or video. So no single model fits all. To increase accuracy, you should go per page. Here is just an example picture: the black line is the single average model, these are all the points you're getting, and of course, if you have many per-page models, they're going to fit better. Now, the problem is that this process is inherently not scalable. How do you make it scalable? By prioritizing the things that matter most. For instance, for the top 100 web pages, you can reliably build models for the 100 pages most frequently visited by people. Then there is a second approach, in which you cluster the top 1 million web pages: for instance, here you see a number of clusters, out of which 24 representative pages were extracted, and inside each of these clusters there are thousands of pages. These clusters group pages that are similar in terms of the number of domains, the number of objects and the size of the page, so there are higher chances that, if you build models that are accurate for the pages in one class, you'll also cover the top 1 million more accurately.
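A minimal sketch of that clustering step, with made-up feature rows (number of domains, number of objects, page size); one model would then be trained per cluster rather than per page:

```python
# Minimal sketch: group pages by coarse structural features, then train
# one QoE model per cluster instead of one per page. Values are made up.
import numpy as np
from sklearn.cluster import KMeans

pages = np.array([                     # [n_domains, n_objects, size_kB]
    [3, 20, 400], [4, 25, 500],        # light, text-heavy pages
    [8, 60, 1200], [9, 70, 1400],      # middle ground
    [15, 120, 3000], [18, 140, 3500],  # image-heavy pages
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pages)
print(km.labels_)  # one QoE model per cluster label
```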
And then, for the rest, the top 1 billion pages, you use a single average model and pray it works. But at least you'll be at a better operational point in the accuracy-versus-scalability trade-off.

Then a final comment, which is a community comment. If you're working in this space, the first thing you need is data, so continuing to collect and share data is very important. I'm very happy that, working with Wikimedia, we were finally able to release a dataset in a properly anonymized form that successfully protects people's privacy, while letting others do research and build models better than the ones we built. And consider that when you go to the supermarket, you already find those machines, right? They ask: are you happy or not? You click, and you don't even think about it. When you're calling over Skype or Facebook, at the end of the call something pops up asking you to rate your call. My phone also started asking me: did you find this suggestion useful? Binary feedback again, like the one from Wikimedia.

What would you gain from keeping this kind of data collection running? Two things. One: until your model is good enough, you still get real information from the users. You already know if something happens, because the users of your service tell you directly, and you don't need to go on Twitter and try to understand whether users are complaining about your service through other channels. Second: this continuous stream of data will also let you make your model better, or retrain it when something changes, for instance the next protocol: we had HTTP/2, and soon it's going to be HTTP/3 over QUIC; maybe your model needs to be retrained, so you will need this kind of data. And if the user population is large enough, the downside is limited too: there's a risk of annoying users only if you leverage small panels.

So, this talk is based on the resources listed here; I've put the different papers, along with the icons for the different datasets and some of the implementations we released, and everything is accessible from this page. Some things are not out yet, so more will come. With all this, I think I'm done, and I'd like to thank you for listening so far. If you have any question, please go ahead; if you shout, I can also repeat the question.

Okay, so the question is: what if I'm able to break the keys? Why do all this on encrypted traffic if you can break the encryption, like the government guys? There are two answers. First, if you're able to decrypt, you're probably not interested in web performance: you're breaking the encryption to look at different information. Second, there was a study quantifying what fraction of traffic is actually intercepted that way.
You have proxies, for instance: in some institutions there is a proxy, you're delegating to it and accepting its key on a PC that is managed by your organization; there, indeed, you have a proxy, for which this approach is not necessarily needed. Now, with GDPR, this is pretty serious; and definitely, if you are Huawei, right now there is going to be twice as much concern as for a regular vendor. So being able to say that your devices are simply not interested in looking at the payload, because they don't need it, is much more important. What we are doing here is leveraging very weak signals that are intrinsically in the timing information of the packets. Much like, I mean, Debussy said that music is the silence between the notes, right? Here, in a way, we are weighing the information that we see, without listening to the notes, without looking at the content, to try to reconstruct the signal. What I was showing today was not for governments; it was more for internet service providers and, well, equipment vendors. But if you go up to the Chrome browser, for instance, the missing link is still between layer seven and layer eight, the user. There will be a talk on the standardization of timing APIs: you want to standardize something that is relevant for the user, right? That's where this fits when you standardize from a layer-seven point of view; and if it's relevant, we can learn it from layer three as well, without breaking any encryption key. I don't know if that clarifies it... yeah.

Okay, so that's a very good question. It's about seasonality, basically things that are non-stationary over time; seasonality in particular means there is periodicity. This is something we looked for extensively in the datasets. For instance, with Wikipedia we have months' worth of measurements, so we were expecting to find day/night effects and weekend/weekday effects. We didn't find any in the happiness of the users: it was amazingly stationary over the whole period. This is documented in the WWW paper, and we also extended the analysis; I can't fully explain this non-seasonality, we were expecting the opposite. The data is now available, so, yeah.

Mostly the size difference, or the difference in timing between packets? Okay, so, I went very, very fast there. Can you repeat the question? I'll repeat the question when I come back over here. Basically, the question is: what is the magic, how can you learn from the packets? Actually, we're not learning directly from the packets, because every webpage has a different number of packets, and our supervised methods, which are regression methods, need a fixed-size input. What we do is chop time into regular intervals, and what happens is that you are basically sampling a signal periodically: every delta T, you look at the packets that came, integrate over them, and you're basically sampling this curve here, right? So that's the way we get the input: by accumulating, over small fixed-length intervals of time, the packets belonging to the same session. And that's what makes the input.
So there's a basic, signal-processing-level amount of feature engineering to normalize your input so you can feed it to a neural network. And okay, five minutes left, so I was too fast, sorry guys. You have a question? Okay, cool, I can ask it directly into the microphone: do you have an estimate of how many data points were collected on Wikipedia in a typical week during the study? So: how many data points did we collect in a typical week during the study on Wikipedia? In a typical week... I know the figures changed a bit over time; I think I have some backup slides, too many backup slides. Okay, so this picture here, about the stationarity. I know that we were collecting about 62,000 data points during the first period, which was basically a first test case. If I remember correctly, web performance timings are triggered once every 10,000 page visits on Wikipedia, and out of those we were sampling one in 1,000 at the beginning; toward the end we stepped the sampling up a little bit, but this is basically over this period of time. Hidden in this is the fact that we issued the survey to 1.4 million people and only 62K replied, a response rate of roughly 4%, because people can, willingly or not, accept to click on it or not. So I don't have the per-week numbers in my head; but if you're mostly asking whether we can get a breakdown of how happy the users were: in this case, on Wikipedia, 85% of the users are consistently happy, with no seasonality and no clear correlation with other factors.

Okay, so we can go back; this is slide one, which is: you're here, and you want to know whether things break or not. Right, so if you're measuring from the browser, it's because you are in the browser, or because you're a content service provider. Now, what Huawei does as a business is basically selling boxes to operators, and what operators do is sell pipe capacity to their customers, the users; and from time to time they have problems, the service doesn't work, and the people complain to the ISP, but actually the ISP is not the problem: maybe it's the content service provider, maybe the DNS, maybe BGP. So there is a need for troubleshooting tools, to be able to say: yes, it's our problem, it's our network that is down, we're going to fix it; or: look guys, everything on our side is okay, but there are a lot of problems on that website. To be able to say that, you need to know the typical page load time of your users, and to detect whether it is changing.
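As a minimal sketch of that troubleshooting primitive, assuming a simple rolling-median baseline (the threshold rule here is an illustration, not the product algorithm):

```python
# Minimal sketch: flag a page-load-time drift against a recent baseline.
import numpy as np

def degraded(history_ms, current_ms, factor=1.5):
    """True if the current sample is well above the recent median."""
    return current_ms > factor * float(np.median(history_ms))

history = [900, 1100, 1000, 950, 1050]  # recent PLT samples (ms)
print(degraded(history, 2400))          # True: worth investigating
```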
This is why, indeed, I was previously working more on the layer-seven aspects, if you like, and there the question was: okay, we have the speed index, we have the above-the-fold time, but nobody had tried to check whether these are really relevant for the user; that's where we started involving users. And now there's this bit of, okay, I'm working for an equipment vendor, so can I do the same things from a more challenging viewpoint, starting from completely encrypted traffic? Partly, I mean, this is research, so it's fun. But then, given that I'm no longer at a university, there's also a business model behind it: if you are able to detect that there is a problem, then you can fix it, you won't have user churn, and so you're not losing money, right?

The same thing holds for the content providers: why are they optimizing? Because there are ads. Except on Wikipedia, where there are donations; but if you are Google, if you are Bing from Microsoft, or Facebook, you're showing ads, and that's how you earn money. So if your webpage is slow... there were studies by Google and by Bing showing that for every hundred milliseconds you add, you lose a share of the people who reach the server and click on the ads, so you have a loss of revenue. And if you multiply a 2% loss by 1.2 billion visitors, those are big numbers. So: the same thing, but from the encrypted pipe, for the network guys, in a few points. Okay, so thanks a lot, thank you very much.