All right. Hey, everybody. I'm going to use the remainder of my voice to tell you about Houdini. So if I go full adolescent, cranky voice on you, I apologize in advance. I've done a lot of talking in the last few days. But I'm very excited that so many people came here to listen to me talk about Houdini. So this is about Houdini. And the subtitle that you can read here is the title of an article that Smashing Magazine published, which says it may be the most exciting development in CSS. This article was actually written by a colleague of mine in DevRel. And the subtitle goes even longer, saying it's maybe the most exciting development in CSS that you've never heard about, because so far the entire Houdini effort has been kind of flying under the radar. So Philip Walton, who wrote this article, joined forces with Shane Stephens and Ian Kilpatrick, who are Chrome engineers leading the effort in Chrome to work on Houdini. So this article is not only really good, it is also really in depth. So if you want to have more reading after this talk, I encourage you to go there and read that. It's just a good article. So before I start getting into the whole gory, techy details of how Houdini works, I just want to make sure that all of you know what Houdini is and where it is trying to go. Houdini is the effort of exposing the internals of the CSS engine to you as a developer. And that sounds a little bit ominous, so I'm going to explain that a little more. And afterwards, I'm going to talk about the individual parts of Houdini, where they are at right now, where we're going, and all these kinds of things. So actually, I wrote a blog post too, even before Smashing Magazine. So I wrote about Houdini before it was cool. And I did that because when I started on the Houdini project, which was back in January of 2016, there really was no publicly available and easily consumable information about Houdini. 
There was a website, which was a glorified wiki, really. And it just linked to the specs. And the specs were empty, or they were, you know, just specs: very hard to read and consume and to understand what the motivation was. And so I had the privilege to just go to my Google colleagues, the actual engineers working on this, and have them explain to me why this exists. And so I wrote the blog post, and I did the whole thing. I put it up on Twitter and the internet. And I had a lot of feedback, and it was really interesting, because loads of people were just mind blown by what Houdini was trying to do. And one person was Dan Tocchini, who is the inventor of GSS, the Grid Style Sheets, a constraint-based layout system for the web, and is now the CEO of The Grid. So he said, Houdini sounds like the best thing since sliced bread. And of course he would say that, because both of his projects are really deeply invested in CSS, and the more powerful CSS gets, the better his projects become. But he was not the only one. So effectively, he was mind blown, and I was asking myself, is it really that mind blowing? Why does it feel that way? Because to me, it certainly did. I was blown away when I first heard about it. And what it comes down to, I think, is that you are getting access to browser internals. You now have things at your disposal once Houdini lands that weren't at your disposal before. It feels very raw and fragile and actually kind of secret, because usually it's reserved for browser engineers and not web developers. So you now have the tools of the gods at your disposal. And that makes things really, really interesting. And the thing is, this has actually happened before. There were two technologies that were pretty much the same level of mind blowing. And Polymer made heavy use of them, and they made Polymer so good, but people were crediting Polymer instead of the web platform. 
And the things I'm actually talking about are shadow DOM and custom elements. These are web platform standards that are supposed to end up being implemented by every browser. And Polymer sits on top of them, and they give you access to browser internals. For example, shadow DOM. Look at the markup right here, the video element. If you think about it, where do these elements come from? Nowhere in the markup do I ever specify the play button or the progress bar. They just appear. And the subsequent question is, how do I style these magically appearing elements if I never put them in the markup? And since shadow DOM, we actually have an answer to this question, because now, in the inspector, we can open up the video element and see that the play button is just an input of type button. And we can maybe even style it. And it has always been this way. Browsers have always used their own elements internally when they wanted these kinds of hidden control elements, but you were never able to do that yourself, because shadow DOM wasn't a thing. And now you can. But again, people thought this was just Polymer emulating something that looks like a browser internal, but that's not the case. It's a platform feature. This is actually how browsers work. So again, both shadow DOM and custom elements are specs that expose something the browser already had internally, and now it is at your disposal. And it enables you to recreate behavior that was already there before, like these magically appearing elements, which you wouldn't have been able to do before. And this kind of mind-blowingness has a name. It's the Extensible Web Manifesto. And the key quote for me from that manifesto is: the web platform should expose low-level capabilities that explain existing features and allow authors, so you, to understand and replicate them. 
And again, think about how fitting this is for custom elements and shadow DOM, exposing things that were there before but weren't at your disposal. And of course, it goes further. It not only allows you to understand things or to replicate things; you can combine all these low-level capabilities into something potentially new and mind-blowing. And when you set this as your goal, that you want to expose all these low-level capabilities that looked like black boxes before, and you look at the current state of the web, you find a lot of things where you say, this could be exposed, and this could be exposed. And one of these glaring shortcomings is CSS. And that's why Houdini exists. Custom elements and shadow DOM tackle the DOM, turning the black box DOM into something more open. Houdini is trying to do the same thing with CSS. And people keep asking, why is it called Houdini? And nobody knows. I actually asked the task force last week. And they had two stories, and none of them knew which one is true. So I'm going to tell you both. The first story is that CSS is magic. It's just another word for a black box. It's really magic. Where does it say what display: flex even does? You just change from display: block to display: flex, and suddenly everything rearranges. How does that happen? Where is it said how border-style: dashed is supposed to be drawn, exactly? Houdini is trying to demystify that. The other story is that CSS is a straitjacket, and we want to escape that straitjacket. And whatever the origin of the name is, the goal is the same. We want to expose all this to you, so you can actually use these low-level features to build more and greater experiences on the web. So we need to expose all these low-level APIs of the CSS engine. And let me read this off, because this is not a Chrome brainchild. 
This is actually, and I want to read this because I don't want to forget anyone: Microsoft with Edge, Mozilla with Firefox, Apple with Safari, Opera, HP, IBM, Intel, Adobe, and LG are all participating together in this task force to make this thing happen. So it's not like one browser supports it and nobody else does; this is a web platform feature of the future. Everybody is working on this together to be on the same page and work towards the same goal. So let's talk tech. First off, Houdini is not a single API. It is a collection of APIs. And I made this graphic, and this is not even complete. I skipped over a few of the smaller specs, or the more basic specs that the other specs build on top of, because they were kind of boring to explain by themselves. But these are the ones I'm going to focus on. And the grouping of these specs is my personal emphasis. I grouped them kind of by synergy, because the specs that are close together work very well together, but there's no limitation on how you should use them. What you should keep in mind, though, is that a lot of these specs are still empty or just mere placeholders for an idea. They need to be fleshed out and specced out and implemented and triaged. So for this talk, I'm going to focus mostly on the least immature ones, which are arguably these four. Some of the other ones are not even drafted, or are just an idea. I'm going to explain all of these on the slides at least roughly, but the four highlighted ones are the ones I want to take a closer look at. So let's take a look at the very first cluster. These two APIs have the goal of introducing typing into the JavaScript APIs for CSS. Currently, typing is in a really weird spot for CSS, which I think I can explain best with an example, because currently, when you want to manipulate styles from JavaScript, things start to hurt a little bit. So you have probably all written something like this before. You just have your variables, and you want to turn them into styles. 
So you do a lot of string concatenation, and you append units. And the only thing you want to do is just move something by changing its transform. And this is not only unreadable, it's actually kind of ridiculous when you think about it, because we have numbers, and we turn them into strings to pass them to the CSS engine, so that the CSS engine, in turn, can deconstruct that string again, turn it back into numbers, and do its thing under the hood. So we are being really wasteful here with computing resources. ES2015 made things a little bit better in terms of readability, because we have template strings now, but it's still strings. So the wastefulness is still there, and it's just not a good idea. Things get even more ridiculous when you try to read back styles from the CSS engine and use them. You have to strip off the pixel suffix yourself, and if you want to work with timing functions or CSS matrices, good luck. And this is where the Typed Object Model comes into play. It exposes a typed API for CSS. I say typed; it's still JavaScript, but at least now we have units and values separated. And what I mean by that is something like this. The new API is mostly contained in a style map on DOM nodes. And with that object, you can set styles and you can query styles. When you set styles, you give it the property you want to set, and you use one of the new types that they have introduced for number values, keyword values, or length values, which can be pixels, or ems, or vhs, all these things that you have in CSS. Obviously, you can also query something. So when you set something, you get something back. And the mind-blowing part here is that you now have a dictionary which has a value and a type, so you don't have to do the string parsing yourself. Great accomplishment. This thing also goes further, because it also handles calc values. 
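To make the round trip concrete, here is a runnable sketch of the string-based status quo, with the Typed OM equivalent shown in comments. The element here is a plain object standing in for a DOM node, and the Typed OM names come from the early draft, so they may well change.

```javascript
// The status quo: numbers become strings, and reading them back
// means parsing strings ourselves.
const el = { style: {} };  // stand-in for a DOM element
const x = 10, y = 4;

// Write: number -> string, so the engine can re-parse it into numbers.
el.style.transform = 'translate(' + x + 'px, ' + y + 'px)';

// Read back: string surgery to recover the number we started with.
const xBack = parseFloat(el.style.transform.split('(')[1]);

// With the Typed OM draft, values and units stay separate
// (browser-only, so shown as comments; names from the early draft):
//   el.styleMap.set('opacity', new NumberValue(0.5));
//   el.styleMap.set('height', new LengthValue({ px: 10 }));
//   el.styleMap.get('height');  // something like { value: 10, type: 'px' }
```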
So if I set a calc value, I will get back a dictionary that has an entry for each of the units that are in that calculation. So I don't have to do arithmetic parsing myself, which is really good. And then they realized, hey, if we return a dictionary, why don't we also allow setting the value with a dictionary? So it's actually much easier to assemble styles programmatically. And with that, arithmetic becomes a lot more comfortable to do when you write JavaScript. And then they thought, well, if it is easy to do arithmetic, why don't we support arithmetic as a first-class citizen? So they added basic arithmetic methods as well. And the engine will turn simple lengths into calc lengths automatically if the units don't match up. So this is really nice if you have something like Paul Lewis's FLIP, where you have to take calculations and mix and match certain types together. It makes it much easier to write CSS from JavaScript. The spec actually has a table that maps from property name to the types that property can take. But as you can see, some of the types are not actually linked. That means they don't exist in the spec just yet. They just haven't been specced out. And for these, the API will currently fall back to the string representation. So right now, you still can't really handle transition-timing-function values. But that will come around in one of the future iterations of the Houdini Typed OM spec. And I think one of the most interesting parts of this is that we had a hack, or a premature implementation, of the Typed OM in a custom build of Chrome, and did some performance testing on very animation-heavy websites. And it had an impact of up to 90% faster code, which I think is amazing, or at least shows you how much time is being lost in converting back and forth between strings and numbers when you do animations. There is a polyfill that is being worked on. 
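The "one entry per unit" idea can be modeled in a few lines of plain JavaScript. To be clear, this is a toy model for illustration, not the real Typed OM API; it just shows how mixed-unit arithmetic falls out naturally once values are dictionaries instead of strings.

```javascript
// Toy model of the draft's calc() representation: a length is a
// dictionary with one entry per unit, so arithmetic never needs
// any string parsing.
function addLengths(a, b) {
  const result = { ...a };
  for (const unit of Object.keys(b)) {
    result[unit] = (result[unit] || 0) + b[unit];
  }
  // Mixed units simply become a calc-like dictionary:
  return result;
}

const width = addLengths({ px: 10 }, { percent: 50 });
// width is { px: 10, percent: 50 }, conceptually calc(10px + 50%)
```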
It is not complete yet, and it's not very easy to use at this point, apart from the fact that you should never use this polyfill in production, because you add another layer of converting strings to numbers, because the native API doesn't exist yet, so we do it in the library for you. It's just a huge performance hole. But I still encourage you to use it and play around with it, because it is very nice to just play with, to see how this API feels. And we need your feedback for that. Because as of now, we still have the chance to change all of these APIs. They're just in the drafting phase, so nothing is really stable. So you have the chance to get involved. And I'm telling you, once you actually start using that polyfill and just play around with manipulating CSS from JavaScript, you will say, that is so refreshing. That is so good. And if you want to get involved, do it. You have the chance. You will never want to go back, but for now, sadly, you have to. So that's the Typed OM. The next thing I want to look at is Properties and Values, which ties in very, very closely to the Typed OM spec. And if you think about properties and values and think custom properties, you'd be right, because it's very much related to that. Custom properties have been a thing for a while now. However, they work on the basis of strings. And why wouldn't they? Because before Houdini, CSS was purely string based, at least the part that is exposed to you as a developer. And therefore, the usage of custom properties worked with simple string replacement, which doesn't seem too bad. But let me give you an example. Let's say you wanted to implement a side menu, a side nav that is almost hidden away. And when you click it, it expands. Very common thing. 
And you realize that the extent of expansion of that side nav is used in multiple places: not only on the transform of the side nav element itself, but maybe on the transform of the body, or maybe as a padding somewhere. You're using that value in multiple places. So it seems like a good idea to put it in a custom property, because you're using that value all over the place. And then you don't want the side nav to just snap open. You want to animate it. So the idea would be to just animate the custom property instead of all the properties that use this custom property. So you write that. And as it turns out, custom properties are strings. And strings are not really animatable, because you can't transition between two values of a string. What is 30% between string A and string B? The engine doesn't know that. So what will happen instead is that it will just flip over from the start value to the end value at 50% of the time. And that's not smooth. That's just flipping over and doesn't look like a real animation. So that really doesn't work. And actually, things get even weirder with the current state of custom properties, because when you do a weird misassignment by accident, for example assigning a color to a width, it will just silently fail. And you will have no idea why it is not working. So debugging becomes really hard. Enter Properties and Values. The new API is actually really small. It's just one call, really, and it allows you to define more details about your custom properties. So what you can now do is say what kind of syntax the custom property has, which basically means what type that custom property has. You have control over whether it inherits down the CSS tree or not. And what is really big in terms of custom elements is that you can give it an initial value. 
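As a sketch, the whole API boils down to a single call. The descriptor shape follows the Properties and Values draft; the property name '--nav-offset' and its values are invented for the side-nav example, and since the call only exists in a Houdini-enabled browser, it is guarded here.

```javascript
// Descriptor per the Properties and Values draft; '--nav-offset' and
// its values are made up for the side-nav example.
const navOffset = {
  name: '--nav-offset',
  syntax: '<length>',   // typed, so the engine can interpolate it smoothly
  inherits: false,      // don't cascade down the CSS tree
  initialValue: '0px',  // used until someone overrides it
};

// The registration call only exists in a Houdini-enabled browser:
if (typeof CSS !== 'undefined' && typeof CSS.registerProperty === 'function') {
  CSS.registerProperty(navOffset);
}
```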
Custom elements use custom properties a lot to define their looks, on the things they have in their shadow DOM or on something they want to give special access to. And previously, you had to set a value for every custom property they define. Now you can have an initial value, which will be used if the value hasn't been overwritten. So that is really handy. So now we are able to write JavaScript that interacts really, really well with styles. And now we come to the spec that I personally think is the most exciting, because we now have our basis of being able to write JavaScript that works really well with CSS. And composited scrolling and animation makes heavy use of that. Just to be clear, the spec is called Composited Scrolling and Animation, while its main feature is called the compositor worklet. And people tend to use these two terms interchangeably. But the question really is, what problem does it solve? Let's say you wanted to mimic an effect from a popular social networking app. You can see we are scrolling down, and the avatar changes position according to scroll position, and it scales, and also the header bar fades in. And this would actually be kind of possible on the web right now. You would hook up to the scroll event. You would start a requestAnimationFrame. And in that requestAnimationFrame, you would pull in the scrollTop, do your math, and change the transform. And if you've ever listened to Paul Lewis, you would know you would have to set will-change to have it on its own layer, because otherwise you would have repaints. So it is really hard to get this right without ripping a huge gaping performance hole into your current app. So before I start talking about the compositor worklet: what are layers? I'm assuming maybe not everyone knows this. Layers are more of a logical concept that manifests as a separate buffer on the graphics card. 
Whenever a DOM element has its own layer, it means that on the graphics card, it will have its own buffer onto which it can paint itself. And that means that changing the transform or opacity, so translation, scaling, or rotation, will be a pure GPU operation instead of forcing repaints all over the place. So the only thing that needs to change when you change a transform is the compositor re-merging the images into the thing that you actually see on your screen. And this is really important, because GPUs are really, really good at that. And we want to be as close as possible to 60 frames per second to make the app actually feel good. So as I said, this is kind of possible with the current approach when you really know what you're doing. And even then, we are still at risk of having an animation that lags behind. So this is a normal div box on a website where I'm scrolling just up and down. Looks normal, as we all know. But if I add another box that is statically positioned and try to use JavaScript to do the same thing, basically re-implementing scrolling myself, it will most likely look like this. To be fair, this video is slowed down to make the effect more obvious. But even in real time, it is very, very noticeable, and it makes scrolling effects usually seem very laggy on the web. And this, exactly, is the problem that the compositor worklet is supposed to address. It allows you to write a piece of JavaScript that runs in sync with the rendering pipeline instead of being async, because the problem we had previously is that we were consuming async data and getting data that was out of date more often than not. So now we have a piece of code that runs in sync with the frame rate. And you get access to all the layers. And you can manipulate the transforms of these layers and do the kind of magic in there to implement what we just saw. And this is easy now, because we have the Typed OM. 
So writing JavaScript to change transforms should be a piece of cake, eventually. But what exactly is a worklet? I kind of jumped over that. So worklets are actually one of the additional specs that I left out of the original image. A worklet is kind of like a JavaScript worker, but much more lightweight. They have been stripped of certain access and have certain restrictions that I'm not going to get into too much. But they still run in a different thread. And that's the important bit, because whatever you run in a worklet will not interfere with your main thread. So they do run in a different thread, but that also means they don't have access to the DOM. So we have to find another way of doing that. Inside the compositor worklet, we can now define multiple animator objects, one per animation, to separate the code for the different animations that you might have on your website. And as I said, we have no access to the DOM, so we need to find another way of doing that. And that's why the compositor worklet spec defines proxies. These proxies give the animator access to certain properties of certain DOM elements, so that you have a very restricted but very precise access to the DOM in the compositor worklet, without the risk of you modifying everything there. It is very well defined what you're going to change. But let's look at some actual code. So let's say we wanted to implement the thing that I showed you a video of, the Twitter-like header bar. The first thing we're going to do is import a JavaScript file into our compositor worklet with this statement. The script will be evaluated inside the compositor worklet, and it's just a normal JavaScript file with its own scope. But as I said, no access to the DOM. And inside this worklet, we can define a class and register that class as an animator class with a name. We need that name to get a reference to it in the main thread later on. 
And inside the class, we mostly have two important functions. The first is onmessage, which allows us to have postMessage-style communication with the main thread. And the second is the tick function, which runs every frame, in sync. So it is blocking. To be precise, it does not necessarily run every single frame, but it's a good assumption to work with, because it very well can happen. And it means that the code in the tick function needs to be efficient, like really efficient. So the tick function would basically contain what we had in our requestAnimationFrame before. We would get the scrollTop. We would do the math around the transform and set it on the avatar. Back to the main thread. After we have called import, we get back a promise that resolves once the compositor worklet has evaluated the entire JavaScript file. And now we can create an object that is a reference to the animator class that we just registered on the other thread. On that animator object, we can call postMessage to create compositor proxies, which are proxies for the elements that I'm passing there. So for the window, because we want to have access to the scrollTop, and for the avatar and the bar, because those are the elements whose transform or opacity we want to change. This is kind of important, because this allows the engine to optimize when our code actually needs to run. If scrollTop didn't change, we don't need to do anything. But if we are scrolling, we are actually changing scrollTop every single frame. So our code will run every single frame. So the performance assumptions about this code still need to hold up. And that's actually kind of it. I mean, I skipped over the math in the tick function. But this is now something that would run in sync with the compositor engine and should look really nice. 
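Putting the walkthrough together, the worklet side might look like the sketch below. The names (registerAnimator, the proxy shapes, compositorWorklet.import) come from the early Compositor Worklet proposal this talk describes, the math is made up, and none of this is a shipped standard, so treat it as a feel for the API rather than production code.

```javascript
// animator.js — evaluated inside the compositor worklet. There is no DOM
// here; we only see the proxies handed to us via onmessage.
class TwitterHeader {
  onmessage(event) {
    // Proxies for the window (scroll position), the avatar, and the bar.
    [this.scroller, this.avatar, this.bar] = event.data;
  }

  tick() {
    // Runs in sync with the compositor, potentially every frame,
    // so keep it cheap: read scrollTop, write transform and opacity.
    const top = this.scroller.scrollTop;
    this.avatar.transform = 'translateY(' + Math.min(top, 48) + 'px)';
    this.bar.opacity = Math.min(1, top / 100);
  }
}

// registerAnimator only exists inside a compositor worklet:
if (typeof registerAnimator === 'function') {
  registerAnimator('twitter-header', TwitterHeader);
}

// Main-thread side, per the proposal (browser-only, hence comments):
//   await compositorWorklet.import('animator.js');
//   const animator = new CompositorAnimator('twitter-header');
//   animator.postMessage([window, avatarElement, barElement]);  // -> proxies
```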
So I recorded this video with a custom build of Chromium, where we have kind of a hacked implementation of the compositor worklet. And you will just have to believe me when I tell you that the difference is really notable in terms of how reactive the scrolling is. Chrome engineer Robert Flack actually wrote a polyfill for this as well. So again, this is without any performance benefits, because these kinds of things are by definition not polyfillable. But again, it gives you a feel for the actual API itself. So the polyfill uses the approach I mentioned earlier: requestAnimationFrame, hooking up to the scroll event, and these kinds of things. And in the repo, there's actually also the code for the demo I just showed and two more demos. And each of the demos has a link to a video where I recorded these demos in a Houdini-enabled browser, so you can actually see the difference between how your browser behaves and how a Houdini browser would behave. The future will tell how this works out. Lastly, I want to talk about the Paint API, also called the paint worklet. The API allows you to define your own paint routine for elements, basically enabling you to define your own background patterns or border styles or even your own shadows. They're still working out where to draw the line. But effectively, it is somewhat like a canvas. You might be asking, why not a canvas? Well, mostly because a canvas is really, really heavyweight. It's a very memory-heavy and processor-heavy element. And also, it has a separate resolution that doesn't scale with the element itself. So you get these weird stretching effects if a canvas gets rescaled. And you don't have good control over when a canvas needs to repaint itself. But how do you use the paint worklet? Basically, it works as a background image, where instead of an actual image, you now reference your own paint implementation with a function. 
The spec is pretty fresh, so I'm not going to do too many wild things, but I'm going to take this example, which is straight from the spec. And the goal of this example is to show a temporary image as long as the actual image that we're linking to has not been loaded yet. So we're going to register a custom property that takes the actual image. And we're going to use our paint function to draw the temporary image as long as the actual image has not been loaded. So that's exactly what we're going to do. We're going to register a custom property with the syntax image. That means the browser can tell us when we're accidentally assigning, say, a length to this value. And we actually need this to have deeper access on the JavaScript side. And then we're going to work, again, with a paint worklet and just import a JavaScript file into that worklet. Inside that worklet, we will now register a class. And the first important thing is, again, that we tell the engine which properties we actually depend on. Again, this allows the browser to optimize when we get to repaint ourselves. We are saying we only depend on the image, because when the image changes, we need to repaint with the new image. But we don't care, for example, about margin-left. If margin-left changes, we don't need to repaint. Everything stays the same. And then we're going to look at the actual paint function. The paint function gets a context, which is very much a canvas context, just stripped of any kind of read-back, because for security reasons, you should not be able to just inspect what the element looks like just because you're a paint worklet. We get the geometry of the element. So this is one of the advantages over a standard canvas: we know the size of our element after layout, because painting happens after layout. And we get a dictionary with all the properties that we said we need access to. 
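Here is a hedged sketch of that painter. The callback names follow the Paint API draft (registerPaint, inputProperties, paint), the image state attribute is from the early draft and may differ in what eventually ships, and the mountains and sad face from the spec example are simplified here to plain color fills.

```javascript
// painter.js — loaded into the paint worklet, e.g. via something like
//   CSS.paintWorklet.addModule('painter.js')
// and referenced from CSS as something like
//   background-image: paint(image-with-placeholder);
class ImageWithPlaceholder {
  // Tell the engine which properties we depend on, so it only
  // re-invokes paint() when one of them actually changes:
  static get inputProperties() { return ['--image']; }

  paint(ctx, geom, properties) {
    // ctx: a canvas-like context without read-back; geom: the element's
    // post-layout size; properties: the values we asked for above.
    const image = properties.get('--image');
    switch (image.state) {
      case 'pending':   // still loading: draw a placeholder fill
        ctx.fillStyle = 'gray';
        ctx.fillRect(0, 0, geom.width, geom.height);
        break;
      case 'invalid':   // loading failed: draw an error fill
        ctx.fillStyle = 'red';
        ctx.fillRect(0, 0, geom.width, geom.height);
        break;
      default:          // loaded: draw the image itself
        ctx.drawImage(image, 0, 0, geom.width, geom.height);
    }
  }
}

// registerPaint only exists inside a paint worklet:
if (typeof registerPaint === 'function') {
  registerPaint('image-with-placeholder', ImageWithPlaceholder);
}
```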
So what we're doing then is requesting access to our image custom property, and switching over the state of that image. Because we have the Typed OM, we know it is an image, and that image has a state attribute that tells us if the image has been loaded or not. So if it is pending, meaning it's still being loaded, we draw mountains in this example. If it is invalid, meaning the loading has failed for whatever reason, we draw a sad face. And if the image has been successfully loaded, we draw the image itself. So that's it for the least immature specs of Houdini right now. And now I just want to quickly cover the ones I left out so far from the image. And as I said, most of these are kind of empty or just symbolize a future idea that needs to be fleshed out. But they're still really exciting. And one of the things that is really exciting is the layout worklet. The spec is pretty empty, is what I would have said last week. But we had a meeting of the Houdini task force last week. And Ian Kilpatrick actually had a complete proposal for the API. And it's pretty hard to grok. But still, it would allow you to write your own layout for CSS. So something that would be possible in the very near future, for example, would be an actual implementation of masonry layouts, something the web doesn't have right now. And it would be implemented with the very same tools a native browser engineer would have at their disposal, because it is the very same API. So once this lands, it would allow great polyfillability for the future of the web. Because any layout ideas that come up can just be implemented in the very same way a browser engineer would, and can be tested. And if they actually prove to be useful, browser engineers can port the JavaScript implementation to C++ or whatever the browser uses, and it becomes an actual native implementation. 
Another great use case would be Grid Style Sheets, which is, as I mentioned earlier, a constraint-based layout system, which is really interesting, but right now runs in the main thread and makes your site behave really weird at times, especially when layout is being triggered. So I guess they would benefit a lot from having a native or almost-native implementation using the same APIs. The Font Metrics API is exactly what it sounds like. It will allow you to finally know the answer to the question: how big is text X in font Y with font size Z going to be? Because right now, this is surprisingly hard. Keep in mind that this calculation actually gets really, really complex, because most of the time, text wraps. And you might need access not only to the entire bounding box, but to the individual lines of the wrapped text. And text often changes the font without you noticing. Think about emoji. Emoji use a different font than the font you're actually using. So you have different metrics. And it's just very complicated. It can't be polyfilled. We need browser-level support for this. So one example would be auto-sizing text, text that adapts to the width of the window. How do you do that right now? This is an implementation by Paul Lewis, where he uses a canvas and an iterative loop. And he uses black magic. And it's just bizarrely difficult to achieve this on the client side. And this is something that should become easy eventually with this spec. The parser spec is arguably the most ominous of them all, because there's nothing in there. I just talked to one of the engineers who had the idea. And he was like, yeah, this somehow works with a string. And then you can modify it. So I tried to ask a little more. And he gave me a rough explanation. So you give it some string. You get back an abstract-syntax-tree-like object you can easily modify and turn into actual CSS directives, feed that to the CSS engine, and turn it into styles. 
You would have hooks for at-rules and for properties and functions. But that's all I know, really. The future will tell. They're still working on it. So this is something to keep an eye out for. But now for the interesting part. Due to, let's call it unfortunate timing, you can't use any of this yet. However, a subset of these APIs, these three APIs, is actually in the trunk of Chromium. And that means that pretty soonish we're going to have them behind a flag in Canary. And once this lands, I will tweet it out every day and make people use it, because we need your feedback. We want you to use these and see if they solve problems for you and if you are able to understand and use them. So you should be able to play around with them soon. I wrote a blog post on Web Updates, which basically reiterates this talk. And my goal is to keep this blog post up to date with the most recent progress and developments in the current state of the Houdini APIs. So go there and keep an eye out for any updates on when it actually lands. And once it has landed, I would encourage you to play around with it and give us feedback. So go there and leave something in the comment section, or use Twitter. If you have a problem with an API, or if you have a suggestion that's worth considering, or if something turns out to be good, we still have the chance to actually incorporate that into the specs, which are all still evolving and are all drafts. So Ian Kilpatrick, who I mentioned earlier, is the Chrome engineer leading the entire Houdini project from Google's side. And he said that he's really eager to hear all of your opinions and wants to hear your feedback. So go annoy him on Twitter, or go annoy me on Twitter. We really want to hear what you're doing with this entire thing. That way, we still have a chance to change how things shape up in the future. 
So I hope that this kind of makes you understand what the Extensible Web Manifesto is all about when they say they want to expose low-level features to you. This really gives you power over the future of the web, because we are exposing the building blocks, so you can build everything on the web. It also makes it much easier in general to keep up the pace at which the web has been developing in the past, because you all know that there are all these plans for great new features on the web, and then it takes two years for everyone to adopt them. If we expose these low-level APIs, we can write polyfills that actually work, and that work as if the browser had already implemented the feature itself. This is basically one more step in future-proofing the web against all the ideas designers will come up with eventually. So I get it, we're kind of in this painful spot right now, where we have a solution for all the polyfillability, but it's not implemented yet. But I think the prospect is exciting enough to just power through this section and try to get to the end as fast as possible. If you have any questions about this, I'll be around, and I'll be happy to talk to you all. Thank you for your attention.