My name is Jafar Husain, and I work for Netflix. I'm the architect of Falcor, Netflix's open source data platform, which some of you may have heard of, and I'm the Netflix representative on the JavaScript standards committee. A lot of you are probably familiar with ES6, the most recent version of JavaScript, and today we actually have really good browser support for it, at least in the major desktop browsers. So things are looking good, and the time is right to talk about the future of JavaScript. Now, this is the last time you're going to hear me talk about versions of JavaScript. What used to be titled ES6 is now called ES2015, and going forward we're going to release a new version of JavaScript every single year: ES2015, ES2016, ES2017. The reality is these versions don't really matter, because what's basically going to happen is that at the end of every year, whatever features are ready and mature enough go into the spec and make it into the language. What actually matters with regard to future JavaScript features is the maturity stages. In order for me to responsibly talk with you about things coming down the pipe in JavaScript, given that they're not really done yet, not really baked, I have to talk about maturity stages. As features make their way through these stages, we're saying to the community: hey, these features are getting more and more mature, more and more baked. And at the end of the year, whatever has made it to stage four goes into the language spec and gets shipped.
Now, the reason versions don't matter so much from your perspective as JavaScript developers is that you can start using some of the features I'm going to talk about today right away, by using Babel, for example, which compiles new and future JavaScript features down to the existing versions of JavaScript your browsers support. And as you can see, Babel has a setting that supports multiple stages, so you can say: I want to use this stage-three feature, or even this stage-one feature. You, as developers, are accepting the risk that a feature at an early stage might change by the time it actually makes it into the standard; the later the stage, the more mature the feature and the less likely it is to change. Today I'm going to talk about a very specific question that is very important to me as a developer, one that a lot of people on the JavaScript committee stay up at night thinking about: how can we make asynchronous programming easier for you? JavaScript is a single-threaded programming language, and we do a lot of UI programming in it. To make sure our UIs are responsive, we need to write asynchronous code, because if we write code that synchronously makes a network request, it blocks, and our applications stop responding to users. So in ES2015 we introduced the notion of promises to make asynchronous programming easier for you as developers. Promises are very simple: a promise is an object that represents a value you'll eventually get asynchronously. To consume one, you call its then method and pass two callbacks.
The first callback gets called if the promise succeeds with a value, and the second gets called instead if the promise errors, and you receive that error. So promises have a very simple API, and you can take your synchronous functions and rewrite them asynchronously using promises. Here we have two sequential operations: one to get the symbol for a particular stock, and then, once we've got the symbol, a follow-up request to get the price. If we did this synchronously, things would block, so instead we have these functions return promises and chain the sequential operations using then. That's great if all you're doing is "do this, then do that, then do that." But some functions are a little more complicated. What if we try one service to get the price for a stock, but if that fails we want to fall back to another service, and we want to retry multiple times? That function is more complicated than the one we saw before, and it's not immediately obvious how to convert it over to promises. It's possible, but it would take some thinking, probably some recursion. So what we on the JavaScript committee have been asking ourselves is: why do synchronous and asynchronous programming have to look so different? There's no essential difference between a function returning a value and delivering it to you in a callback. Why should you, as a developer, have to mangle your code just to change which direction the data flows in? That's why we have a very big feature coming in a future version of JavaScript called asynchronous functions.
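The chaining the speaker describes might be sketched like this. The two service calls, getStockSymbol and getSymbolPrice, are hypothetical names, stubbed here with resolved promises standing in for network requests:

```javascript
// Hypothetical service calls, stubbed with resolved promises.
function getStockSymbol(name) {
  const symbols = { "Johnson & Johnson": "JNJ" }; // stand-in for a lookup service
  return Promise.resolve(symbols[name]);
}

function getSymbolPrice(symbol) {
  const prices = { JNJ: 104.5 }; // stand-in for a price service
  return Promise.resolve(prices[symbol]);
}

// Chaining two sequential async operations with then:
function getStockPrice(name) {
  return getStockSymbol(name).then(symbol => getSymbolPrice(symbol));
}

getStockPrice("Johnson & Johnson").then(
  price => console.log("price:", price),   // success callback
  error => console.error("failed:", error) // error callback
);
```

Each then call returns a new promise, which is what makes the sequential operations chain flatly instead of nesting callbacks.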
These are at stage four maturity, which means they're very mature and it's probably OK for you to start using them right away. So here we have that original function, getStockPrice, with its loop and its try/catch, and here is how you would write the same function asynchronously in an upcoming version of JavaScript. Notice that the code is nearly identical. The first change you'll notice is the async keyword on the function. Then we take all of the functions that now return promises, and all we have to do is put the await keyword in front of them. Although this code looks synchronous, at runtime JavaScript suspends the execution of the function until each promise resolves, and then resumes from that point. That means we don't have to worry about callbacks — notice that this function has no callbacks whatsoever — and JavaScript worries about executing our code sequentially as each awaited operation completes. We get to use all the control flow constructs we know and love, while loops and try/catch blocks, inside asynchronous functions, and we don't have to think hard about how to turn our code inside out. I think the big story for future versions of JavaScript is symmetrical support for synchronous and asynchronous programming, and that's a beautiful thing. So it's great that we can do all the same things with async functions that we can with regular synchronous functions. But we have an additional concern with asynchronous functions, and that's cancellation. Asynchronous functions, unlike synchronous functions, tend to run for long periods of time. And those of us developing user interfaces know that users are sometimes fickle, right?
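A sketch of the retry pattern the speaker describes, written as an async function. The getSymbol and getSymbolPrice services are hypothetical; here the price service is rigged to fail once so the retry loop actually runs:

```javascript
// Hypothetical services; the price service fails once to exercise the retry.
let attempts = 0;
function getSymbol(name) {
  return Promise.resolve(name.toUpperCase());
}
function getSymbolPrice(symbol) {
  attempts++;
  return attempts < 2
    ? Promise.reject(new Error("service unavailable"))
    : Promise.resolve(99.25);
}

// The same control flow you'd write synchronously: a loop and a try/catch.
async function getStockPrice(name, maxRetries = 3) {
  const symbol = await getSymbol(name); // suspends here, resumes when resolved
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await getSymbolPrice(symbol);
    } catch (e) {
      if (i === maxRetries - 1) throw e; // out of retries: rethrow
    }
  }
}

getStockPrice("jnj").then(price => console.log(price)); // → 99.25
```

No callbacks appear anywhere in getStockPrice; the loop and try/catch read exactly as they would in synchronous code.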
They open up a form, they see a loading bar, they get bored, and they hit the stop button. If they decide to cancel the operation, we want to stop doing everything — including making any further asynchronous requests — just like the user told us to. So in a future version of JavaScript, we're proposing a new notion called cancellation tokens. A cancel token is a very simple concept, really easy to understand. You create one using CancelToken.source, which hands you back a pair of two things: the actual cancellation token, and a cancel function. Let's pass this cancellation token to the asynchronous function we just saw, getStockPrice — we just pass it as another parameter. Now, if later on, while getStockPrice is doing its work — making a request to resolve the stock name to a symbol, then making a follow-up request to get the price — we decide we don't want to continue, we just invoke the cancel function. That causes the token to become cancelled, and whenever getStockPrice finishes an asynchronous operation, before going forward, it checks that cancellation token. If it sees the token is cancelled, it just throws, and because it's throwing, it never invokes our callback, so the operation is effectively cancelled. That's how cancellation tokens work. Now let's see what this is like for the person actually writing getStockPrice, because they have to accept this cancellation token. Let's take a look at what getStockPrice now looks like.
So here's the async function we saw earlier, using the new async keyword, and here's what it looks like to write an asynchronous function that supports cancellation. As you can see, we accept the cancellation token, and after every await statement we call cancelToken.throwIfRequested(). In other words, if somebody called the cancel function we saw earlier, this throws right here, and the rest of the code never runs — notice that getSymbolPrice never gets executed — so we save a whole network request, because the caller wants to cancel. But that's a lot of boilerplate: if every single time we write an await we also have to write cancelToken.throwIfRequested(), it adds up, and I think most of us wouldn't want to write it. Part of the promise of asynchronous functions is that you don't have to write much more code than you would for synchronous functions. So how do we accomplish that? We add a special contextual keyword, await.cancelToken, which effectively replaces the code you saw earlier — these two pieces of code are equivalent. With that, we can support cancellation in our asynchronous functions by adding only one extra line of code. So hopefully that's clear: cancellation tokens are pretty simple. They're just a shared object that you pass around between your asynchronous functions, and if you cancel one, it's the responsibility of the asynchronous function to throw and stop executing. So asynchronous functions — I hope a lot of you are excited, because they're pretty great for those of us writing a lot of async code. A lot of us are probably using promises already — by the way, can I see a quick show of hands, who's using promises out there? A lot of people, right?
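The shape of the cancel-token API described above can be sketched in plain JavaScript. Note this is illustrative: the cancellation proposal never became part of the standard, so CancelToken, source, and throwIfRequested are the names from the talk, not a shipped API:

```javascript
// Minimal sketch of the cancel-token shape described in the talk.
// All names here are from the proposal and are illustrative only.
const CancelToken = {
  source() {
    let cancelled = false;
    const token = {
      get requested() { return cancelled; },
      throwIfRequested() {
        if (cancelled) throw new Error("Operation cancelled");
      }
    };
    return { token, cancel: () => { cancelled = true; } };
  }
};

// An async function that cooperates with cancellation:
async function getStockPrice(name, cancelToken) {
  const symbol = await Promise.resolve(name.toUpperCase()); // fake lookup
  cancelToken.throwIfRequested(); // bail out before the next request
  return await Promise.resolve(42.0); // fake price request, never reached if cancelled
}

const { token, cancel } = CancelToken.source();
cancel(); // the user hit "stop" before the work finished
getStockPrice("jnj", token).catch(e => console.log(e.message)); // → Operation cancelled
```

The key design point is cooperation: cancellation only takes effect at the checkpoints the async function itself chooses, which is why the proposal added sugar to avoid writing those checkpoints by hand.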
So this is pretty powerful syntax that you can use right out of the gate. Now, there are other types of functions in JavaScript too, some of them also introduced in ES2015. One of those is the generator function. Quick show of hands, I'm curious: how many people use generator functions, maybe indirectly through libraries? Fewer hands — but it's a very powerful type of function. A generator function is basically a function that returns multiple values, and when it's finished, it tells you: hey, I'm done. In older versions of JavaScript, if you wanted to return multiple numbers, you'd probably just create an array of numbers and return it. This is a little different: we see this yield keyword, and yield looks a little like a return — that's a good mental model for thinking about how it works. A generator is sort of like a function with multiple return statements. It lets you return multiple values, but instead of collecting them all up in an array and returning them, you return them progressively, on demand, as the consumer requests them. That turns out to be a pretty powerful thing. Here's how you use a generator function. When you call one, you get back an object called an iterator. Some of you are probably familiar with the concept of iteration — it's been around in computer science for a long, long time, twenty years, possibly longer. You get an iterator, you call next, and you get back a pair of values: one is the next value you requested, and the other is a boolean saying whether or not there's any more data.
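The yield-and-next handshake the speaker walks through looks like this in practice (the values 42, 39, 19 are the ones used in the talk's diagrams):

```javascript
// A generator yields values progressively instead of returning an array.
function* nums() {
  yield 42;
  yield 39;
  yield 19;
}

const it = nums();      // calling a generator returns an iterator
console.log(it.next()); // { value: 42, done: false }
console.log(it.next()); // { value: 39, done: false }
console.log(it.next()); // { value: 19, done: false }
console.log(it.next()); // { value: undefined, done: true }
```

Each next() call resumes the function body from the last yield, so the values are produced lazily, only as the consumer asks for them.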
It turns out we have more data, so we call next again, and once again we get a little pair that says: here's your number, and yes, we have more data. We call next again, and finally we get a pair where the done property is now true, so we know we're done. That's how a generator function works: it allows the consumer to pull values out of a function progressively, effectively evaluating the function lazily. For those of you curious about how yield works under the hood, one way to build an intuition is to imagine how you would write that iterator yourself if you had to. You'd probably have to set up a state machine. The example here is the Fibonacci sequence, which is of course the ubiquitous programming example. We return an iterator — that's the outer object with the next method — and each time you call next, the function checks what state it's in. It effectively enumerates every possible state the function can be in and keeps moving through those states: every call to next advances to the next state and returns one of these pairs. That's all it's doing — building a state machine. Generators are really powerful because you don't have to build that state machine yourself. You just write your code top to bottom, a little like async functions: the code runs until a yield comes up, the function suspends, and then it resumes from that yield point the next time you call next. To get a mental picture of how iteration works: you have a consumer and a producer. The consumer requests the iterator and begins pulling values out — we pull out 42, we pull out 39.
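The hand-written version the speaker alludes to might look like this: a Fibonacci iterator built as explicit state held in a closure, which is roughly the state machine the engine generates for you when you write `function* fibonacci() { let [a, b] = [0, 1]; while (true) { yield b; [a, b] = [b, a + b]; } }`:

```javascript
// A Fibonacci iterator written by hand, without the yield keyword:
// the "state machine" is the (prev, curr) pair captured in the closure.
function fibonacci() {
  let prev = 0, curr = 1;
  return {
    next() {
      [prev, curr] = [curr, prev + curr]; // advance to the next state
      return { value: prev, done: false }; // infinite sequence: never done
    }
  };
}

const fib = fibonacci();
console.log(fib.next().value); // 1
console.log(fib.next().value); // 1
console.log(fib.next().value); // 2
console.log(fib.next().value); // 3
```

With yield, the engine tracks the suspend point for you, so the generator version reads top to bottom instead of as bookkeeping.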
Finally, we make our last request and get a pair back that says: 19, and done. That's how we know we're never supposed to call next again. So generator functions are great, but they have to produce values synchronously, and that doesn't make them very useful for some of the asynchronous streams we have to deal with. What are some examples of asynchronous streams of data in the browser? Well, one that a lot of us have had to work with is web sockets. The reason I can't use generator functions for web sockets is that when you get a value from a web socket, it arrives asynchronously — it calls you — whereas when you call next on a generator, it blocks until the value is delivered. So it's unfortunate that we have this nice kind of function in JavaScript for requesting streams of values, but we can't use it for web sockets. What's the solution if we want to create an asynchronous stream of values? In a future version of JavaScript, we're going to have the asynchronous equivalent of generator functions, and this will let you work with streams of I/O data, like web sockets, in a much more elegant and fluent way. Here's how we can consume iterators: earlier I showed calling next over and over, but thankfully in ES2015 we have nice syntax for that, a little for...of loop, and you'll soon see a similar syntax for consuming asynchronous iterators. But here's how the producer looks. Say I want to write a generator function that reads data from a file. I can actually implement this in Node today, because Node supports synchronous I/O — but I don't recommend it, because if you use synchronous I/O in your web server, you won't be able to serve multiple concurrent requests while you're reading data from the file.
So while this is possible, you probably shouldn't do it. Here's what the asynchronous equivalent looks like in a future version of JavaScript. I'll show that again in case anybody missed it — not too shabby, right? Once again, all we did was add the async keyword and put await in front of any asynchronous operation. In Node, you'll be able to write code like this that consumes data from files without blocking. That's the producer; now let's go through the consumption side. Consuming this is pretty much the same as the previous example: we request an asynchronous iterator and call next — but now, when we call next on an asynchronous iterator, instead of getting back a pair of a value and a boolean indicating whether we're done, we get back a promise of that pair. So I call then and log the result, then call next again because I'm not done, and the code on the left-hand side resumes from where we left off. I keep calling next until I get my final value. An iterator of promises of pairs is the right way to think about an asynchronous iterator. Once again, let's visualize the process: we've got a consumer and a producer. The consumer requests an asynchronous iterator from the producer and requests a value, but this time the producer gives the consumer a promise; the consumer resolves that promise and gets back a value, and we keep going like that.
Notice at the end, we return the last value, but it always gets wrapped in a promise — even though we're just returning 19, it still gets wrapped — so the consumer doesn't have to worry about whether it got a raw value or a promise. That's similar to how a promise's then method works: from your function inside then, you can return either another promise or a raw value, and it all gets wrapped up into a promise. Asynchronous iterators work the same way. And then we're finally done; we stop. So we saw what it's like to produce an asynchronous stream, but what about consuming one? In ES2015 we added the nice, sugary for...of syntax so we don't have to write next, next, next in a while loop — it's a lot nicer to consume data from a stream that way. In a future version of JavaScript we're going to get for await. Everything else is pretty much the same: you just sprinkle await keywords throughout your function and it pretty much just works. You don't have to think very hard — once again, the focus is symmetrical support for synchronous and asynchronous programming. Where are we going to use this? This type is really appropriate for any kind of asynchronous I/O where the consumer controls the pace. One obvious case is web sockets, but another is Node streams, and it's quite possible that in the future both of these types will implement the asynchronous iterable contract, which means you'll be able to use for await on them inside asynchronous functions. Makes sense? Now, some streams are not very patient. They will not wait for you to be ready and request another value from them.
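Both sides of the async-iteration picture can be sketched together — an async generator producing values, and for await consuming them. (This syntax did eventually ship, in ES2018; readLines here is a stand-in for real async I/O such as reading a file.)

```javascript
// Producer: an async generator. Each yielded value arrives asynchronously,
// standing in for real I/O like reading lines from a file or a socket.
async function* readLines() {
  const fakeFile = ["line 1", "line 2", "line 3"];
  for (const line of fakeFile) {
    yield await Promise.resolve(line);
  }
}

// Consumer: for await pulls one promise-of-{value, done} pair at a time.
async function main() {
  const out = [];
  for await (const line of readLines()) {
    out.push(line);
  }
  return out;
}

main().then(lines => console.log(lines)); // → [ 'line 1', 'line 2', 'line 3' ]
```

Calling readLines().next() directly would return a promise of a { value, done } pair — exactly the "iterator of promises of pairs" described above; for await is just sugar over that protocol.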
Perhaps the most notable example of such a stream in the browser is the EventTarget API, which a lot of us in the room know better as addEventListener and removeEventListener. The thing about DOM events is that they don't particularly care whether we're ready for them or not. Whenever somebody moves the mouse, or whenever the DOM loads, it calls our callback right away, and we have to react. So an asynchronous iterator isn't really the appropriate API for that kind of thing. Asynchronous iterators are for when the consumer is in control — when you decide when to pull the next value. We need another type for these push streams. Some examples of push streams: DOM events, and setInterval, which calls a callback every few seconds. setTimeout is probably not as good an example — that's better modeled as a promise, because it always calls you exactly once. The interesting thing about the web is that there's no standard observable interface. We have an iterable interface, which is now standard in the latest version of JavaScript, but we have no standard way of observing data. We have a lot of different APIs that push values at you via callbacks, but no one interface that captures all of them in a consistent way. So we have another proposal for an observable interface in an upcoming version of JavaScript, currently at stage one. I'll tell you a little about how it works, but I think you're going to recognize a pattern as we go through it. First of all, here's what it looks like — it's actually two interfaces. We have the Observable interface, which accepts an observer, and an observer is really a batch of three callbacks: it can be pushed a value, pushed an error, or pushed a completion message.
The idea behind an observable is to take the iterator contract we showed you earlier and flip it inside out. With an iterator, as we saw, you keep pulling values out until you finally get that little pair that says you're done. And when you call next on an iterator — just like when you execute any JavaScript function — it could throw. So there are really three semantics an iterator supports: give me a value, tell me whether I'm done, and possibly throw an error. What we're doing on the committee is taking those same semantics from the iterator contract and turning them inside out for the observer contract. Instead of you calling next on an iterator, you provide the observable with an observer, and it calls next on that observer and pushes the data to you. The difference is really between pulling data out, which you do with an iterator, and having data pushed at you, which is what happens with an observer. Let's see what that looks like, with these two interfaces working together, picking up where we left off with iteration. We still have a consumer and a producer, but now the producer is going to be the observable, and instead of the consumer requesting an iterator from the producer, it provides an observer to the observable — it provides an observer to that producer. We pass in the observer, and the producer pushes data at us by calling our callbacks: observer.next pushes us 42, and it just keeps pushing. Notice the big difference here: the producer is in control, not the consumer. I don't decide when I get the next value — the producer decides — and that's how DOM events work, and that's how setInterval works, right?
So we're trying to pick one nice, consistent interface for handling this type of interaction. The equivalent of handing us a pair with done: true on it is invoking our complete callback. It's the same semantic, handled a little differently because it's more convenient to get a dedicated callback for completion, but it's the same three underlying semantics. One of the nice things about observables is that they're going to support the same cancellation mechanism as the asynchronous functions we saw earlier. You can optionally provide subscribe with a cancel token, because say you subscribe to an event and at some point decide you don't want to listen anymore. Today you would call removeEventListener; the equivalent in the new world is that you just cancel the cancellation token, and the observable checks that token and, if it's cancelled, unsubscribes for you. No more addEventListener and removeEventListener pairs — you can use the same cancellation token you're already using in your asynchronous functions and thread it right down to events. So as you can see, you call subscribe and pass in this observer object — that's what's right here, with the next, error, and complete functions — plus the optional cancellation token. That's how the observer works, and if we run this, we might get pushed three values in a row, with the last signal saying we're done. That's how you would use an observable. So which web platform APIs are going to use observables? Well, DOM events and setInterval are two obvious interfaces that would make sense to expose as observables.
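The two-interface contract can be sketched in a few lines. This is a minimal illustration of the stage-one proposal as described in the talk, not the shipped API (the proposal never advanced to the standard), and it omits the cancel-token parameter:

```javascript
// Minimal sketch of the proposed Observable contract (illustrative only).
class Observable {
  constructor(subscriber) { this._subscriber = subscriber; }
  subscribe(observer) { return this._subscriber(observer); }
}

// A producer that pushes three values, then signals completion:
const prices = new Observable(observer => {
  observer.next(42);
  observer.next(39);
  observer.next(19);
  observer.complete();
});

const seen = [];
prices.subscribe({
  next(v)  { seen.push(v); },                 // producer pushes values at us
  error(e) { console.error("failed:", e); },  // errors are pushed, not thrown
  complete() { console.log("done:", seen); }  // the push-side done: true
});
```

Note the inversion relative to iterators: the consumer hands over its three callbacks once, and from then on the producer decides when each one fires.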
I think it would be really nice if these, and any future push APIs, all used the same interface, because then we could think about them all the same way. Now, is this really that big a deal — that you don't have to write addEventListener and removeEventListener, that you subscribe to setInterval slightly differently? Yes, it is. The reason we should have one interface for all these push APIs, and one interface for all these async I/O APIs, is something very powerful: composition. We as developers need to control complexity and build large projects by taking small elements and composing them together, and one of the really powerful things about these types is that they compose. Here's an example of what that might look like. Let's say I want to take a web socket and look for big price spikes in a particular stock. Somebody could write a library like Underscore or Lodash — for those who aren't familiar, these are collections of functions used to compose types together — over our asynchronous iterable interface, and all of a sudden, with just a tiny bit of code, we can create a stream that only calls us when a price spikes. This example looks for price spikes in the Johnson & Johnson stock and consumes the data only when a spike arrives. Notice we're using a filter method here: just like you can use filter on arrays, you can use filter on asynchronous iterables, and it does the exact same thing. And this scan method, for those who aren't familiar with it, basically behaves like reduce over arrays, except it gives you every intermediate computation.
So unlike reduce, scan keeps the stream open and gives us every single price spike as it arrives, for the lifetime of the web socket. You can perform progressively more powerful computations, and those of you familiar with functional programming know this is a very powerful and fluent way of writing programs. This is one of the big wins of having a single standard interface for all these different data sources: suddenly it becomes really easy for web developers to compose them together. In addition to composing asynchronous iterables, here's one of my favorite little examples of composing observables. Imagine that tomorrow every single DOM event was exposed as an observable — an object representing the event stream, with powerful methods like map, takeUntil, and mergeAll. I can write drag-and-drop by composing together the mouseup, mousedown, and mousemove events with just a few functions. You don't have to fully understand how this code works, and there are libraries out there already to help you compose observables, because observables are just a library — you don't have to wait for this to come out in JavaScript. You can use libraries like RxJS or Bacon.js, which already exist; think of them as Underscore or Lodash for events. It's a really powerful way of building asynchronous programs, and this is just a tiny slice of code to implement a mouse drag. So I'm going to wrap up here. Honestly, I had a lot more to talk about, but I only had 30 minutes. If anybody has questions about future JavaScript features, I may actually have some slides about them.
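The filter/scan composition over an async iterable might be sketched like this. The filter and scan helpers are hypothetical, Lodash-style functions of the kind the talk imagines someone writing, and priceFeed stands in for a web-socket price stream:

```javascript
// Hypothetical helpers over async iterables (not a real library).
async function* filter(source, predicate) {
  for await (const v of source) if (predicate(v)) yield v;
}

async function* scan(source, fn, seed) {
  let acc = seed;
  for await (const v of source) {
    acc = fn(acc, v);
    yield acc; // unlike reduce, emit every intermediate result
  }
}

// Stand-in for a web-socket feed of stock prices:
async function* priceFeed() {
  for (const p of [100, 101, 130, 131, 90]) yield p;
}

async function spikes() {
  const out = [];
  // Keep only prices above 120, and track a running maximum over them:
  for await (const max of scan(filter(priceFeed(), p => p > 120), Math.max, 0)) {
    out.push(max);
  }
  return out;
}

spikes().then(s => console.log(s)); // → [ 130, 131 ]
```

Because every stage speaks the same async-iterable contract, each helper is oblivious to what feeds it — which is exactly the composition win the single standard interface buys.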
But I'm going to wrap up at this point because I'm pretty much at the 30-minute mark. There's my Twitter handle, which you can follow — I talk about a lot of TC39 stuff. If you want to start using these features, make sure to check out Babel. I'm curious: how many people are using transpilation out there already? OK, so the good news is you can use some of these features already, and some of you may indeed already be using them. You can also check out esdiscuss.org, which is where TC39 goes to engage with the community and talk about these features. If you use these features and come back and tell us where the pain points are, that's really valuable to us on the committee. That's one of the reasons we on the committee love transpilation: it gives you the opportunity to play with features now, and if they don't work for you, or there are pain points around them, you can let us know before they're fully baked and we put them in the language, where they're stuck forever, OK? Now, a question from the audience: with cool new JavaScript features arriving on top of full backwards compatibility, how do we avoid the language becoming overloaded, like C++? So, JavaScript fatigue — who's heard the term JavaScript fatigue? Oh yeah. There are a lot of people out there who feel overloaded, especially with what we see coming out: frameworks, language features. How do we avoid being overloaded? Well, the reality is, when you're learning abstractions like these — personally, I think writing that async-function example we saw earlier is a hell of a lot less complicated than using callbacks to do the same thing. My personal opinion is that a lot of us are just familiar with what we already have. Yes, we will have to learn a new thing, but personally, I would much rather use the syntax we saw today than try to build programs as a mess of callbacks.
So I think most of the features we've introduced today are gonna make JavaScript developers' lives easier. Yes, you will have to learn things. And a lot of people have a lot of anxiety about JavaScript, because look, it seems to some of us like JavaScript's moving really fast, because for a long period it was completely dormant. Basically, it didn't change for almost 10 years. And now all of a sudden, it's changing. And I'm here to tell you, it will continue to change. So I'm not gonna reassure you; it's gonna change more and more. There are more and more features that are gonna get added to JavaScript. And that's what happens with languages that people use to solve real problems. We're gonna continue to evolve the language. It really doesn't profit anybody to set a language that people are using in an active way in stone, especially when we see how quickly the web's changing. The reality is you're gonna have to learn more about it. It's an evolving, changing, living language. And so I'm not gonna reassure anybody today that JavaScript's gonna stop moving. I think it's gonna keep moving, and I think that's a great thing. Which leads me neatly into a question that's been asked multiple times, actually: with a new version of ECMAScript, JavaScript, coming out every year, will we ever be able to avoid an intermediary transpilation stage? I could give a more diplomatic answer to this, but no. You'll be transpiling forever. And the reality is, do we care? I mean, how many people here have a build step for their JavaScript? How many people's build step is actually hitting F5 in the browser? Very few, right? The reality is we now have source maps in JavaScript, which means that after you compile or transpile your JavaScript, you can even debug the JavaScript inside of your browser in the original form in which you wrote it. And that's really the right move here.
The right move here is to continue evolving the language, start using the features now, immediately, with transpilation, and provide great tooling support for allowing you to debug those features. So increasingly, transpilation feels less and less intrusive. So you'll be transpiling forever, and we should be happy about that. I don't think it's actually disrupting people's build process very much. In my experience, it's people who don't have a really good build process who are the ones concerned about this. And my advice is: get a good build process. I'm tremendously excited by your talk, actually, because it feels like there's real joined-up thinking in TC39 now, with the composability, reusing the same syntax, et cetera, and addressing real problems, not just syntactic-sugar stuff. And that's really exciting. Could you repeat the place where people can go and talk to TC39 and give feedback, please? Yeah, absolutely, it's esdiscuss.org, right? And so one of the great things, as I mentioned earlier, about so many people using transpilation is that you can try stuff out, and if you don't like it, feel free to tell us, right? We can take the criticism. And that's part of the idea behind the staging process. Before transpilation, actually, those of us on the committee were in a very difficult spot. When we designed a new feature, we had to go to browser vendors and say, hey, can you guys kind of try this out? And it turns out browser vendors are really busy. They've got a lot of things on their mind, and they're doing a lot of things, and they don't want to spend a lot of time implementing some feature that might never make it into the language spec, right?
Because then usually you also had to rely on developers or users to actually turn on a compatibility flag, because instead of just unleashing new features onto the web, where people like to take dependencies on them, browser makers wanted to put them behind this compatibility flag. So people had to go into some config menu and turn them on, and web developers, likewise, didn't want to use some feature that most people wouldn't even have available to them. So transpilation kind of breaks that cycle. It allows you to use new JavaScript features in a very low-risk way, and also provide us with real feedback about the ergonomics of a feature. So transpilation's been huge, I think, for the committee, and I think it's gonna be huge for the web. I want to reiterate as well, because I've been active in the HTML and the CSS specs and I'm deputy CTO of Opera, the browser: tell us what you need, tell us what you think, because there are a million problems to be solved in browsers, in HTML, CSS, and JavaScript. We need your feedback to tell us which of those million problems we should prioritize. Please tell us; your feedback is really important. Last question is one from me, because I have MC rights and I can ask a question of my own. Great. If you could wave a magic wand and spec a new JavaScript from scratch, just by yourself, would it look anything like the JavaScript we have now? No. No. Very diplomatic. Could you expand a little bit on how that would differ? You know, the reality is, the lion's share of JavaScript was actually delivered and designed in about 10 days by Brendan Eich. Honestly, the fact that JavaScript works as well as it does when it was designed in 10 days is an incredible feat. And I don't think you'll find many people in the industry who'll say JavaScript's perfect, it's a great language. But personally, I gotta give a lot of respect, a lot of props, to Brendan Eich, who did a hell of a lot better job in 10 days than I could have.
Now, there are a lot of JavaScript haters out there, people who think JavaScript's not a great language, particularly those who come from other languages that some people might say are better. And I have this to say: frankly, things could have gone very differently. For those of you who were around 10 or 15 years ago writing browser applications, with just a slight twist of fate, we could all be talking about VBScript right now. So, counterfactually, things could be a hell of a lot worse, right? So there's that one point. I think JavaScript is an imperfect language, and the reality is, the reason why we're gonna try and continue to evolve it, and the reason why you see people using things like ESLint rules, for example, and why a lot of us have read Doug Crockford's JavaScript: The Good Parts, is that we wanna find the core of JavaScript that's really good, right? Then we wanna build on top of that core and make that core easier and easier to use, and make it really smooth to use the good parts of JavaScript, because there is a really good language embedded in there somewhere. Beneath all the terrible parts, the automatic implicit type conversion and automatic semicolon insertion, things that we never would have done had we designed JavaScript today, there's actually a beautiful and elegant core to JavaScript. And our job is to make it easier for users to use that core, because we can't take features out. We cannot break the web. So those features are gonna be there forever, but at some point they're gonna look like fossils, because you won't wanna use them. You'll wanna use the cool new hotness, right? It's not about taking features away; it's about making it really fun and easy to use the good parts of JavaScript and building on top of them. So, unfortunately, JavaScript will continue to be a very imperfect language, and there are a lot better languages out there you could design. Personally, I think Elm is a way better language than JavaScript.
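The two "bad parts" named above, implicit type coercion and automatic semicolon insertion, are easy to demonstrate in a few lines; these are well-known examples, not anything specific to the talk.

```javascript
// Implicit type coercion: both operands get converted to strings.
console.log([] + {});  // "[object Object]"
// Loose equality coerces before comparing.
console.log(1 == '1'); // true

// Automatic semicolon insertion: a semicolon is inserted after `return`,
// so the object literal below is never reached.
function broken() {
  return
  { ok: true };
}
console.log(broken()); // undefined
```

These behaviors can't be removed without breaking the web, which is exactly the "fossils" point: the fix is linting and style conventions that steer you around them, not deletion.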
I'd much rather write most of my code in Elm, right? But you know what? I think we're gonna find a great core in there, and I think we're gonna make it very usable for the vast majority of developers. Excellent. Thank you very much. Jafar Husain, ladies and gentlemen.