So, my name is Niklas Blum, and I lead product management for WebRTC. I'm very happy to be here with my two colleagues.

I'm Per Åhgren, I'm a software engineer on WebRTC, and I work on front-end audio processing: echo cancellation, noise suppression, and related things.

And I'm Justin Uberti. It's great to be here today. The quality of the presentations at this event has been phenomenal, and I particularly enjoyed that most recent presentation by Philipp Hancke. I actually learned a few things, so that was great.

We're here to present a little bit of where we see WebRTC at the moment, and these are the stats. We're not actually going to talk about getStats; rather, these are the statistics we see in Chrome, mainly from users who opt in to provide this data to us. We'll also bring some news about what's coming in the next quarter, and then Per will do a deep dive into the WebRTC audio processing pipeline.

So, where are we now? In June we celebrated the fifth anniversary of WebRTC. On June 1st, 2011, our colleague Harald announced on an IETF list that we were releasing WebRTC to the open-source world, and that we would start defining a set of APIs on top of it to grow an ecosystem and unlock real-time communication for apps and services.

And this is what you see now. It was just announced last week at the Chrome Dev Summit that there are about two billion Chrome browsers out there across web and mobile, and all of them are WebRTC-enabled endpoints that can be used. On top of that, there are hundreds of millions of endpoints from Firefox and from Edge, and just recently a group of open-source developers enabled WebKit with OpenWebRTC. That makes me personally very happy, because it shows the open-source community taking up the responsibility of enabling even more endpoints with WebRTC.

But those endpoints are not just WebRTC-enabled, they're actively being used. From Chrome usage alone, we see about one billion combined audio and video minutes per week. That is roughly 2,000 years' worth of audio and video communication happening inside Chrome every week, driven by the services you create and by all the users who make use of WebRTC.

And it's not only audio and video communication. The data channel is often forgotten, and I think it's very impressive that we also see one petabyte of data per week transferred over the data channel. It's not often that I get to talk about petabytes and networks in the same sentence. That amounts to about 0.1 percent of all HTTP traffic in Chrome, and it's continuously growing; I think you showed this after I/O, and we see very nice growth here.
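To ground those data channel numbers: as a minimal sketch, this is roughly what a data-channel-only WebRTC session looks like from the web API side. Signaling is elided, and `sendToPeer` is a hypothetical helper for your own signaling channel, not part of WebRTC.

```typescript
// Minimal data-channel sketch; signaling is application-defined.
declare function sendToPeer(desc: RTCSessionDescription | null): void; // hypothetical helper

const pc = new RTCPeerConnection();
const channel = pc.createDataChannel("transfer", { ordered: true });

channel.onopen = () => {
  // Large transfers typically send binary payloads in chunks.
  channel.send(new Uint8Array([1, 2, 3]).buffer);
};
channel.onmessage = (event) => console.log("received", event.data);

pc.createOffer()
  .then((offer) => pc.setLocalDescription(offer))
  .then(() => sendToPeer(pc.localDescription)); // hand the offer to your signaling channel
```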
These aren't being used by just one company or one service. Saar is tracking this for us, and there are more than 1,200 projects and companies that make use of WebRTC. I think that really shows the success of bringing the technology into open source and making the codecs royalty-free: anybody can build on top of the stack and add real-time communication, or any kind of peer-to-peer traffic, to their service.

And this is happening globally, not just in a few selected countries. This is the Google Trends data we see for searches for the term "WebRTC," and you'll see that the US is not even in the top five or seven here. It's also continuously growing, with some outliers. This is how we observe WebRTC growing, and how the developer ecosystem around it is growing as well. It's interesting: China in first place, South Korea second, Taiwan third, and Sweden fourth. And that's not because the WebRTC team has something running there.

So what are the improvements that we've invested in and launched in the last six months? In the last presentation, from fippo, we saw something about bandwidth estimation (BWE), video codecs, and audio performance that we've been investing in, plus some changes and additions in Chrome.

At the beginning of this year, Chrome needed two seconds to ramp up to a one-megabit-per-second video stream. We have since switched the mechanism to send-side-only bandwidth estimation: the whole bandwidth estimation logic sits on one side and depends on the feedback it gets from the other side. That reduced the ramp-up time to one megabit per second to the 650 milliseconds we are at now, and it's already being used by various services. Additionally, we're working on including audio in the bandwidth estimation as well, and on the header extensions, to make it more robust. We've also improved how WebRTC competes with TCP streams: if you happen to be in a very boring meeting and start watching a YouTube video at the same time, that will no longer ruin your audio quality.

And we did what we committed to do: we added H.264 in Chrome. We use OpenH264 on the encoding side, and we continue to use the FFmpeg decoder that has always been in Chrome on the decoding side. I've recently been pinged by folks who observed that H.264 in Chrome needs less computing power than they expected. The reason is that on certain platforms we have also enabled hardware codecs, to make it more efficient, generate less heat, and not spin up the fan, if a service chooses to make use of H.264.

But we've not only enabled H.264; we've also enabled VP9, and I think Vidyo gave a great presentation about this. What I want to show you here is a comparison. These are two streams running at full HD: a VP8 stream encoded at 900 kilobits per second and a VP9 stream at 650. I'm playing the video now, and I have a slider here that I can move from left to right. This is Marco and Jackie: Marco is VP8 at the moment, Jackie is VP9. There's no visible difference, and that's one of the great things about VP9: you spend 30 percent fewer bits and get the same quality. It is more expensive computationally, and we have enabled it in software in Chrome. AppRTC has it enabled as the default codec right now.
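For developers who want to opt into VP9, a hedged sketch: modern browsers expose setCodecPreferences for this (at the time of this talk, the same effect was achieved by reordering payload types in the SDP).

```typescript
// Prefer VP9 on a video transceiver; the connection falls back to other
// codecs if the remote side doesn't support VP9.
const pc = new RTCPeerConnection();
const transceiver = pc.addTransceiver("video");

const codecs = RTCRtpSender.getCapabilities("video")?.codecs ?? [];
const preferred = [
  ...codecs.filter((c) => c.mimeType === "video/VP9"),
  ...codecs.filter((c) => c.mimeType !== "video/VP9"),
];
transceiver.setCodecPreferences(preferred);
```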
We're also looking at how to bring VP9 to mobile. It is more complex and more expensive on those platforms, but at low resolutions or low bit rates it can run on mobile. The advantage comes when you want to bring VP9 video to mobile at low bit rates, where you don't have much bandwidth available: you can run VP9 at smaller resolutions, it works fine on mobile, and you get to use that additional encoding efficiency, the 30 percent fewer bits.

But it's not only video; we're investing in audio as well. Opus is the preferred audio codec (yes, iSAC is still there). We're working hard on making Opus more efficient and on improving its quality for speech, but also for content beyond voice. An additional focus is to keep pushing it toward what we call ultra-low bit rates: at 12 kilobits per second and below, you can run an audio call on an EDGE network on mobile, or on Wi-Fi with highly varying bandwidth, and still be ready to ramp the quality back up.

What just launched in M54 is a new screen-sharing picker. We added tab sharing to Chrome, which services can make use of, and with this we took the opportunity to completely revamp the UX of the picker. We enabled audio sharing as well: if you want to, you can share the audio from the tab too. The picker is now separated into three tabs: you can share your entire screen, select an application window, or select just a specific browser tab to share. This is also useful if you don't want to expose your complete tab list at the top when you have many tabs open. You can test this at the URL shown; you have to install a custom extension, and then you can try out the tab sharing and the audio sharing.

As mentioned in the earlier presentation, we have added screen capture to Android, and this has just launched or is launching now. It's already being used: it's part of the Pixel launch, so for those of you who have a Pixel phone, don't know how to use it, and call the help hotline, you can actually share your screen with the support agent.

And not all administrators are happy to open up their whole network for WebRTC traffic. Especially in enterprise environments, UDP is often blocked, or UDP is limited to a specific set of ports. What has just been enabled is a Chrome policy with which you can define a port range and limit WebRTC to exactly the range the administrator has opened in the firewall, and folks are working at the moment on bringing this into the admin console so it can be rolled out to a managed corporate network.
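For illustration, the policy in question is WebRtcUdpPortRange. As a sketch, a managed-policy file on Linux (where Chrome reads JSON files from /etc/opt/chrome/policies/managed/) restricting WebRTC to a port range might look like the following; the range value here is just an example.

```json
{
  "WebRtcUdpPortRange": "36000-36999"
}
```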
With this, I would like to hand it over to Justin to talk about the upcoming work.

All right, thanks, Niklas. There's a bunch of stuff we could spend a lot of time getting into details on; we could talk about all the things that Philipp brought up in his presentation. We're not going to cover all of that today, because I want to make sure we have time for Per to get into the details of how the WebRTC audio stack works. What I really want to do is talk about some of the top pain points, the things you have come to us about and said, "you really need to fix this," and the status of those issues. So let's get into that.

These are some of the things that have been identified as the top pain points for people deploying applications with WebRTC today, the things that are really not working. For the most part, WebRTC is getting toward a "mission accomplished, 1.0 is done" state, but there are still a few places where things are not quite there yet.

One of those places in particular is corporate networks. Niklas talked about what we did for restricted port ranges, where the firewall configuration on the local network only allows WebRTC out through a certain set of ports and WebRTC needs to honor that. We have that in WebRTC today. But one thing we're still seeing missing is the case where enterprises, banks, corporations don't let any UDP out of the network at all; they force everything to go through a proxy. WebRTC will always take some quality hit in this case, because media is forced over TCP and has to traverse the proxy. But in some of these deployments you even have to log into the proxy, and while regular web traffic works, the WebRTC traffic doesn't, because right now Chrome doesn't know how to send WebRTC traffic through proxies that require authentication. This is a real problem we've heard about from many customers who need this to work on their networks; there's a bug open right now with over 100 stars. The reasons are deep and complex, and I won't get into all the detail, but the basic issue is that the web stack has its own set of credentials and credential caches, and WebRTC needs access to those without polluting the state of the actual web network stack. We have work going on here in collaboration between the WebRTC team and the Chrome networking team, and we expect we'll have something you can start playing with by the end of this quarter. We're making real progress on it, so look for something that actually works very soon.
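In the meantime, a service can at least detect when a session has ended up on a TCP relay and is likely to take that quality hit. A hedged getStats sketch follows; field availability varies by browser.

```typescript
// Returns true if the active candidate pair runs over a TCP relay
// (e.g. a TURN/TCP or proxied path); assumes `pc` is already connected.
async function usingTcpRelay(pc: RTCPeerConnection): Promise<boolean> {
  const report = await pc.getStats();
  for (const stats of report.values()) {
    if (stats.type === "candidate-pair" && stats.nominated && stats.state === "succeeded") {
      const local = report.get(stats.localCandidateId);
      return local?.candidateType === "relay" && local?.relayProtocol === "tcp";
    }
  }
  return false;
}
```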
Next, media reliability. This is one of the other things we hear time and time again. We say WebRTC is done, it just works for the most part, and yet we still hear cases where someone says, "my customer was on a Mac and we didn't get any audio from their mic; we told them to restart their browser, and after that everything worked." It's good that there's a workaround, but that's really not what we want to have happen. We want this to just work, all the time.

It turns out this is really complicated due to the way Chrome is designed, where all interaction with the system's media subsystems, audio and video, is managed by the Chrome browser process. For those not familiar with the architecture of Chrome: every tab, every website, has its own renderer process that does all the layout and drawing of the actual HTML, but all interaction with the operating system is done by a single browser process. The problem is that the browser process lives for the entire time Chrome is up. So if something gets wedged, some driver issue, some bad interaction like Core Audio leaking resources, the only way to get back to a good state is to take down the entire browser. That's frustrating.

The other thing is that the browser process does a ton of other work, not just the media interactions; anything that has to interact with the OS goes through this one process. So there are cases where things get blocked in the browser process due to other handling that's going on, and for something trying to do 30 fps streaming video, that can cause small glitches that lead to render lag, or even echo cancellation problems, because the timing gets thrown off a little.

We are going to fix this with a new architecture Chrome has called Mojo. Mojo is basically a way for us to take pieces that perform specific tasks and move them into their own processes that Chrome can spin up on demand. So we'll take all of our interactions with the audio subsystem, Core Audio and so on, and move them out into their own process, and do the same thing with video device enumeration and capture. These processes will only run when there's actually a WebRTC session going, which means that code is only loaded when necessary. And we can bounce these subsystems: if someone says "my tab didn't work," you can just close the tab, reopen it, and everything should be back in a good state. There should also be many fewer cases where this happens at all, because we no longer have this long-running interaction with the system, which is what we believe is the underlying cause of some of these problems. And since this all happens in its own process with its own main thread, other work happening in the browser process won't interfere with the timing of the critical real-time events we have for audio and video. And perhaps best of all, a bad webcam driver will not cause the entire browser to explode.

So, lots of upside. The downside is that this is a significant re-engineering effort, and it's going to take some time to get through, but we're hoping that this quarter we'll have the video capture work pulled out into its own process, and then next year achieve the same for audio. We'll see how quickly we can actually complete this work, but we think it will help us get from roughly two nines to three or four nines of actual audio and video reliability, which will make a huge difference in us being able to say that WebRTC just works.

Next, screen sharing. Historically, screen sharing has mostly meant sharing a presentation, a spreadsheet, et cetera. But more and more, people are trying to share an application, a game, or even a YouTube video, and today you're given two choices: either a very slow, jerky video that everyone sees on screen and gets sad about, or your fans spin up because you're trying to scrape the screen 30 times a second, with the attendant CPU overhead. So we've been building a new, much more optimized capturer on Windows, and a corresponding one on Mac, that takes out some of the things that were slowing us down in the old version. It will be engaged when the frame rate for the screen capture is set to 15 or 30 fps, and it will make things much, much more efficient. This basically opens the door to actual streaming of games, which we know a lot of people use WebRTC for, as well as videos. There's still some work we'll have to do on the encoding side to keep up with this; scraping the screen is only the first part, and we'll have more coming to make sure we get really good image quality for these streams. But we expect to see some significant improvements in the next quarter or so.
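From the web side, the frame rate that engages this optimized path is just a capture constraint. Here is a sketch using the modern getDisplayMedia API; at the time of this talk, Chrome screen capture went through getUserMedia plus an extension instead.

```typescript
// Ask for a 30 fps screen capture; the browser treats this as a target,
// not a guarantee.
async function startScreenShare(): Promise<MediaStreamTrack> {
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: { frameRate: { ideal: 30 } },
  });
  const [track] = stream.getVideoTracks();
  console.log("capture settings:", track.getSettings());
  return track;
}
```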
And lastly, we often hear from people trying to make WebRTC work on an IoT device or something similar, who say: I just want a data channel; I don't want all the video processing stuff, because my app doesn't need it; and in order to make WebRTC build for my configuration, I had to go in and slash and hack just to get the thing to compile. Part of the reason things are the way they are is that WebRTC grew up organically and was then moved into Chrome, so there's code that's interwoven in various ways. Well, we have the same needs: we need to make WebRTC work in a lot of different places. So we're going to chop back some of these dependencies, eliminate the places where they are cyclical, and allow much easier customization through our GN build configuration, without having to go in and change the source code; a sketch of what that could look like follows below. This will be a multi-quarter effort, but what you can expect by the end of it is that you can have a WebRTC build that is voice-only or data-channel-only, or have specific codecs removed, or add your own codecs through APIs similar to the ones we already have for injecting a video codec. This makes the maintenance cost of integrating and customizing WebRTC much lower.

So, these are the things we've heard: most of the small stuff in WebRTC has been taken care of, and these are some of the few remaining big things. We're making really good progress in these areas, and we expect to have some really tangible stuff to show in the next quarter or two.
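As a rough illustration of where this is heading, such a build customization would live in a GN args file. The argument names below are assumptions based on flags the WebRTC build exposes, not a definitive list.

```gn
# Hypothetical args.gn for a slimmed-down, data-channel-oriented build.
rtc_include_opus = true      # keep Opus (could drop for data-channel-only)
rtc_use_h264 = false         # drop the H.264 encoder/decoder
rtc_enable_protobuf = false  # drop protobuf-dependent tooling
rtc_build_examples = false   # skip example apps
```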
With that, I'll turn it over to Per for a deep dive into WebRTC audio.

Thanks. So, I'm going to talk about the audio processing development we do in WebRTC at Google. It will maybe not be a super deep dive, but we can do the deep-dive part in the Q&A afterwards. We write the audio processing algorithms, maintain the code, handle incoming issues, maintain the audio processing pipeline, and write a lot of tests. Much of the work is done in response to issues we see with the audio processing, both in software and in the hardware audio processing that we utilize, but we do it with long-term improvements in mind. I'm going to go through the software audio processing pipeline we have in WebRTC, how we utilize the hardware audio processing support available on mobile devices, and the tuning process used to tune that hardware processing. The tuning is quite important to understand in order to see why we're seeing the issues we're seeing with hardware audio processing on mobile, which I'll cover after that, along with the solutions we've applied to handle them and the ongoing work we have on the software audio processing.

So, this is what our audio processing pipeline looks like. This is the standard functionality; there's also experimental functionality, but I won't discuss that in this talk. The audio processing pipeline resides inside the audio processing module, a module inside WebRTC. It receives the audio coming from the network via the decoder, analyzes it, and passes it on to the loudspeaker; the loudspeaker is the small box in the upper-right corner. It also receives the audio from the microphone, the small box in the lower-right corner, processes it, and passes the processed audio on to the encoder, which passes it to the network. The functionality in place here is partly what's required to have a successful call at all, and partly what improves the quality of the audio beyond that.

We have basically two types of processing components inside the audio processing module: most of them operate in the subband domain, on frequency bands, and some operate on the full-band signal. To provide the subband signals to the subband components, we have blocks that do down-mixing when needed, resampling, and band-splitting into the frequency bands, and then merging of the bands and up-mixing wherever needed.

The first processing done on the microphone signal is the high-pass filter. Its purpose is to provide a decent signal for the other modules to operate on. For instance, an echo canceller has quite big problems handling signals with a DC offset, and the same is true of the noise suppressor, so these modules would typically need to take care of that themselves; here, it's taken care of by the high-pass filter. The high-pass filter also has the task of removing electrical hum picked up by the microphone.

Then we have the level control, or gain control, which controls the level of the output of the audio processing module; its task is to ensure the output has a decent level. There are four variants of it. The rightmost box here is the analog adaptive gain control, which adjusts the analog microphone gain. Then there are two variants of the digital adaptive gain control, in the two left boxes, which adjust the digital level of the signal. And there's one more mode, which applies a fixed gain in a controlled manner.

Then we have the echo canceller. Its purpose is to remove any echoes that originate from the loudspeaker signal and are picked up by the microphone. The echo canceller analyzes the signal going out for rendering, and predicts and removes the echoes from the microphone signal. Then we have the noise suppressor, whose task is to reduce stationary noise, in order to increase listener comfort and decrease listener fatigue. And then we have a component called the transient suppressor, which has the task of removing sounds originating from keystrokes.

And finally, we have the output signal analysis component, which provides information about the outgoing audio, such as the signal level and the presence of voice, to other modules inside WebRTC.
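As a concrete tie-in for web developers: these pipeline components map onto standard getUserMedia constraints, so a page can toggle them per track. A minimal sketch:

```typescript
// Each constraint toggles the corresponding audio-processing component
// (or its hardware equivalent, where one is used) for the captured track.
async function captureProcessedAudio(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({
    audio: {
      echoCancellation: true, // echo canceller
      noiseSuppression: true, // noise suppressor
      autoGainControl: true,  // adaptive gain control
    },
  });
}
```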
On mobile platforms, we try to utilize whatever hardware audio processing functionality is available. From WebRTC's point of view, the hardware audio processing is a layer that sits between the loudspeaker and microphone on one side and the audio processing module on the other. What we do is: if a certain piece of functionality is available in hardware, we turn off the corresponding functionality in the software audio processing module. For instance, if hardware echo cancellation is available, we don't do software echo cancellation. The reason is that if the hardware audio processing is properly tuned and optimized, it should do better: our software audio processing is generic, it has to work on all kinds of hardware, whereas tuning for a specific piece of hardware typically gives better results. The functionality can also be customized to the hardware; for instance, with multi-microphone hardware you can use multi-microphone noise suppression. And it should potentially give lower battery and CPU usage. For the echo canceller, there is typically a really big advantage to doing it in the hardware layer, since there are no render effects in the echo path as seen by the canceller, which is typically not the case for the software echo canceller.

To understand how hardware audio processing functionality behaves in practice, it's quite important to know how it is typically tuned. The mobile device to be tuned is placed in a silent room, and a software client is installed on it. This client communicates with another client located in a control room. The tuning rig also has the capability to play out audio in the silent room, and to capture any audio present there. The tuning is done by creating a number of scenarios: different kinds of noise are played out in the silent room, and different conversational scenarios are run, with double talk and single talk. For each of these scenarios, the audio received from the device being tuned is analyzed together with the audio captured in the room, and based on this analysis a new set of parameters for the device is computed. Those parameters are then uploaded to the device and the scenario is repeated, until sufficient quality is achieved. This is a very time-consuming process, and it's typically done manually.

It's important to note that for the VoIP case there is no standardized tuning client, but AppRTC Mobile can be, and has been, used for this. And it's quite important that the tuning for these devices is done including the network and the software client: any active software processing done in the client will affect the tuning, which means the client used really matters for getting a good tuning. For instance, if a high-pass filter is active in the client, that will affect the tuning.
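As an aside, one signal a tuning or monitoring client could look at is the echo-canceller metrics that the getStats spec defines. A hedged sketch follows; these fields are optional and often absent, so treat them as best-effort.

```typescript
// Log echo return loss (ERL) and its enhancement (ERLE) for audio sources,
// where the browser exposes them.
async function logEchoMetrics(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stats) => {
    if (stats.type === "media-source" && stats.kind === "audio") {
      console.log("ERL:", stats.echoReturnLoss, "ERLE:", stats.echoReturnLossEnhancement);
    }
  });
}
```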
Another thing about the tuning of the hardware parameters is that each feature combination is stored as a separate profile inside the device. So if there are several features, typically gain control, echo cancellation, and noise suppression, all combinations of these need to be stored as separate profiles on the device. This is quite error-prone: if you update the echo cancellation parameters based on some tuning, you need to make sure you update all the profiles where echo cancellation is active, and that's quite easy to miss.

And indeed, we are seeing issues with the hardware audio processing support. It's important to note that this is beyond the control of WebRTC: we can choose whether or not to use the hardware audio processing functionality, but we cannot make it work correctly, because it's in hardware. Yet we are still affected by any issues that arise from it.

The main issues we're seeing are related to the tuning of the hardware processing: poor noise suppression, poor near-end transparency, echo leakage, poor bandwidth, and low signal levels. We also see issues with broken hardware API support, which again is solely in the hardware, not in WebRTC. For instance, on one device, if we explicitly try to turn on the gain control in hardware, that breaks the hardware echo canceller: it starts leaking echoes. Similarly, we've seen that if we try to turn off the hardware noise suppression, the hardware echo canceller starts leaking echoes. And we're also seeing silently failing hardware. We have one case where the echo canceller suddenly and permanently stops working after having been fully functional for quite a while, and the only way to get it working again is a software reset. Similarly, there are cases where we sometimes get silent microphone signals, and WebRTC gets no notification that this has happened.

I have some examples. This first one is from a scenario where there was only echo coming from the loudspeaker and no near-end signal, so everything picked up by the microphone was echo. The hardware echo canceller was active, which means that if it worked properly, its output should be silent, or close to silent. What's shown in these figures is, on the left, the spectrogram of the microphone signal, and on the right, the waveform. This signal is not silent. In this case it was because we had tried to turn off the hardware noise suppression, which caused this device to suddenly start leaking echoes; what you see here are the leaked echoes. If we leave the hardware noise suppression on, this doesn't happen, and we get a silent output from the hardware echo canceller.

I have one more example, where the capture level is low. This is also the spectrogram and waveform of the microphone signal, and in this scenario only near-end signal was present; there was no echo.
We could not get the captured signal to have a higher level than this. The range shown is the 16-bit integer range, so the figure shows what would have been possible, but we couldn't get more. And this has a severe impact on a conversation: when you send this to the other side, the other side will perceive the audio as very, very low, which, digitally, it is.

We try to address this, of course. One thing we do is use allow and deny lists: we detect on which platforms the hardware audio processing is OK to use and on which it is not, and when it's not OK, we revert to using the software audio processing instead. But this is really hard to scale, and these lists are very hard to maintain; essentially, we need to test every device we do this for. It also gives up performance compared to the ideal case: if the hardware had been tuned properly, we would likely have gotten better audio quality by using the hardware audio processing than by using the software audio processing, which is what we have to do in these cases.

We also work with vendors to ensure that devices are properly tuned. One thing we have planned there is an objective evaluation tool based on AppRTC Mobile, which vendors can use to simplify the tuning process. And we work on improving the software audio processing to ensure that when we don't use the hardware audio processing functionality, the difference in quality is not noticeable when we fall back to software.

We have ongoing work on improving the software audio processing, and the main work is currently being done on echo cancellation. The figure shows the major components of an echo canceller: a delay estimator, a linear adaptive filter, and a nonlinear processor. We have in place some refinements to the adaptation of the adaptive filter, and we have also robustified the delay estimator; these two changes lead to more robust echo cancellation behavior. We have ongoing work on improving the echo removal and the transparency of the echo canceller. What we're doing there is changing the adaptation scheme for the linear adaptive filter, changing the way we compute the suppression gains that are applied in the nonlinear processor, and making the sub-modules inside the echo canceller more integrated, so that they talk to each other more. This will probably end up as a solution with lower complexity; it should be more future-proof for upcoming changes to the pipeline, and more modular and easier to maintain.
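For orientation, the linear stage in an echo canceller of this shape is typically an adaptive FIR filter updated along NLMS lines. A textbook formulation, not WebRTC's exact algorithm, is:

```latex
% x(n): far-end (render) samples, y(n): microphone signal,
% \hat{h}(n): estimate of the echo path, e(n): residual after linear cancellation.
\begin{aligned}
  e(n) &= y(n) - \hat{\mathbf{h}}(n)^{\top} \mathbf{x}(n) \\
  \hat{\mathbf{h}}(n+1) &= \hat{\mathbf{h}}(n)
    + \mu \, \frac{\mathbf{x}(n)\, e(n)}{\mathbf{x}(n)^{\top}\mathbf{x}(n) + \varepsilon}
\end{aligned}
```

In this picture, the delay estimator aligns the far-end signal with the echo in the microphone signal so the filter stays within its modeled span, and the nonlinear processor then suppresses whatever residual echo the linear stage misses.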
A related thing we're working on is gain control. One thing we have in place is a new digital adaptive gain control mode that is able to operate on lower-level signals. We're also working on analog adaptive gain control improvements that will affect echo canceller performance: we're making sure we better handle the case where the echo is saturated in the microphone signal, since the way the analog adaptive gain control handles this has an impact on how well the echo canceller performs. We're also modifying the way we detect soft saturation in the microphone signal, and adding more integration between the analog adaptive gain control and the echo canceller. That is really important, because the analog adaptive gain control constitutes a big source of artificial echo path changes as seen by the echo canceller, so anything it does affects what the echo canceller does.

So, we are working on echo cancellation and gain control, but we're doing much more as well; we just won't go through that today.