All right, hi everybody, welcome to Boston. My name is Peter Pouliot. I work for Microsoft, on the OpenStack integration of Microsoft technologies and the larger OpenStack ecosystem, and I'm here today with my colleague and friend Alessandro Pilotti of Cloudbase Solutions. Yep, and we're here to discuss the latest and greatest for this release cycle for the Windows platform and OpenStack. All right, a little bit of history first. The OpenStack-on-Windows project is pretty much run here out of Boston; our continuous integration infrastructure for all the Microsoft technologies is roughly five or six blocks that way, over the river. So this is our home, and welcome. My first OpenStack Summit was Boston, the original Boston summit, so we've been working to bring Windows technologies to the forefront of OpenStack for quite some time. Okay, let's resume the slides. Our goal from the get-go has always been to ensure that Windows is a first-class citizen in OpenStack, both as a guest and as a functional component. What we're here to show you today is that Windows really is a first-class citizen: there's no difference between managing your Windows instances in OpenStack and managing your Linux instances. Our goal, as I said, has always been to make sure that your OpenStack experience with Windows is simply an OpenStack experience, so we plug in seamlessly with all the layers and practices that one would typically use for managing an OpenStack infrastructure. Now, how many of you are running Windows instances in OpenStack today?
Yeah, all right. And how many of you are using cloudbase-init? Good. So cloudbase-init is the de facto standard today for deploying Windows instances in OpenStack. It's modeled conceptually on cloud-init for Linux. The main idea is that when people deploy and use Windows instances in an OpenStack context, they get a very familiar framework. It shouldn't be something completely independent, bolted on from a totally different kind of framework; it has to feel natural both coming from the OpenStack direction (so from a Linux, cloud-init experience) and from the Windows direction, from a typical Windows Server enterprise admin model. And that's how cloudbase-init works. It's fully Python code, so it looks and feels like an OpenStack project. It's wrapped in a Windows service, so if you're a Windows sysadmin, you'll just see it as a normal service. It works on any supported Windows version, including Nano Server. It has a very extensible model based on plugins, and it supports basically every possible cloud: OpenStack is of course the main one, then EC2, Azure, Azure Stack, CloudStack, OpenNebula, MAAS, GCE, Oracle Cloud, and so on. So the idea is that you can create a single image, run it on any of those clouds, and cloudbase-init will take care of everything for you. Here is a very partial list of the things that cloudbase-init can do for you. A lot of these are new from the Ocata cycle, and of course we're extending into the Pike one.
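Conceptually, the plugin model described here can be sketched in a few lines of Python. This is a hedged illustration only: the class names, the execute() signature, and the status constants below are assumptions made up for the example, not the actual cloudbase-init API.

```python
# Illustrative sketch of a plugin-based init model like the one described
# above. Names and the execute() contract are assumptions for this example,
# not the real cloudbase-init API.

PLUGIN_EXECUTE_DONE = 0   # plugin finished; don't run it again
PLUGIN_EXECUTE_RETRY = 1  # metadata not ready; try again on next boot

class SetHostNamePlugin:
    """Example plugin: applies the hostname found in the cloud metadata."""
    def execute(self, metadata, shared_data):
        hostname = metadata.get("hostname")
        if not hostname:
            return PLUGIN_EXECUTE_RETRY
        # Stand-in for the real OS call that would rename the guest.
        shared_data["applied_hostname"] = hostname
        return PLUGIN_EXECUTE_DONE

def run_plugins(plugins, metadata):
    """Run each plugin once, sharing state the way an init agent might."""
    shared_data = {}
    for plugin in plugins:
        status = plugin.execute(metadata, shared_data)
        print(f"{type(plugin).__name__}: status={status}")
    return shared_data

result = run_plugins([SetHostNamePlugin()], {"hostname": "win-guest-01"})
print(result)  # {'applied_hostname': 'win-guest-01'}
```

Each feature in the list that follows (hostname, users, networking, and so on) is, in the real project, a plugin of roughly this shape.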
So what can it do? Setting the hostname; creating usernames and passwords; managing your admin password; setting static networking, even on Ironic, for example, to connect to the previous session; managing public keys; automatic volume expansion when you boot different flavors; configuring the WinRM HTTPS listeners; configuring the time service; automatic updates; SAN policy; licensing, so for example you can move your VM from on-prem to the cloud and get your licensing already configured, if that's something you need; setting NTP services; setting the right MTU for Open vSwitch; and of course running every type of user-data script that you might need. For more details, there's a full documentation link that you can see there. This is just scratching the surface of what cloudbase-init can do for you. How do we ensure it works? Testing is fully based on a continuous integration framework built on Tempest, plus Argus and Arestor (names coming from Greek mythology), which we use to test cloudbase-init on every possible Windows version. Imagine having to test every single patch that comes in on Windows 7, 8, 8.1, and 10, twice each because of x86 and x64, and then again on Windows Server 2008, 2008 R2, 2012, 2012 R2, 2016, and even Nano Server, which has a completely different surface from this perspective. These are the tools that we run, also as part of the Cambridge facility, right, Peter? Yep. So, as Alessandro said, we take the approach of basically proving the value of the work that we do through our testing. So we can assure you that this is all extremely high quality. These aren't toys; this is all enterprise-worthy, worthy of being consumed in your OpenStack deployment, to make sure that your Windows instances operate as well as they possibly can.
And as Alessandro said, we try to do that for every supported release of Windows that Microsoft currently has available today. Before you ask: yes, it can run on Windows XP and Server 2003, and no, we don't support XP and 2003. So if you want to run it there, it works, and we provide that, let's say, as consulting services, but we have no intention of supporting it as an upstream feature, for the simple reason that it's also not supported by Microsoft. Out of those projects, Arestor, which is the project that simulates all the possible cloud metadata API interfaces, is new in the last cycle. It's all, of course, open source and available on GitHub. This is also new: there is a Windows Server OpenStack evaluation image, right, Peter? Yep, that's currently available for OpenStack testing only. We were able to get a special license out of our legal team to allow for that. So please take it, try it, test it out in your OpenStack environment if you want, prior to baking and rolling your own images. You might be curious to know how many people are actually using cloudbase-init. Of course, like with every open source project, it's quite difficult to determine real usage. We have some basic statistics, downloads, update requests, and so on, and I can tell you that since we started counting cloudbase-init instance runs, from late 2015 until today, there are 8 million of them. So to whoever is telling you that OpenStack is not real, or is not really getting into the enterprise, or that OpenStack is not really Windows-friendly: well, 8 million instances seems like quite a big number to me. And once again, that's just since late 2015, so it's quite huge. How do you build Windows images? This is also something that everybody is always asking.
And again, these are Windows images that can run on every possible hypervisor, not only on Hyper-V. Most people of course ask about KVM, which is the most popular platform for OpenStack, but it works also on VMware, on XenServer, and so on. So here are a bunch of tools that we keep updating; there are a lot of updates from the last cycle even there, like support for Nano Server. There are a few related repositories, but let's say this is the main one. You basically run tools that generate the images for you: they generate an offline Windows image, and then run it in a Windows machine to apply all the Windows updates, plus any additional scripts that you might have. It includes the VirtIO or VMware tools for whatever drivers you need, Windows updates, custom drivers, custom applications, and so on. Another very common question is: how do you run custom user data? You can run PowerShell, CMD (meaning old-school command-prompt batch files), Bash, or Python. Determining the content type of your user data is very simple. For PowerShell, the first line starts with #ps1. That's pretty similar to the model you might be familiar with from Linux: if it starts with a typical shebang, then we know it's a Bash script. We also support the syntax that the EC2 folks use, which is simply a <powershell> tag wrapping the script. Both of them are supported, meaning that you can have the same identical user-data script work on Amazon EC2 and also with cloudbase-init. Here are just two very simple examples of things you can do: creating users, assigning local groups, domain joins, and stuff like that.
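The first-line detection just described can be sketched as follows. This is an illustrative approximation, not the actual cloudbase-init code; the exact markers (including the `rem cmd` one for batch files) are assumptions based on the description above.

```python
def detect_userdata_type(userdata: str) -> str:
    """Guess the user-data script type from its first line, roughly as
    described above (illustrative only, not the real cloudbase-init logic)."""
    if not userdata.strip():
        return "unknown"
    first_line = userdata.lstrip().splitlines()[0].strip().lower()
    if first_line.startswith("#ps1"):
        return "powershell"             # cloudbase-init style marker
    if first_line.startswith("<powershell>"):
        return "powershell (EC2 style)" # EC2 tag syntax
    if first_line.startswith("#!"):
        return "shell"                  # classic shebang, e.g. #!/bin/bash
    if first_line.startswith("rem cmd"):
        return "cmd"                    # old-school batch file
    return "unknown"

print(detect_userdata_type("#ps1\nNew-LocalUser demo"))              # powershell
print(detect_userdata_type("<powershell>\nGet-Date\n</powershell>")) # powershell (EC2 style)
print(detect_userdata_type("#!/bin/bash\necho hi"))                  # shell
```

Because the EC2-style tags are accepted too, the same identical user-data script can work on both clouds, as noted above.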
If things get more complex, you can of course run everything inside a user-data script; the only limitation is the size of the user data you can put in the metadata. But we also accept gzipped user data, which means you can pass a fairly large amount of data. If you're serious about orchestration, though, I would strongly recommend using something like Heat templates. Here are some good examples of how to create an Active Directory controller; they're all upstream. I think I can even open one here. You see there's a typical YAML file in a Heat template. Here are the resources for creating a compute resource and so on, and at the bottom you'll also find the scripts that are being executed. Here you have a .psm1 file, a PowerShell module, and also a script that performs the actual configuration: we import the module, and we simply say, hey, install an Active Directory domain controller, then perform the rest of the operations that we need. And here there's a bunch of PowerShell that shows how to actually perform the work, and how to synchronize with the rest of the systems. Going back to the templates, there's another example for SQL Server. But at that point you can really do whatever you want there. Now, we get a lot of questions around Windows licensing in OpenStack. It's basically the same as licensing any other Windows instance in your environment. If you're going to be deploying Windows Server as a component of the underlying OpenStack infrastructure, we typically recommend that you use Datacenter licensing.
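On the gzipped user-data point above, here is a minimal round-trip sketch. The script body is made up for the example; the only claim is that a gzip-aware agent can transparently restore what the client compressed before upload.

```python
import gzip

# A small PowerShell user-data script using the #ps1 first-line marker
# described earlier. The script body itself is invented for this example.
userdata = b"#ps1\nNew-LocalUser -Name 'demo' -NoPassword\n"

# The client compresses it before passing it to the cloud as user data.
compressed = gzip.compress(userdata)

# A gzip-aware init agent on the guest can transparently restore it.
restored = gzip.decompress(compressed)
assert restored == userdata
print(restored.decode().splitlines()[0])  # #ps1
```

The same detection by first line then applies to the decompressed content, so compression doesn't change how the script is interpreted.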
And in fact, Datacenter is probably your best solution in all cases. If you're going to be hosting other people's content on Windows Server, then you will need SPLA licensing. Windows licensing really is independent of the hypervisor, and it can be extremely cost-effective when you choose the right licensing model for your deployment, such as whether you're a volume-license customer or an SPLA customer. Now, we also get the question: does Microsoft support Windows on OpenStack? The reality of the situation is that Microsoft will support Windows guests when they run on Hyper-V specifically, or when they're using a certified VirtIO driver stack. And today, in order to get a certified VirtIO driver stack, you have to be using an Enterprise Linux from one of the vendors you see on that list, and you get that driver stack directly from that vendor. It is not the Fedora VirtIO driver stack that you're probably familiar with, which is available for free. If you decide to use the Fedora driver stack in your Windows instance, you essentially render that Windows instance unsupportable from a Microsoft support perspective. How many of you are using the upstream VirtIO drivers? They work perfectly fine; the only thing to remember is that Microsoft won't support them. With Hyper-V, of course, you don't have that limitation, meaning the stack is supported either way. The alternative is to go with what Peter was saying: if you're already paying for a customer support subscription for your Linux distribution, you might already have access to those certified drivers. Now, what happens if you need help directly from Microsoft? Well, you can reach our OpenStack team at Microsoft by emailing openstack@microsoft.com.
And that will come directly to myself and other individuals on our team, and we can try to direct you in the best possible way to get assistance. Okay, now let's move to the Hyper-V side of things. So, OpenStack plus Hyper-V: we've come a pretty long way. We started around Folsom, right, Peter? And we did quite a lot of work. It actually started a little before that: essentially there's been a Hyper-V driver since 2010, shortly after the original release of OpenStack, and the current iteration, as Alessandro said, has been around since 2012. All right, some unique Hyper-V plus OpenStack features. Just to make it clear: you can run Windows guests perfectly fine in OpenStack on KVM. There are, of course, some advantages to using Hyper-V as the hypervisor. We live in a world in which hypervisors are nowadays very commoditized, so it's not the end of the world if you're not using Hyper-V; KVM works for running Windows guests, and as Peter was saying before, they are perfectly supported. But there are some specific advantages that you may find useful for Windows. One of them is Windows failover clustering. People keep asking: what can I do with my virtual machines that leverage some type of host-level high availability, like VMware vMotion or Windows Server failover clustering, when I want to move them to OpenStack, which traditionally is a platform more oriented toward cattle than pets? The good thing is that we also have a driver for Windows failover clustering, and hopefully we'll be able to demo it to you pretty soon. Plus, we have RemoteFX: if you're running VDI workloads, that's definitely the right thing to use.
And there is another new feature from that same time frame: shielded VMs, which basically give you full VM encryption and isolation, with a security model that is completely independent from the underlying host. So even if somebody takes control of the underlying host, your VMs will still be safe. There's quite a lot to discuss on this topic, more than we can cover in a short session like this one, so please feel free to come to our booth, or ask at openstack@microsoft.com, as Peter was mentioning. And trust me, that's a feature that really changes the way you can think about security for virtual machines. Now, some people like to ask us: who supports using Windows and Hyper-V inside of OpenStack? Well, these are some of the ones that do. We've been working with all major OpenStack vendors from a platform perspective for pretty much the last four-plus years. We've often been told that, if you look at the ecosystem, the Windows guest is the only guest platform that's supported by essentially all those vendors, because in some cases they don't support each other. So that leads me to believe that we've got one of the best-supported guest platforms on top of any OpenStack distribution today. And we at Cloudbase partner with all the names that you see there. So if you want to add Hyper-V nodes to an existing OpenStack deployment, it's designed so that you can just add the node and make it work. Next: Hyper-C. Hyper-C is a fully converged Windows Server and OpenStack-powered cloud infrastructure. We use the best things that come out of Windows Server and the best things that we develop upstream in OpenStack: Nano Server 2016, Storage Spaces Direct, Scale-Out File Server, Hyper-V of course, Windows failover clustering, Open vSwitch, and so on. How many of you are running Hyper-V?
I forgot to ask. All right, good, nice, thank you. So one thing that we did was to port Open vSwitch to Windows. Open vSwitch is, let's call it, the lingua franca for software-defined networking in OpenStack, and we knew that it would be very difficult for Hyper-V and Windows Server to have a future in OpenStack without porting it. So we made the effort: we worked upstream in the OVS project together with the folks at VMware and the OVS community in general, and we are extremely happy with the results. Great interoperability: it works, and you can have KVM and Hyper-V nodes in the same identical cloud. All the tunneling is supported: VXLAN, GRE, STT, Geneve, and so on. We have the same ML2 OVS agent, and, also very important, we support OVN, OpenDaylight, NSX, and basically every modern type of controller. Just to be clear, that's KVM and Hyper-V side by side, without any extra configuration on your part from a networking perspective, delivering a 100% seamless, interoperable network for your guests. All right, demo time. What you have here is a fully hyper-converged OpenStack running on top of these NUCs; those are Intel NUCs here. We usually keep them at the booth, but this time we decided, hey, let's try to throw them on stage and make it work right away. We encourage anybody who wants a closer look to come forward and take a peek. The idea is that these are four hyper-converged nodes: compute, networking, storage, everything on them, running Hyper-V, Storage Spaces Direct, Open vSwitch, and so on. They don't have IPMI, because the sixth-generation Intel NUC doesn't come in a vPro model. So what we have here simulates what an IPMI normally does on a regular machine, which is pressing a button and releasing a button. And since we didn't have any other simple way to do it, we used Lego EV3 robotics.
With a custom firmware, of course, and there are some motors under here that simply press the button and automate the whole thing. If you want to see more, you can come to the booth later. There is a controller here running Ubuntu with MAAS, which performs the bare-metal deployment of all the nodes, the fully automated deployment of the entire OpenStack. This simulates what normally happens in a real data center, where you have your MAAS node and your MAAS controller deploying all the nodes of the stack. So you don't have to install anything manually. If you think about the old-school way of installing Windows Server, clicking next, next, next, and so on: forget about it. Everything is entirely automated. Okay, let me connect to it. These are the Ubuntu MAAS nodes: you can see the four NUCs, and the rest are a bunch of other machines running on the controller. What else? Here I have my OpenStack. Let's see, if I do a nova list, I should have everything. Yep. Now let's do some demos. The first thing I want to show you: I'm just RDP-ing into one of those machines. Almost... I have no networking. Demo effects. Let's see if we have it now. If you're familiar with Windows Server failover clustering, this is the cluster manager, just showing that we are connected there. And this is just one of the many compute nodes that we have. Many, well, four. And I'm just RDP-ing into it. Now let's get back to the demo: I'm fetching a network, which is called private_vxlan, doing a nova boot, then a nova list. So I'm just watching my nova list. Spawning; in a few seconds it will be up and running. What's happening is that the scheduler chose one of those nodes, each with storage, compute, and networking, to handle it. A bit of suspense.
In the meantime, what I can show you here: I'm on one of the compute nodes, and I just did an ovs-vsctl show, exactly like you would on any Linux machine, and you can see all the configuration that we have there, including all the tunnels and everything. Okay, the machine is up and running, so I should be able to go here, and you see that it's here. What's happening is that the Nova compute driver coordinates with the cluster, so the cluster knows that the machine exists. Now, before I migrate it, I'm assigning a floating IP, and I'm RDP-ing into it from another shell; you can see that the machine is here, and I can ping it. I'll leave the ping running. So this machine is now running on NUC number four. If I go here and do a Get-VM, I can see that the machine is running, and even here, the cluster tells me it's on NUC number four. So I'm live-migrating it to the first one. Now, on the cluster, I should be able to see that the live migration started, and as you can see, the Nova driver is synchronizing with the cluster to tell it to move to the other side. And voila, it's running on the first one; nova show tells me it's on node one. As you can see, the ping didn't care, it's still running. All this happened automatically, without any interruption for the user. Now, let's see what happens if something more brutal occurs, like a full failover. Let's migrate it back to NUC number four, and the next thing we're going to do is brutally shut down the node where it's actually hosted. Did I spell the name right? That's it, the NUC four name is with capitals. Now let's see where it's running. Okay, NUC one. Okay, I'm migrating it to NUC four.
I misspelled the name of the node, so obviously it didn't migrate. Okay, it's on four, and now, while here it tells me that it's active, I can go back to MAAS and tell MAAS to brutally power off that machine. So it talks to the Lego nodes. Hear it? It's pressing the button, releasing the button now, and the machine is dead. What's happening is that the cluster recognized that the node is dead, and the machine has been automatically failed over to the other side. So I, as a user, am able to simply reconnect to the machine and access it, because if I look at the nova list here, the machine is up and running again. Here you go, NUC zero-two. If you want to see more of these things running, you can come to our booth and we'll show you. So this is a big advantage: all your workloads, even if they don't have application-level failover, still get host-level failover. So essentially, what we're getting at here is that with Microsoft technologies and OpenStack, we can suit the needs of both your pets and your cattle. Yeah. Okay, time to resume our slides. If you just want to see how OpenStack works on Hyper-V, there is another open source project called v-magine that runs on anything running Hyper-V. Even if you have a Windows 8, 8.1, or 10 laptop (it even runs on a Surface) or just a Windows Server lying around, you can run it and you'll get a one-node, fully functional OpenStack on Hyper-V. It's extremely simple to use, just a next-next-next experience: the ideal thing for people learning how OpenStack works. Now, we also got a lot of questions about the performance of Windows and OpenStack. Over the last two years, we've spent a significant amount of time ensuring that we can perform as well as Linux in an OpenStack ecosystem.
What these fancy colors and numbers show you are the results of the Rally tests that we did for an apples-to-apples comparison: KVM running Linux workloads, versus Windows Server 2012 R2 and Windows Server 2016 running the same Linux workloads. And as you can see just from the colors, Windows Server 2016 basically slightly outperforms KVM for Linux workloads in an apples-to-apples comparison. There is a series of blog posts on cloudbase.it where we explain how all this works; all the tests were performed with Rally, so with open source, classic OpenStack tooling. The last ones are particularly interesting because they are based on a Hadoop cluster, so real enterprise workloads in this scenario. What we're trying to show here is that Windows is a viable alternative as your hypervisor platform. So if you have those licenses, there might be a reason to use it. We already talked about the continuous integration; do you want to add something more? Yeah, well, basically, we have about 13 different drivers that we currently test in our continuous integration today. From an ongoing perspective, we're constantly advancing that and adding new technologies as we have time and need. Our goal is to assure that Windows is a viable alternative to Linux in your OpenStack ecosystem, and the best way we can do that is to assure that we're as well tested as everything else. I'd encourage you to go take a look for yourself on Stackalytics: you can easily look for Cloudbase Solutions under CI bots and see how we compare to the others, and I guarantee you're going to be extremely surprised. So, what's coming next? How do we plan to deploy Windows components going forward? Well, we like the Kolla project a lot, I think it's brilliant, and we are basically containerizing all the Windows bits as well.
So: Windows storage, Windows compute (as in Nova compute), and networking, both for OVS and for the Microsoft network controller stack, and so on. And we will have a unified deployment model based on Kolla. There are already patches currently in the process of being reviewed and merged for an initial release, and then we will continue adding more. There is also a blog post available to give you an initial idea. If you're looking at modern Windows workloads on OpenStack, Azure Service Fabric is probably what you want. It works perfectly fine on OpenStack as well, fully automated, including auto-scaling and everything else. Docker is, of course, one of the big things that came with Windows Server 2016, and before that on Windows 10 as well, and lots of people are containerizing their traditional and new applications in the Windows world; think about ASP.NET and so on. So of course, we are going to help a lot in that direction as well, on OpenStack and on Kubernetes. At Google Next there was a presentation together with Apprenda, in which Apprenda showed a full Kubernetes deployment using our Open vSwitch and OVN model. We are now in the process of writing a CNI driver for that, and the preferred networking model for Windows containers in Kubernetes will be, of course, OVN and OVS. We also have a session about VDI, if you're curious, including a fully open source VDI project; that's tomorrow, Wednesday. I have a co-speaker there, and I have something like five or six sessions this time, so I completely messed up the order. Okay, Peter, do you want to say something about this? So, we got a lot of questions about OpenStack versus Azure Stack, SCVMM, and WAP. Traditionally, those are really suited for different purposes.
From where we stand, the work that we do in OpenStack, we don't really see it as competitive. And this is consistent with what my employer says: Azure Stack and WAP have a different purpose. WAP is a great solution to build IaaS, and Azure Stack is a great platform to run Azure services in your data center. And we look at OpenStack as the solution if you want open source, open cloud technologies alongside them. So, from that perspective, OpenStack brings the standard open source cloud IaaS APIs, while Azure Stack isn't only about IaaS: it brings the Azure services experience into your data center and into your private cloud space. Okay, I think we can wrap up with one last slide. Again, this is not a comparison between apples and apples; these are really apples, oranges, and bananas. So don't think that you can put Azure Stack and OpenStack in the same sentence as a comparison, because they are completely different things, as Peter just said. You also get an idea of the pricing model there. Azure Stack will have a pay-as-you-go model, based of course on Microsoft's announcements, and the hardware needed to deploy it will be quite hefty. Based on what is publicly known at the moment (I put a bunch of links there to the sources, which are publicly available and not necessarily from Microsoft, so again, this is not a statement from Microsoft), an Azure Stack deployment will require a significant investment in hardware. While with OpenStack, you can start as small as you want; as you can see, you can have even a small multi-node proof of concept on just some simple NUCs. Lots of people also ask how this compares with System Center, and again, even here, there is no comparison.
System Center, at least how our customers typically see it, is more of an alternative to VMware, not a real cloud alternative, not even with the web portal on top. So really, you have three completely different spaces: one where OpenStack excels, one where Azure Stack excels, and one where System Center excels. We have customers, and quite a lot of them, that are expecting to deploy both OpenStack and Azure Stack in the future. Of course, they can also communicate with each other, migrate VMs between them, and so on. But most important: don't think that they are mutually exclusive. They are perfectly valid choices that you can have at the same time in your data center. I think we can wrap up here. As we said earlier, if you have any questions for me or the team and want an answer from a Microsoft email address, feel free to email openstack@microsoft.com, and you will get in touch with myself and the rest of our team. And for Cloudbase Solutions, feel free to check their website or come see us at the booth for the rest of the week. Yeah, we are at the Cloudbase booth, of course, and there is also ask.cloudbase.it, where people usually ask questions related to our products or our OpenStack involvement, and of course the upstream ask.openstack.org. If anybody has any questions, and to respect everyone's time and not step on anybody's toes, we'd be more than happy to answer on the floor later. Or now. Go ahead. Yeah, I've got a question. I'm an OpenStack newbie; we've had it for about a month, and a good portion of that time has been spent trying to get images presented to the environment. I stumbled across the Cloudbase tools, and I've been able to successfully use them to create 2016 Standard and Nano images. Nice. But Server Core is a totally different story. I'm curious if you have support for Server Core image generation?
We do support Server Core, definitely; that's what we use most of the time, actually. Yeah, I keep getting a bootres.dll error, but I'll follow up. Okay, did you ask the question on ask.cloudbase.it? No, not yet. I'd suggest you come down to the booth and we'll have you talk to an engineer. Okay, thank you. Awesome. Anybody else with questions? Then we'll see you all later, and have a great week here in Boston. Thank you.