Hello. Hey everybody, thank you for coming. We tempted the demo gods here, and they're not playing along. So we're going to get started, but we're going to have to improvise. OK, are you guys going to continue working on this over here? So we can do the slides? OK, so we've got slides. Welcome. Thank you very much for coming. We're going to be talking about vCPE today, virtual customer premises equipment. So I'm Bill Bowman with Canonical; I manage strategy and content for our cloud marketing team. And with me is Rafael Gonzalez, one of our solution architects, a new guy. Hi, I'm Valentino Nario with PlumGrid. That was some introductions. Yeah. So I think we have the most colorful polos of the entire show altogether. We actually thought we would have three microphones, so it's a little awkward. Bear with us. We'll get through this. So we're going to give an introduction here. I'm going to give you some context as to why Canonical is up here with PlumGrid. PlumGrid could certainly do a vCPE demo on their own, but we didn't feel that would really give us the context of a more real-world scenario. Let's take these virtual functions, let's do NFV, let's do SDN, and put it in the context of OpenStack. So that's what we're showing here. We're going to do OpenStack, but we're also going to use some of the Canonical software along with that. So of course we have Ubuntu OpenStack running over here to run the demo. But we're also using Juju. I'm going to give you the quick overview of Juju, a little introduction to how it works and why we're using it, to give you the context for why we're doing this with PlumGrid. So it's about application modeling. And it's about reusable components, reusable operational components. What that gives us is economies of scale. And achieving that scalability through reuse requires encapsulation.
And we're just going to skip straight through here to what we're talking about when we talk about encapsulation. So Juju — and I apologize, normally we do like 45 minutes on Juju, so this is going to be the really quick version. If you have questions, please feel free to ask. But Juju uses the concept of charms, and charms are where we encapsulate the intelligence. So it's kind of like having a Debian package or a Docker image, but it's much more intelligent than that. Charms are intelligent programs that go along with your applications to provide things like relations. And relations are how Juju pulls this all together. So what we have here is a PlumGrid charm. Over here we have a Neutron charm. In a full deployment, and Rafael will show you this shortly, you'll actually see tons of charms go in a bundle. But I just want to explain what the charms are and how they work to give you context there. So Rafael, you're going to be my PlumGrid charm. I'll be the Neutron charm. Because we are very charming. Rafael, what are you providing for me over there? Well, as you said, the charm is an encapsulation, but it's not actually carrying any software, right? There's no software in the charm itself, as in the actual application package. It's more a description of what the software can do, such as what relations it can establish — such as, I know how to talk to Bill, and Bill likes to talk to me, hopefully, right? And that way we can actually establish a relation. So when we talk about the various functions in OpenStack itself, such as the Neutron API, and the PlumGrid components as well, they know about each other and they know what they provide and what they require. And with the combination of these charms, we can put a set of complex software together. So essentially it's the integration, encapsulated. So Rafael's providing for me the Neutron API PlumGrid interface. It's an explicitly defined relation.
And I, the Neutron charm, am consuming that relation. When I consume that relation, we define multiple actions and things. So we actually define the network ports, the network mappings — whatever's required to make Neutron communicate with the PlumGrid ONS charm over here — those are all defined in the operational code of the charm. So we don't have to manually configure that every time. So when PlumGrid goes out to do a vCPE demo, like we've done here, we use Juju to put these things together, and we don't have to redo that integration every time. And of course you can have multiple different relations established, and as long as they have the same interface name, they will try to communicate. One will be a provider, one will be a consumer. Right, so OpenStack itself is a set of complex services that need to be linked up together, along with an SDN, in this case PlumGrid. And this takes most of the heavy lifting out of the equation. So getting your environment up and running quickly is the idea here. We're looking to make deploying a PlumGrid solution with OpenStack as easy as possible, so you don't necessarily have to bring significant OpenStack expertise to get it running. Yeah, so essentially use more software and hire fewer consultants. And it drives the economic cost of an OpenStack cloud or large solutions down. PlumGrid isn't the only NFV charm out there. We have a whole ecosystem around it. We put some on the slide here just to demonstrate that it's a complete ecosystem and it's growing rapidly. So the other aspect of this: Juju deploys that software, but the other thing it's doing for us here is it's an open source, generic, virtualized network function manager. If you're familiar with NFV or VNFs, you need something to manage the VNFs. Most VNFs will have their own VNF manager. There are some other VNF managers, but they may or may not be open to everything, since charms can go with any application.
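To make the provider/consumer idea concrete, here is a toy Python model of how two charms match on a shared interface name and exchange relation data. This is purely illustrative — real charms use Juju's hook tools and charm code, and the interface name and settings below are made up for the sketch:

```python
# Toy model of Juju's provider/consumer relations (illustrative only;
# real charms use Juju's hook tools, not these made-up classes).

class Charm:
    def __init__(self, name, provides=None, requires=None):
        self.name = name
        self.provides = provides or {}   # interface name -> settings offered
        self.requires = requires or []   # interface names this charm consumes
        self.config = {}

def relate(provider, consumer):
    """Join two charms on an interface name they both know about."""
    for interface, settings in provider.provides.items():
        if interface in consumer.requires:
            # The consumer reacts to the relation data, e.g. by writing
            # the ports and network mappings into its own configuration.
            consumer.config[interface] = settings
            return interface
    raise ValueError("no matching interface")

# The PlumGrid charm provides a Neutron plugin interface;
# the Neutron charm consumes it (all names here are illustrative).
plumgrid = Charm("plumgrid-director",
                 provides={"neutron-plugin-api": {"port": 8080,
                                                  "network-map": "mgmt:eth1"}})
neutron = Charm("neutron-api", requires=["neutron-plugin-api"])

iface = relate(plumgrid, neutron)
print(iface)                              # neutron-plugin-api
print(neutron.config[iface]["port"])      # 8080
```

The point of the sketch: neither side hard-codes the other; they only agree on the interface, which is what lets the same charm be reused across many deployments.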
They can be written for any VNF. They can be written in any programming language, which allows us to give you a completely generic manager for all of your VNFs. Here we're showing you'd have virtualized infrastructure management — we're using OpenStack for that. We're using Juju for the generic VNF manager. We're not demonstrating orchestration today; that's at the very top. But then you'll also see that PlumGrid is over here in the NFV architecture, providing both SDN and VNFs to us. So the VNFs are up here and the NFV infrastructure is down here. So what we're talking about here is the age of big software. Big software is similar to big data. What we mean by that is we've taken software that used to be monolithic, or run on a specific device, or run on a single server, and we started to break it out, and it's gotten so big, much like big data, that you can't do it on one machine anymore. There isn't one single box that's going to run an entire VNF solution. And when we start to scale like that, we see a phase change. It's about the scalability, the topology changes, and how we manage these things: if we start having more and more servers, we don't want more and more people. We need the people to have more and more intelligent software. And that's what we're seeing here with the phase change into big software. All right, so switching gears a little bit, and just to give you some context around the SDN piece of this demo before we jump into the demo itself. For those of you that are not familiar with PlumGrid, we provide multi-tenant virtual network infrastructure for OpenStack. We have a Neutron plugin and we are integrated with all the major distributions, obviously including Ubuntu and Canonical. As they described, we have the ability to be fully automated in a deployment with Canonical.
And some of the key things that we provide from a networking functionality perspective is a broad portfolio of what we call virtual network functions, or VNFs — from bridging to routing to NATting to security, analytics, and other interesting things. We also provide very strong micro-segmentation functionality and the ability to bring visibility and operations into your SDN layer in OpenStack. One piece of technology that I want to mention, which sits under the covers of the PlumGrid solution, is an open source component that we contributed back to the Linux community. It's now a Linux Foundation collaborative project, and it's called IO Visor. And you'll hear me refer to this over and over again throughout the entire 40 minutes, because IO Visor is what makes the whole vCPE use case that we're going to discuss today possible. It's how we bring programmability and extensibility to the Linux kernel. It's the ability to define, from a high-level perspective, programs that do things like bridging or routing or storage or security or something else that might come to your mind, and at runtime insert those in the kernel. So this just gives you an idea of how we make the magic happen. It's a very important piece of the puzzle. The other key thing that PlumGrid brings to the table is the ability to do what we call service insertion architecture, or in an NFV world, service function chaining. So we define these bubbles that we call virtual domains, which can be assigned to applications or users. And inside these virtual domains we can chain network functions. We can build what we call topologies, and when your traffic traverses these topologies, it goes through all these services.
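The service-function-chaining idea above can be sketched as an ordered pipeline that every packet traverses. This is a toy Python model, not PlumGrid code — the function names, the drop rule, and the addresses are all made up for illustration:

```python
# Toy sketch of service insertion / service-function chaining: a
# "virtual domain" here is just an ordered chain of functions that
# each packet traverses in turn. Everything is illustrative.

def bridge(pkt):
    pkt["hops"].append("bridge")
    return pkt

def router(pkt):
    pkt["hops"].append("router")
    return pkt

def firewall(pkt):               # stands in for a third-party VNF in the chain
    pkt["hops"].append("firewall")
    if pkt["dst_port"] == 23:    # e.g. drop telnet
        return None
    return pkt

def nat(pkt):
    pkt["hops"].append("nat")
    pkt["src"] = "203.0.113.1"   # rewrite to the public address
    return pkt

virtual_domain = [bridge, router, firewall, nat]

def traverse(pkt, chain):
    for fn in chain:
        pkt = fn(pkt)
        if pkt is None:
            return None          # dropped by a function in the chain
    return pkt

pkt = {"src": "192.168.1.10", "dst_port": 443, "hops": []}
out = traverse(pkt, virtual_domain)
print(out["hops"])   # ['bridge', 'router', 'firewall', 'nat']
```

Inserting a new service (the orange, third-party VNFs mentioned later) is just adding another function into the chain, which is the operational point of the architecture.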
And those services can be the purple services, which is what PlumGrid provides through IO Visor itself, or they can be third-party components — any of the VNFs from the Canonical ecosystem that Bill mentioned earlier, which we can insert very easily inside one of these bubbles here. All right, so this hopefully gives you an idea of the pieces underneath. And we're here to show you, obviously, more on the vCPE side. And Rafael hauled this big box all the way here. So Rafael, what's the problem that we're solving today? Well, the challenge with the CPE is that normally you have a complex device that's sitting out somewhere that's out of your control, right? How many times has your mom called saying that she can't get online? It's perhaps not so easy to actually walk her through checking DNS settings or the wireless configuration on that access point. And there are many complex services that actually run in that home router. So what if you could actually separate the control of the device from the actual functions? That's what we're here to demonstrate today. So when we think about the VNFs running in that device, there's generally control that you have to have over those VNFs, right? But the challenge is that you might not have access to configure that device. And the person at the location may not have the expertise or knowledge to configure it. Or you may not actually want that very intelligent person at that remote location to hack into that device. So that's the value of IO Visor with PlumGrid. And the other angle of this is obviously the business angle: if you are a telco and you want to figure out ways to bring new services to your users, right?
Having a device that sits somewhere out there that's really hard to upgrade, really hard to add new software to, really hard to add new services to, makes it not a good value proposition. So the whole idea is: how do we shift this model of delivering these services to the end user in a way that really enables new business models, as well as a lot of agility from an operational perspective? So new business and operations are the key drivers for this revolution of the CPE model that we see here. What we see as the natural next step is to take some of the commonly known concepts of separation of control plane and data plane, which you've certainly heard about, especially in the context of software-defined networks, and look at how we apply those to the CPE scenario. So what is the definition of control plane and data plane? Take a router, for example: there is one function, the data plane, which deals with forwarding and routing packets. And there is a function, the control plane, which is running all the routing protocols. The whole idea of this separation is that those two functions do not need to coexist on the same physical device. They can be split logically, and you can have the control plane running on one device, for example in the cloud, and the data plane running somewhere else — a software data plane now, which can be a generic compute node that performs these functions. So as we apply these concepts to the cloud model and to the CPE model, we commonly refer to this as the cloud vCPE scenario. And this is what the industry refers to when you talk about vCPE: the whole idea of moving some of the functions that we used to have in that home device over there and pushing those into the cloud. So obviously this has a lot of benefits.
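The router example above — control plane computes routes, data plane only forwards — can be put in a few lines of toy Python. This is a sketch of the concept, not of any real routing stack; the classes, prefixes, and interface names are invented:

```python
# Toy illustration of control/data plane separation for a router: the
# control plane runs the routing protocols and computes the table; the
# data plane only does lookups and forwarding. The two can live on
# different machines. All names and addresses here are made up.

class ControlPlane:
    """Could run in the cloud: computes routes (stand-in for OSPF/BGP)."""
    def compute_routes(self):
        return {"10.0.0.0/8": "eth0", "192.168.1.0/24": "eth1"}

class DataPlane:
    """Could run on a generic compute node: forwards by table lookup."""
    def __init__(self):
        self.table = {}

    def install(self, routes):
        # The only control->data interaction: routes get pushed down.
        self.table = routes

    def forward(self, prefix):
        return self.table.get(prefix, "drop")

cp = ControlPlane()
dp = DataPlane()
dp.install(cp.compute_routes())
print(dp.forward("192.168.1.0/24"))     # eth1
print(dp.forward("172.16.0.0/12"))      # drop
```

The narrow `install` interface is what makes the physical split possible: the data plane never needs to run a routing protocol itself.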
You remember earlier we were talking about operations. Now they all sit in a central location, right? And also from an extensibility perspective, those are all software functions now, so it's a lot easier for an operator to start adding things on top of that. But so what's wrong with this model, Bill? What we do see here is that the model works when everything is in the same data center. But when we start moving these devices out, and we're talking about customer premises equipment, we've now designed a model where we're going to start passing a lot of traffic out just to come back, because we haven't actually addressed what we're doing with that separation of control and data. We're used to having control and data tightly linked in a LAN or a data center. What we see here is that this is not a LAN. The LAN is in the house over there. This is some sort of wide area network, and we're now creating too great a distance between that control plane and data plane for it to naturally work with a traditional NFV infrastructure type architecture. So while there are obvious benefits — the device in the home is now a much simpler device, just running one function, which is providing the layer 2 connectivity to your home, and you can have all sorts of fancy functions in the cloud — you now have all this traffic tromboning back and forth. So how do we improve this model? That's really the core of the presentation today: how do we propose a model that can build upon this concept of cloud vCPE and help with some of the challenges we just explained? All right. So if we go to the next slide, this is what we commonly refer to as the improved vCPE model, or the tethered CPE model. So let me quickly — no, slowly, sorry, not quickly — walk you through this slide, because there's a lot of stuff in there.
And let me just bring back one concept that I described earlier, which was this IO Visor thing. The whole concept of the tethered vCPE use case is that we want rich functions still running in the home device, so that you have an optimal communication path for local devices there without having to connect all the way into the cloud, while retaining the two key benefits of the cloud model, which were a simplified operational model and much greater business agility. All right. So what we look at here is to say, well, I want to bring the data plane function all the way into the CPE device itself. And to do that, I need something that allows me to run all these functions on a very simple component. And so IO Visor, as I mentioned earlier, is a Linux kernel component. It's something that runs in Linux, so every device that runs Linux is capable of having these IO Visor functions in there. It enables me to have all sorts of network functionality — switching, routing, NATting, security, for example — locally deployed in your home device. Now, if you remember, the initial CPE model had these same functions running there, but the challenge was that there was no way to remotely control these functions from a central location. So the whole idea with this tethered model is that you have the data plane implementation of these functions running in the home device, and you have the control plane of all these components running in the cloud. And because of this extensible architecture that IO Visor provides, when you, as a telco, want to start adding new services, you can just deploy one of these VNFs from the cloud, and it will magically appear on the home device.
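A toy model of this tethered split: the control plane in the cloud pushes function configuration down to a data plane on the CPE, and local traffic is handled entirely on the device. This is only a sketch of the concept — the class names, the address check, and the return strings are all invented for illustration:

```python
# Toy sketch of the tethered model: the control plane for each function
# lives in the cloud, while the data plane runs on the home CPE device
# (think in-kernel IO Visor programs), so local traffic never leaves
# the home. Names and behavior are illustrative only.

class CpeDataPlane:
    """Runs on the home device."""
    def __init__(self):
        self.functions = {}              # name -> config pushed from the cloud

    def deploy(self, name, config):
        self.functions[name] = config    # "magically appears" on the device

    def handle(self, pkt):
        if pkt["dst"].startswith("192.168."):
            return "switched locally"    # LAN traffic stays in the home
        if "nat" in self.functions:
            return "NATted locally, sent upstream"
        return "sent upstream un-NATted"

class CloudControlPlane:
    """Runs in the data center: operators configure devices from here."""
    def __init__(self, device):
        self.device = device

    def add_service(self, name, config):
        self.device.deploy(name, config)

cpe = CpeDataPlane()
cloud = CloudControlPlane(cpe)
cloud.add_service("nat", {"public_ip": "203.0.113.1"})

print(cpe.handle({"dst": "192.168.1.20"}))   # switched locally
print(cpe.handle({"dst": "8.8.8.8"}))        # NATted locally, sent upstream
```

Note that only configuration crosses the WAN; the packet path itself never trombones to the data center, which is the whole argument of the tethered model.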
So the whole idea is to bring together the best of the two worlds — the classic CPE design and the cloud CPE design — and merge them into a model where you have all these advanced functions running there and the remote control in the cloud. So to illustrate Valentino's point here: when we previously had just the CPE device here, it had all of the intelligence, the management, the operation, everything. In a traditional NFVI architecture, we would simply separate data and control. But now, if I wanted to implement a virtual firewall, my data would have to go here, talk to the firewall, come back here. And that's the trombone, the back-and-forth effect. What we're doing here with the tethered model is, when we implement the firewall, we actually put it here, but the control plane for it is still here. So we've split up those virtual functions. Essentially, it's like a split VNF. It gives you a much more elegant solution, and it gives a service provider the opportunity to generate revenue. And that's the economics here: we can provide new services and generate revenue on a box that traditionally was a necessity, but purely a cost. It wasn't a revenue-generating opportunity. I know we've covered a lot of ground, but if there are any questions, you guys jump in — we want to make sure that it makes sense for all of you. Are we clear so far? No hands? All right. So we're going to jump into the live environment in just a sec here. And before we do that, we can just kind of walk you through what you're going to look at so that you guys can navigate it. Thanks, Valentino. Yeah, let's go over there. Valentino, do you want to take them through the color coding here? Yeah, so this was just one thing that I wanted to cover. So as you can see — should I jump down as well? Let's do it. It's fun. All right. So as you can see here, we're using two different colors for the different functions.
So anything that's purple, those are the functions that PlumGrid provides natively. So imagine that all those functions are completely distributed within your kernel. And those functions are switching, routing, NATting, DHCP, DNS, and those sorts of things. Anything that's orange, those are the third-party components that we integrate into our virtual domains abstraction. And those can be anything: an advanced stateful layer 4-7 firewall, deep packet inspection, something that takes care of WAN optimization, a load balancer — anything that you might want to use in there, and so on and so forth, right? So that's the color coding. Every time through the demo you see us looking at purple icons, those are in-kernel components, and anything that's orange is part of the VNF portfolio that Bill was illustrating earlier that gets integrated into the solution. So Rafael, how are we doing on our demo here? I think we're good to show here. There's a question. Yes? Great, okay. So the first thing: are you going to put DHCP and DNS in the home? Is that correct? I mean, the DHCP server? The question was, are we going to put DHCP and DNS in the home, or in the data center? Oh, yes. Yes. So the idea is that there are certain services that you want to run in the home. Something like a DHCP server, you want it to run locally in that device. Likewise DNS — the functions that you actually need at the device will run within the device. Right, so then my question goes to: if you put the DHCP at home, then your application is at the data center, so you have to route traffic back to the data center first, and then back to the home again, to go through the NAT. So the function is actually running in the device, but the control of how it's configured is happening in the data center. We're not actually going to the data center for DHCP.
Yeah, but you cannot put all of the application data plane at home, because then a bunch of the application will be sitting at the data center side rather than the home side. It's going to be a mixed bag, right? You're going to have some applications running in the home, things like basic switching and routing, your DNS and DHCP functions, your NAT. Anything that is an advanced function — for example, if you want to do stateful firewalling, or, as I said, deep packet inspection — those sorts of heavy-load applications will be running in the cloud. Yes, let me repeat again. So if you put those applications in the cloud — and what applications do you mean, VNFs? VNFs, I mean, yes — then you will have to route all the traffic to the data center first, then back to the home, to go through the NAT. No. NAT, for example, is running in the home device. We'll go into the demo, and maybe it will make more sense then, but functions like NAT will run inside the CPE device itself. So that means your traffic goes back to the data center, back to the home, and then through the NAT? No, no, it's going to run in the vCPE device itself. That's exactly what you see on the screen here. Those functions, those VNFs, are running here. This is my CPE device. Including NAT. Yes, yes, exactly. So what you see there, this topology here, where you see there's a bridge, there's a router, there's NAT, there's the CPE and VNF — it's all running in this little box here, which is our CPE device. Yeah, personally, I work for China Mobile, and I'm also working on this, so I'm working on the details of this. So my question — what we are discussing in our company is that some applications, I mean for the virtual CPE, are sitting at the data center.
For those applications, you route traffic from home to the data center, and you have to route back through the NAT, so that's a round trip, and that's a real cost. So exactly, that was our point. Right. So you're absolutely right. So if we go back to the slides for a second. Oh, sorry. Where do you want to go? Yeah, there. So let's just go back for a sec. This is the model you're talking about, right? If you're running your applications in the cloud, you're absolutely right: if you have two devices connected to your metro network, and all the functions are running in the cloud, what's going to happen is that your traffic is going to go all the way to the data center, where you have your functions, like NATting for example, and then it's going to go back to the metro network, and that's very inefficient. We absolutely agree with you. So what we're going to show here is instead a model where we push the most commonly needed functions to the home device itself, right? So we completely remove the need to go to the data center. The only thing that's running in the data center, which might be a little confusing, is the actual control plane for those functions. Yeah, I heard the point — it makes sense to separate the data and control plane, one on the home side, the other on the data center side — but you cannot decouple every application. Absolutely not. So that's where the question goes... Yes, absolutely. So that's what we're illustrating here. If you were going to any sort of content provider — like if I was going to Amazon to watch a video — I would have to connect to Amazon's data center, and then it would have to come back to me. So yeah, if we're implementing deep packet inspection, or a CDN, a content delivery network, it will have to go to the data center, but it doesn't create multiple trips. It goes directly to the service provider, and the data comes back.
If it goes to the service provider for deep packet inspection, it's still going to go out to the internet and come back through, but it doesn't create double traffic, as opposed to the implementation where I put DPI in the house. Right, so any traffic that has to stay in the home doesn't have to go back and forth to the data center; it's isolated in that device. With any device attaching to the CPE itself, unless it has to go get content, as Bill mentioned, the traffic never actually leaves the home. Yeah, so that's... I want to make sure — sorry, I just want to give a chance also to the other gentleman there that has been standing for a while. I just wanted to ask: as we go through the demo, these are obviously at-scale environments, so I wanted to make sure that we see how a stacking of policy could work — for example, a global, then a regional, and then an individual policy — because you're going to have a million, two million, three million users inside these environments, with two, three, four hundred possible variants of configuration, and trying to manage them on a per-implementation basis is almost unmanageable. So as we go through the demo, could you show features of that kind of policy stacking? Yeah, we can — we'll talk about it as we go through the demo. Do you want to talk a little bit about your Director then? Yeah, so obviously, when you start looking at extremely large scale, there are all sorts of interesting challenges, right? One of the things is — I work a lot with customers, and we see some of our partners here as well — when we jointly work with customers, we start talking about what your scale requirement is and what architecture you want to leverage for these deployments. And if we talk about OpenStack specifically, right?
There is a very commonly used model for scaling OpenStack, which is that you start looking at the size of one cell, and that's usually your confidence level with a failure domain, right? So whether that failure domain is 250 compute nodes, or 500, or 1,000, whatever it is — that's one cell that you start deploying. Now, the beauty of a model where you have OpenStack and software like PlumGrid running is that both of them provide a consolidated, single API layer that you can use for the configuration of the cloud services, as well as for each CPE device. For PlumGrid, each CPE device is this concept of a virtual domain, and these virtual domains can be templatized, first of all. The deployment of these virtual domains can be fully automated, so you can just script the creation of them, and it really has no extra cost to deploy 10 of them or 1,000 of them. Now, obviously, I wouldn't claim that a single OpenStack-plus-PlumGrid deployment will go to the level of scale that a global deployment of CPE devices requires. So certainly, as these technologies become more widely adopted in an NFV environment, we will see what we call either federations or multi-cell deployments that will help you scale past the single-cell limitation. Yeah, so from the NFV infrastructure perspective, at the infrastructure layer, we're using OpenStack to manage that. Most people here are probably familiar with scaling OpenStack and that sort of thing. When you go up to the application layer — the "what are we doing" layer — with the PlumGrid stuff, that's where virtual domains map to your infrastructure and you get that manageability for scalability. There is a central point of configuration for all the policies. So for your vCPE devices, we wouldn't go and configure each vCPE device individually.
We will have, from this central control plane, the ability to say: Rafael's CPE device looks like this, and it's a template, and I'm just going to customize IP addresses and whatever it is, right? And Bill's CPE will look probably exactly the same, just with a different IP configuration. And I have the ability to push both of these configs from a central point. So that's the beauty of this central control plane, which we fully embrace, coupled with a distributed data plane. Right, and that's what you see here on the screen. The first device in this list here, labeled gateway two, is this little guy. So as Valentino mentions, you can have templates for these. If you have many of them — gateway X — you have a predetermined configuration for each of those devices, and you manage them from the data center. As for what the network looks like: in our orange box here, we're running OpenStack, right? So here we have the metro network. This is where our customers will be connecting in, and then they come into the data center, where we can do the heavier lifting. So if you want to do DPI, for example, that will be running here in this network, or additional routing policy. You have that here, and then you can let that customer back out to the world. So all of this is running in a data center, and that's represented here in our virtual domain view. And here is where the CPE is connecting. Yeah, so that's the uplink from the CPE device, and as you can see in this topology, we have a couple of routing devices — those are speaking OSPF and exchanging routing information — and we have two third-party VNFs that we have integrated in there. One is actually a VM running a web server, and one is running a routing function. Just to show you, this could be anything — could be, as I said, DPI, a QoS device, a firewall, anything.
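The templatized virtual-domain idea described above can be sketched as a template instantiated per device, with only the addressing customized. This is a toy Python sketch — the field names, subnets, and device names are all made up, not PlumGrid's actual data model:

```python
# Toy sketch of templatized virtual domains: one template, customized
# per CPE device (IP addressing etc.) and pushed from a central control
# plane. All field names and addresses are invented for illustration.

CPE_TEMPLATE = {
    "functions": ["bridge", "router", "dhcp", "dns", "nat"],
    "lan_subnet": None,       # filled in per device
    "public_ip": None,
}

def instantiate(template, device_name, lan_subnet, public_ip):
    domain = dict(template)              # copy so the template stays pristine
    domain["device"] = device_name
    domain["lan_subnet"] = lan_subnet
    domain["public_ip"] = public_ip
    return domain

# Scripting the creation costs the same for 10 devices or 1,000.
fleet = [instantiate(CPE_TEMPLATE, f"gateway-{i}",
                     f"192.168.{i}.0/24", f"203.0.113.{i}")
         for i in range(1, 4)]

for d in fleet:
    print(d["device"], d["lan_subnet"])
# gateway-1 192.168.1.0/24
# gateway-2 192.168.2.0/24
# gateway-3 192.168.3.0/24
```

Every device runs the same function set; only the per-device fields vary, which is what makes managing a fleet of thousands tractable from one control point.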
And then that's your external connectivity out to the WAN, to the internet, right? And then if we go down to the CPE device network for a sec, this is what we were looking at earlier. This is what we're connecting into. So that little port at the very top is what this guy here is plugged into. It's just a logical representation of your connectivity point. Actually, it's the bottom one. The bottom one is the physical, and this would be the wireless access on top. Yeah, sorry, the wireless access is the top. And so those are your DNS and DHCP functions that are running there, your NAT function, and that's your physical exit point. Okay, please. I guess I see — what's the business point of pushing more complexity down to a home device? I can see vendors like Motorola, with the Surfboard modems in the old days, being standardized, but from a consumer perspective, you're just giving me more things to hack around with to change my QoS. And then secondly, is it a state-sponsored telco that wants this stuff, or is it for economies of scale, or money? Frankly, I don't want traffic monitoring at my home. I want to run my own DNS. So can you give me the telco business case for this in a better sense? Do you want to take it? Yeah, so what we're doing here is we're actually simplifying your device. Today you have a lot of functions that are running in the CPE device itself, right? So what we did here is start by analyzing what that device is, and we're saying it can be any device that just runs Linux. Right, and this is Ubuntu Core running here. Okay, and so it simplifies the device itself, and therefore the operational model of that device. So from a telco perspective, instead of me having to — which I'm sure you all have done plenty of times, right? — oh, my cable box is not working anymore.
Okay, let me go to the XYZ vendor store with my box, physically go there, get them to change it, upgrade it, blah blah blah. Here, it's a Linux device; it can get upgraded much more easily, and it can be operated a lot more easily. So that's the simplification part. Let me just take one second on the user side, and then I'll let you jump in. The other thing that's important here is that the control plane does not run on the device itself. So as the home user, you actually don't get to hack around with it. From a telco perspective, they have this central control plane, and on your Linux device all these functions are running in the kernel, so you don't have access to those functions. Right, so you're talking about, perhaps, being very technical and actually wanting that control. From a telco perspective, they actually wouldn't want that, right? So let's say you're a business customer; you may not have your own IT support, right? So we're going to do that for you. And by placing all these functions in the kernel, it makes the device very secure. So you can't actually go in there and affect your QoS. If you want better service, then you go back to that service provider, and perhaps they have different options for you depending on what services you want. But from a telco perspective, they have the control. And then the value of placing those functions there also means that, as the gentleman from China Mobile was saying, I think, the traffic is not going back and forth to the data center, as most other VNFs are designed to do today. We have the primary functions running in the device at home, giving you that service, but you don't really have access to it. And as far as hacking it, the Linux kernel is very secure today, right? So you're running Ubuntu Core, or perhaps, as we're now talking to some partners about, Snappy, if you have heard of Ubuntu Core Snappy; running that on this device makes it very secure.
And as Valentino was mentioning, it also provides that telco service provider the ability to upgrade it as needed, without you having to worry about that. Certainly this is the beginning of a very long conversation. I really appreciate this very interactive session. Yeah, exactly. We'd love to talk more. And this is certainly meant to provoke some thought. We wanted to show you that the industry standard is this vCPE model of pushing everything to the cloud, and we wanted to be maybe a little provocative here and say, maybe that's not the only model, right? We have some ideas on how to optimize some of these things and use Linux for better deployments. Obviously containers are coming up, right? So there's higher-density vCPE that can be easily accommodated with this very simple model that we're showing here. All the comments and questions are really good and really valid. I think there are a lot of details of this implementation that we did not go over, in the interest of time, to get through this session. We would be happy to discuss it further. There may or may not be things that are more complex. There may or may not be things that are more or less secure. But we do think, if we go into the details of how this model is actually implemented, when Rafael talked about Ubuntu Core, for example, it's a very secure platform. And the way we do that, we'll talk about. But those details didn't make it in here. The business use cases are there if you're a telco operator. The business use case might not sound the best to you as an end user, especially an end user who may be used to managing your own device. But that paradigm might be shifting. You could still put your own management device inside your network and still talk to your outbound gateway. But you will still have some constraints or confines, depending on how the operators implement your connectivity to what you're traditionally connecting to on the internet. But they're great points and really good questions.
Please, can you just use the... So the customer-prem space, I mean, when we talk about customer-premises equipment, it's pretty price-sensitive. We're talking about putting a general-purpose platform in there, I see that. But, I mean, the optimizations for customer-prem equipment have happened over years, and the price points have... The price point of what we're talking about does not increase the price point of the customer-prem equipment. And if there is a 10% or 15% change in the model, the revenue-generating aspect of the model significantly makes up for it. The operational savings. Yes. This is a Raspberry Pi, so you can purchase this device for around $35. And certainly, if you're a telco provider, you can buy these things in bulk. So it actually is quite an inexpensive device that can have very sophisticated functions, because you have the value of the Linux platform underneath it. We only have one minute left. Go ahead. No, I'll take this offline. Okay. Come visit us at our booths. We're both in the marketplace. Yeah, we actually had one slide. That's our last slide. Come see us. We'll have the demo running there so you can check it out. Thank you very much, everyone. We really appreciate your time. Thank you.