I have a feeling that was a little loud. So welcome to the session on CORD. If you've been in this overpacked room for the last couple of hours, you've been hearing a lot, on both the vendor side and the carrier side, about what we're going to do about network function virtualization, SDN, the cloud, and how all these things come together. There are a lot of moving parts, and people are trying to figure out how the pieces fit. This work is in that space, but we've taken another tack: we're just plowing straight ahead.

We're looking at the central office, which is the very edge of the telco network. Just as a little background: this is not back in the data center, this is out at the very edge. There is a large number of these facilities in a typical North American carrier network. AT&T, for example, operates 4,700 central offices. These are the windowless buildings in your neighborhood, all throughout the country, and each one serves somewhere between 10,000 and 100,000 subscribers: a combination of enterprise, residential, and mobile.

Now, this facility has evolved over the last 40 or 50 years in an extremely ad hoc way, so there's very little architectural integrity. It's just buy equipment and stack it up. If you've been in a central office, it looks like a telco museum; there's hardware going back for years. This is a huge source of CAPEX and OPEX; it's just the sheer numbers. You might think you'd go into the backbone and that's where the expensive stuff is, but there are only so many backbone sites. It's the 4,700 sites at the edge that are the real expense. And because of the arcane hardware that has accumulated over decades, it's very hard to introduce new services. This is the problem the telcos are facing. Just to rub salt in the wounds, they look at the over-the-top providers like Google and Amazon, and they see how agile they are and how they take advantage of commodity.

That's the motivation. CORD is aimed at that problem on behalf of telcos. We want to give them the economies of the data center, that is, infrastructure built from commodity components, white-box switches, and open source software. What I'm going to describe is an extremely greenfield approach, so keep that in mind: it's greenfield, but it's one you can start small and then grow. And then, of course, we want to give them the agility of a cloud provider; agility is just as important as the economies.

Now, I mentioned AT&T as an example, and that's not just for illustrative purposes. This is something AT&T is committed to doing. We're on target; like I said, we're plowing ahead, and we're hoping to deliver this in early 2016. So what I'm going to do is describe the pieces that go into this, and it brings together a lot of parts. The way I think about it is that we're going from the legacy system to something built out of commodity, and we're going to do that by sprinkling the pixie dust of these acronyms on it. But really, what we're after is software as a service. That's how the telcos compete with the cloud providers: they get to the point where everything looks like software as a service. And we're going to start at the very, very beginning, which is fiber-to-the-home as a service. So we're going to tackle the problem at its very beginning. So back to this picture.
I'm going to get to what we're going to do in just a second, but let me talk for a moment about what the legacy looks like. There are a lot of three-letter acronyms here. Going into the home, we have the CPE, of course. We're doing this for GPON, gigabit passive optical networking, so we have an optical line termination (OLT) in the central office, probably some Ethernet aggregation switches, and then the BNG, which is the long pole in the tent, with a lot of functionality embedded in it. We're basically going to replace the red boxes with commodity hardware, and then run software to control it.

Back in June, we did a proof of concept, and this gives you a high-level view of the pieces; I'll go into detail on how we're going forward with it. In the home, we replace a perhaps complicated CPE with the simplest possible device we can. In our proof of concept, we used a very minimal NetGear box, with a laptop and a television set running AT&T U-verse connected to it. We go over the passive optical network into the central office, which now looks like a small data center built from commodity components. On top of that runs the piece I'm going to describe, a combination of OpenStack, ONOS, and XOS, with a set of services above it. What you'll see is the virtualized version of the OLT, the virtualized version of the CPE, and so on. I'll add the virtualized version of a CDN just as an example of another service you could run above and beyond the access.

So that was the proof of concept. By the way, I should have said: Open Networking Lab is a non-profit sister organization of the Open Networking Foundation. They do standards; we do open source software. What we're pulling together right now is a reference implementation of CORD. You can think of it as an open, virtualized service delivery platform that delivers on that promise, and it's to be deployed in some of AT&T's central offices as a market trial in early 2016. So now we're thinking in terms of a pod: the pod gets put into the central office, serves a few customers, and then you grow from there. What is the pod? It has a hardware blueprint, a bill of materials, some assembly instructions, and so on, and it has open source software. There are core components that I'll walk through, and then we populate it with a set of services. We're targeting GPON initially, but it generalizes to other access technologies as well.

So let's start with the hardware. Again, this is just an illustration, but basically it looks like a data center, full of commodity this and commodity that. What's interesting about it, first of all, is the leaf-spine fabric: we're optimizing for moving packets east-west, not north-south, which is typical data center design. And secondly, in addition to racks of 1U servers, we have racks of 1U I/O blades. If you think of the data center as a giant computer, these are the NICs of that computer. They're basically 1U blades with 48 GPON ports; today it's 1 gigabit, tomorrow it could be 10. It's just an interface that goes out to all the homes, stripped of all the control functionality: it's just the MAC chip, basically. But you can also think of it as a 48-port switch with one port going up to the top of rack, and it's OpenFlow controlled, as are all of the switches here.
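In fact, from the controller's point of view, this blade is just another OpenFlow device. Here's a minimal sketch of querying its ports through ONOS's REST API, assuming a stock ONOS instance on localhost with default credentials; the device ID is invented for illustration.

```python
# Toy sketch: the GPON I/O blade as seen by the controller, i.e. just
# another OpenFlow device. Assumes a stock ONOS on localhost (REST port
# 8181, default karaf/karaf credentials); the device ID is hypothetical.
import requests

ONOS = "http://localhost:8181/onos/v1"
AUTH = ("karaf", "karaf")           # ONOS default credentials
OLT_DEVICE = "of:0000000000000001"  # invented DPID of one I/O blade

# Enumerate the blade's ports: 48 GPON-facing ports plus one uplink.
resp = requests.get(f"{ONOS}/devices/{OLT_DEVICE}/ports", auth=AUTH)
resp.raise_for_status()
for port in resp.json()["ports"]:
    print(port["port"], port.get("isEnabled"))
```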
By the way, I won't say it again, but every time you see one of these switches, it's a white box. This is not an existing switch from a vendor; it's purely white box. So we get racks of these blades, racks of processors, and we connect it all together with a leaf-spine fabric.

Let me look quickly at what's in here. Again, we're provisioning a very specific system as a reference implementation. We're going to Accton for our white-box switches, and we've sized it for the trial. I won't go through all the details, but we're basically doing a lot of 40-gig ports, both for the leaf switches, which are top of rack, and for the spine switches that sit above them. And if you look on this hardware, what you'll find is basically the Open Compute Project software stack: it's a Broadcom chip, with a hardware abstraction layer on top of that, and then an OpenFlow agent. So everything here is open source, right down to the Broadcom silicon. That's the hardware: everything's commodity, everything's white box.

So here's the software architecture. The way I want you to think about it is that it's a pod, and you don't have to see inside it. It looks like a device; it just happens to be a device that's internally connected to the access, connected upstream, and offers you an interface. It's a RESTful API. What we give it is a high-level, declarative specification of the policy we want that box to implement for us. Right now, we're working in TOSCA. We're going to add YANG, because that's what the carriers want, but TOSCA is proving to be perfectly good. On top of that, and this is now outside our scope, sit all the OSS/BSS pieces. Every carrier is going to have something different there, and different vendors will have different solutions there. That's fine; just give us the policy that controls this pod.

So what I'm going to do now is look inside the pod just a little bit. When you look inside, you find a very recursive definition: it's filled with services. You could call them microservices, I guess that's the vernacular today, but I just call them services. There's a service that corresponds to the virtual OLT, a virtualized version of the OLT device; a service that corresponds to the virtualized CPE; and so on. And there's a service graph: this service depends on that service, which depends on the following service. It's not linear. Sometimes you ask the CDN for data, sometimes you go out to the internet; and when you ask the CDN, sometimes it gives you the data, and sometimes it has to go out to the internet on a cache miss. That's basically the main path in the reference implementation, and we can populate the graph with other services.

In fact, everything is a service in this model. OpenStack is a service: it's the service I go to to get VMs. ONOS, the Open Network Operating System, is a service: it's the service I go to to get networks. We have a monitoring service, and you could add a whole bunch of other building-block services; as a starting point, we definitely needed monitoring. It's a very recursive definition, and everything looks the same. So I'm going to drill down into each one of these objects and show you a little bit of what's inside. Each service offers a northbound interface.
I told you there's a RESTful API, but you can also talk to the pod by giving it TOSCA specs. Basically, you give it a spec that says: this is the service graph I want to populate my particular pod with, and these are the resource policies I want to impose on those services. You can put bounds on how much of this and that each service consumes. I can give it a scaling policy: I say to the service, scale up or scale down, but I don't tell it how to do that; that's up to the service. And I can impose customer policy on it: customer A wants this particular set of features, customer B wants that set. So from an OSS/BSS point of view, whenever a user logs into the subscriber portal, it comes down and makes the appropriate changes to that particular subscriber's account.

The other way to think about this is that those are all very generic, high-level specifications, but we also expose the interface of each of these services northbound. I think of that northbound interface as the union of all the service-specific interfaces. So if you wanted to change the features of a particular subscriber and turn parental control on, for example, that call would effectively get passed down until one of those services knew how to resolve it. In our particular case, it's the virtual CPE.

So I'm showing you a picture of what's inside the CORD reference implementation, but what I want to impress upon you is that it's extensible by design. This is where we started; this was, in fact, the service graph for the proof of concept. But as we've gotten a little bit smarter, we wanted to change a couple of things. So watch closely: the virtual CPE has been replaced by a virtual subscriber gateway, and the virtual BNG has been replaced by a virtual router. It's a little text change here, but it's really just basic modularity. We substituted one implementation of a service for another because we wanted to make some changes; the service graph looks exactly the same, and the rest of the framework works exactly the same.

Well, why do we divide it into these services? Why is the OLT carved out from the virtual subscriber gateway? Because I might have multiple access technologies: G.fast is another technology we're prototyping, and that list could grow. And this graph is just focused on residential subscribers; what about enterprise subscribers? They just tack on with a different service graph. Maybe it's a different access technology at this end, Metro Ethernet, and maybe the CPE is different because you give enterprises different bundles of functionality. But maybe it shares the CDN, because I'm caching content on behalf of everybody. The important point to take away, and I'm just growing this with examples, is that I can add additional cloud services. You get Internet of Things, it's another service. You have other things you want to offer mobile customers, those are services that fit in here. And then you add access technologies as they become available, maybe on a business-unit-by-business-unit basis. It all fits in the same platform. So the thing to take away is that CORD is a platform, not a point solution; what varies is which services I populate it with, and I define that with the TOSCA file I give it at the top, something like the sketch that follows.
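Just to make that concrete, here's roughly what handing the pod a service graph might look like. The endpoint path, credentials, and TOSCA node and relationship types are all invented for illustration; this is a sketch of the idea, not the actual XOS schema.

```python
# A rough sketch of the pod's northbound interface: hand it a TOSCA
# description of the service graph you want. The endpoint, credentials,
# and node/requirement names are hypothetical, not the real XOS schema.
import requests

SPEC = """
tosca_definitions_version: tosca_simple_yaml_1_0
topology_template:
  node_templates:
    vrouter:
      type: tosca.nodes.Service          # invented node type
    vsg:
      type: tosca.nodes.Service
      requirements:
        - provider_service: {node: vrouter}   # vSG is a tenant of vRouter
    volt:
      type: tosca.nodes.Service
      requirements:
        - provider_service: {node: vsg}       # vOLT is a tenant of vSG
"""

resp = requests.post(
    "http://pod.example.com/api/tosca/",      # hypothetical endpoint
    auth=("admin", "letmein"),
    data=SPEC,
    headers={"Content-Type": "text/plain"},
)
resp.raise_for_status()
```

All right, I'm going to drill down a little bit more.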
I've got these boxes, and I want to talk about what they are. Well, first of all, services are multi-tenant. The way we're thinking about this is: what would Google do here? This is not what a telco would normally do; it's what Google or Amazon would do. S3 is a multi-tenant service: you don't stand up a different storage service for every subscriber that comes along. Each one of these is a multi-tenant service in the same way. So when you turn your router on at home, it powers up, and you become a tenant of the virtual OLT service, which in turn becomes a tenant of the virtual CPE service, which becomes a tenant of the virtual BNG service.

And each of those services offers a service abstraction. If you go to S3, you get back, what do they call it, a bucket. If you go to vOLT, you get back a subscriber VLAN. When vOLT becomes a tenant of the virtual CPE, it acquires a subscriber bundle. And when the virtual CPE becomes a tenant of the virtual BNG, it acquires a routable subnet. The virtual CDN does exactly the same thing: it is a tenant of the virtual BNG, and it gets back a routable subnet as well. But it turns out the virtual CDN does not have subscribers as its tenants; it has content providers as its tenants. There are really different kinds of tenants in a central office. The CDN's tenant abstraction is a region of the URL space: if you're a content provider, you're CNN, you're Apple, you're Netflix, you acquire a region of the URL space, and the carrier's CDN caches that region for you. So: services are multi-tenant.

If I look inside one of these boxes, and this is in some sense the real crux of the story, we're simply modeling VNFs as services. What does that mean? It's a matter of separating the control plane and the data plane, borrowing directly from SDN. The controller for a service is like the controller for a network: it's the control plane. It doesn't get in the way of the data plane, the movement of data, whether that's HTTP GETs or packet sends; all of that goes to the instances. The other thing is that the controller's interface is service-wide, while each individual instance is part of the implementation. The implementation is the code, certainly, but it's also how I achieve scalability and how I achieve high availability. Those are implementation details; they're not up to the operator, they're up to the service developer. So there's a very crisp line drawn between the provider of a service and the user of a service. The box also implies a virtual network that the tenants of that service use to send messages requesting operations from it, and those virtual networks can be configured and customized in different ways, depending on how the service wants to be reached, for example whether it wants load balancing or not.

So when you start down this path, you quickly realize that there's an important distinction between tenants and instances, two things that are often conflated. In data-model terms, it comes out roughly like the sketch below.
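Since XOS's data model is implemented in Django, here is a toy, Django-flavored rendering of that distinction. XOS's real data model is much richer; every class and field name here is invented for illustration.

```python
# A toy, Django-flavored rendering of tenants versus instances. All
# class and field names are invented; the real XOS model is far richer.
from django.db import models

class Service(models.Model):
    # e.g. "vOLT", "vSG", "vRouter", "vCDN"
    name = models.CharField(max_length=64)

class Tenant(models.Model):
    # Who consumes the service: either another service in the graph...
    provider_service = models.ForeignKey(
        Service, on_delete=models.CASCADE, related_name="tenants")
    subscriber_service = models.ForeignKey(
        Service, null=True, on_delete=models.CASCADE,
        related_name="tenancies")
    # ...or an outside principal (a subscriber, a content provider).
    principal = models.CharField(max_length=64, blank=True)

class Instance(models.Model):
    # Where the service's functionality actually runs.
    service = models.ForeignKey(Service, on_delete=models.CASCADE)
    kind = models.CharField(max_length=16)   # "container" or "vm"
    # vSG: one instance per tenant. CDN: tenants are spread across a
    # shared pool of instances, so the per-tenant link stays empty.
    tenant = models.ForeignKey(
        Tenant, null=True, on_delete=models.SET_NULL)
```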
Take the virtual CPE. As a side note, the instances here are containers, not VMs, and it's actually a one-to-one situation: every time a new subscriber comes online, they get their tenant abstraction, a subscriber bundle, and that subscriber bundle runs in its own instance, its own container. So in the implementation, we end up with a container per subscriber. But it's not always the case that you get an instance per tenant. A lot of scalable web services scale the tenant abstraction, whether it's a key-value store, a CDN, or a scalable storage service, across a set of instances, maybe through a DHT or something like it. And in the CDN case, the instance is actually a VM. So we're mixing and matching VMs and containers, and we have different models; again, how that happens is a service-dependent question.

Now, you've noticed that I've been using a couple of different colors for my services, and there's a reason for that. For the red ones, the functionality, the behavior of the service, is implemented in a VM or a container, and it's on the data plane: packets actually pass into and out of the CDN, and into and out of the containers that represent a subscriber bundle. By the way, if I were to drill down into the subscriber bundle, I would find a Linux implementation where I turn different features on just by twiddling different aspects of Linux. If you wanted to put some modular system in there and turn features on by plugging and unplugging modules in the kernel and user space, that would be a perfectly good implementation; it still fits this model. So you can think of the red services as VMs and containers on the data plane. The blue ones are services that run in the control plane, controlling the switching hardware.

So now the picture gets a little bit more interesting, because we've got multiple services working together to get the job done. ONOS is ON.Lab's Open Network Operating System. I'll show you another slide on it in a second, but it scales just like any other web or cloud service might scale, it uses OpenFlow to control the switches that sit below it, and it hosts multiple tenant applications: virtual OLT and virtual BNG. But those aren't the only applications it hosts. It also hosts a Neutron plugin, so in the OpenStack context, if you go and ask for a virtual network, it's actually being provided by ONOS through that plugin. There's another application running that does segment routing; I'm not going to go into the details, but it's basically how we set up flows from the leaf switches, up through the spine, and down to the leaf switches on the other side. So those are four apps all running on top of ONOS.

The way to think of ONOS is that it is an operating system, or a platform as a service: it's a platform that hosts applications, so we model it as a platform as a service. But again, everything's a service, including platforms as a service. Its tenant abstraction is a control app: you go to it for a tenancy and say, I'd like to run the following control app, and that happens multiple times. If I drill down into ONOS for just a second, I find a fairly typical scalable service, only running in the control plane instead of the data plane. It scales across a set of instances, and internally it has a shared data store implementing a network graph; that's its abstraction.
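To make the control-plane side concrete: an app like virtual OLT ultimately comes down to installing flow rules on those white-box switches. Here's a toy sketch against ONOS's REST API; the device ID, port numbers, and VLAN are invented, and the real vOLT app does considerably more than this.

```python
# Toy sketch of what a control app like vOLT boils down to: flow rules
# pushed to a white-box switch through ONOS's REST API. Device ID,
# ports, and VLAN are invented; the real vOLT app does much more.
import requests

ONOS = "http://localhost:8181/onos/v1"
AUTH = ("karaf", "karaf")            # stock ONOS credentials
DEVICE = "of:0000000000000001"       # hypothetical OLT I/O blade

rule = {
    "priority": 1000,
    "isPermanent": True,
    "deviceId": DEVICE,
    # Match traffic arriving on one subscriber-facing GPON port...
    "selector": {"criteria": [{"type": "IN_PORT", "port": 3}]},
    # ...tag it with the subscriber's VLAN and send it up the fabric.
    "treatment": {"instructions": [
        {"type": "L2MODIFICATION", "subtype": "VLAN_PUSH"},
        {"type": "L2MODIFICATION", "subtype": "VLAN_ID", "vlanId": 101},
        {"type": "OUTPUT", "port": 49},
    ]},
}

resp = requests.post(f"{ONOS}/flows/{DEVICE}", json=rule, auth=AUTH)
resp.raise_for_status()
```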
The apps themselves, those control programs, also get spread across the ONOS instances. So ONOS scales as the network of white-box switches grows, and it gives you high availability: if one of these instances dies, you still have the others running.

OK, so those are the pieces. I'm done with my drilling down, so let me stand back at a high level for just a minute. I'm slightly abstracting here to keep the picture simple, but the purple box is the CORD pod. It encapsulates a set of services that run inside it according to some service graph. I think of that graph as the executable representation of the policy I was given by the operator: from above, I'm handed a specification of a policy to implement, I turn it into my internal representation, I turn the crank, and out comes a deployment of that particular policy on the underlying hardware. I'm showing in this particular case that the output of the box is three virtual networks. One, which we call the virtual CPE LAN, is controlled by a control application; I'm not showing ONOS in the loop, but ONOS sits in between, so this control program controls this virtual network. The virtual BNG controls the WAN-side virtual network. And the third is just a vanilla virtual network that I get from Neutron, interconnecting the virtual CPE and the CDN. The CDN is also connected to the WAN side, because it has to go out to the wide area network on a cache miss. Each of these services is then potentially scaled across a number of instances, containers in one case, virtual machines in another. So that's the output of the system.

So what was the magic in here? Let's go back to that high-level picture. There's a controller, and the controller for the CORD pod is where XOS fits in. The pod is a service delivery platform, so XOS is the controller for a service delivery platform. In a little more detail, I think of it as a framework for assembling and composing services. Assembling in the sense that if you gave me a bare VM or container image, I would have to do some work to turn it into a scalable service; XOS gives you a set of mechanisms to elevate it to that point. If what you gave me was a virtual CDN that was already a scalable service, there's less work to do. And then there's the composition of those services.

Just a little bit more detail: internally, the implementation takes advantage of two pieces. The first is a data model; we think of that data model as representing the authoritative state of the system, what it ought to be. The second is a back-end system we call the synchronizer, which keeps the operational system in sync with that authoritative state. I've already told you about the open source projects we're building on: the data model is implemented in Django, and the synchronizer has a lot of pieces we built ourselves, but it also leverages Ansible as its back-end mechanism to talk to all the other components and tell them what to do. So basically, you can think of the synchronizer as generating the appropriate Ansible playbook to do the right thing; schematically, it's a loop like the sketch below.
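Here's a toy rendering of that pattern: diff the authoritative state against the world, and converge with idempotent plays. The model query, playbook contents, and image name are invented; the real XOS synchronizer framework is far more involved than this.

```python
# Toy sketch of the synchronizer pattern: read the authoritative state
# (in XOS, the Django data model), render an idempotent Ansible play,
# and run it; rerunning simply converges. Query, playbook, and image
# name are invented; the real synchronizer framework does much more.
import subprocess
import tempfile
import time

import yaml

def desired_vsg_containers():
    # Stand-in for a Django ORM query against the authoritative state.
    return [{"subscriber": "sub-42", "vlan": 101}]

def render_playbook(spec) -> str:
    # One idempotent play per subscriber container.
    return yaml.dump([{
        "hosts": "compute-nodes",
        "tasks": [{
            "name": f"ensure vSG container for {spec['subscriber']}",
            "docker_container": {
                "name": f"vsg-{spec['subscriber']}",
                "image": "vsg:latest",   # hypothetical image
                "state": "started",
                "env": {"VLAN": str(spec["vlan"])},
            },
        }],
    }])

while True:
    for spec in desired_vsg_containers():
        with tempfile.NamedTemporaryFile("w", suffix=".yml") as f:
            f.write(render_playbook(spec))
            f.flush()
            # Partial failure is tolerated; the next pass converges again.
            subprocess.run(["ansible-playbook", f.name], check=False)
    time.sleep(30)
```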
It pays a lot of attention to partial failures. There was another session earlier that I couldn't get to, on fsck for the cloud. Things just go wrong, and you're constantly working to drive the operational state toward the goal of the authoritative state. That's a continuous synchronization problem; it's not like I issue a command and it just happens. Think about fsck for a second: what could be more deterministic than your disk? When you tell it to write a block, it really ought to write the block. But even disks go bad, and there's a whole lot more complexity between you and what happens in the back end in a cloud setting. So the synchronizer works hard to keep those in sync.

OK, that was a fairly abstract description. If I show you the software stack instead of the architecture, it looks like this: XOS in some sense sits on top of ONOS and OpenStack. But don't forget, OpenStack also uses ONOS as its mechanism for delivering virtual networks. There's a somewhat subtle but very important point here: we have virtual networks that are implemented by services, and virtual networks that are used by services, and we support both cases. You can just use a virtual network, through the Neutron interface. But you might also implement a virtual network, as a set of flow rules that gives you some behavior; for example, the behavior of virtual OLT, as an app running directly on top of ONOS. XOS basically unifies all of that. I don't care whether your virtual network came from a bunch of instances in the data plane or from a control app controlling the switches; it still looks like a service. You sometimes have to do more or less work to assemble it, but at the top you now have a set of services, and you can compose those services.

This is my last slide, and I think there's plenty of time for questions. So again, remember there's a hardware recipe, and the software distribution consists of these components. These three, XOS, ONOS, and OpenStack, are basically the control plane; OCP is what runs on the actual switches. Again, this is an open source reference implementation, and there's a set of services we're going to populate it with, which will hopefully grow over time. These are basically the access services; there will be other services as well. Monitoring is one we're already working on pretty hard. Content delivery is another; that one doesn't happen to be open source, and we assume that many of the services you populate your CORD pod with will in fact be proprietary, but it's one we're using for demonstration purposes. For more information: those URLs are really, really long, but if you go to xosproject.org, it's not a very long web page, and you can find the various papers and presentations there. So it looks like we've got a little bit of time for questions, and I'm happy to take them. Oh, I'm sorry, you've got to use the mic, I'm told.

Just a quick question on the previous slide: the vCPE and the vBNG, are those from a vendor? Are they open source?

No vendor; this is open source. And it's sufficient for the market trial with AT&T: it's a sufficient manipulation of Linux to give you a primitive set of features.
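To give a feel for what "a sufficient manipulation of Linux" means: features in the subscriber bundle are ordinary Linux mechanisms toggled per subscriber. A toy sketch follows, with an invented feature table; the real bundle is configured through the vCPE's control plane, not a script like this.

```python
# Toy sketch of "a sufficient manipulation of Linux": subscriber-bundle
# features as ordinary Linux mechanisms toggled per subscriber. The
# feature table is invented for illustration.
import subprocess

FEATURES = {
    # Parental control as a crude firewall rule against a blocklist
    # (198.51.100.0/24 is a documentation range standing in for one).
    "parental_control": [
        ["iptables", "-A", "FORWARD", "-d", "198.51.100.0/24", "-j", "DROP"],
    ],
    # Upstream rate limiting with a token-bucket qdisc.
    "rate_limit": [
        ["tc", "qdisc", "add", "dev", "eth0", "root", "tbf",
         "rate", "10mbit", "burst", "32kbit", "latency", "400ms"],
    ],
}

def enable(feature: str) -> None:
    for cmd in FEATURES[feature]:
        subprocess.run(cmd, check=True)

enable("parental_control")
enable("rate_limit")
```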
And you're happy with the performance and scalability?

Yeah. AT&T in particular is extremely adamant that we run these in containers natively on Linux; we can also run containers nested inside KVM. In terms of the flow: when you turn on your router at home, you're assigned a VLAN and you get a container on the other side. That's your container, which you can then configure.

Another, higher-level question: is there any concern from your customers about privacy? Because now you've moved CPE functionality out of the home, so some home-internal traffic can go into your network, to the virtual CPE.

So you're asking about the privacy of the traffic going in. That issue has not come up. A related issue has, which is that it turns out you will want to run a little bit of functionality on the home CPE so that your home network doesn't die when your internet connection goes down; you're probably still running DHCP there. But the privacy of the user's data being in the container has not been something anyone has raised. That may come out of the market trial. Good point.

My question is: do we need ONOS specifically, or just some combination of the application and a controller? Because it's a matter of implementing that application on top of ODL or ONOS.

Well, from CORD's point of view, the controller doesn't matter. Our implementation is based on ONOS; the only requirement is that the controller is scalable and highly available in support of those apps. And we're in a greenfield setting here.

So whichever controller it may be, what matters is the control application the customer is accessing?

Sure. From an XOS point of view, ODL would just be another service.

Then what's the benefit of ONOS for this kind of application?

Oh, you want me to do an ONOS versus ODL discussion. I think I'll defer. Well, OK, I will. I'm not deeply embedded in the ONOS project per se; I'm a user of ONOS. From my point of view, controlling switches with software-defined networking is a combination of configuration management and control. Legacy devices make very heavy use of the configuration side, with very little need on the control side. Greenfield white-box switches are 90% control; there's not much configuration to do. And if you're heavy on the control side, just because of the time scales, you have a very rigorous scaling and availability problem, and you need to design for that. That's what ONOS was designed for; I think ODL was designed more for the configuration side.

OK, I see. And the last question is about the tenant in the CDN case: is that tenant just like a client of the CDN?

You could call it a client, yeah. You have to distinguish between the principal that is the tenant, a subscriber or a content provider, and the abstraction that the service gives that tenant: a subscriber bundle, a bucket, a region of the URL space. The abstraction is what I was focused on.

So for a subscriber, it can be a virtualized version of the control plane of their physical CPE?

Yeah, that is, in fact, what you get: a virtualized control plane of the physical CPE.
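Concretely, on the data-plane side, that per-subscriber piece is a container attached to the subscriber's VLAN. Here's a toy provisioning sketch, assuming Docker and Linux VLAN sub-interfaces; the names, VLAN, and image are invented, and in CORD this is driven by the XOS synchronizer rather than a script.

```python
# Toy sketch of provisioning the data-plane side of a vSG: one container
# per subscriber, attached to that subscriber's VLAN. Interface names,
# VLAN, and image are invented; in CORD the XOS synchronizer does this.
import subprocess

def provision_vsg(subscriber_id: str, vlan: int,
                  parent_if: str = "eth0") -> None:
    vif = f"{parent_if}.{vlan}"
    name = f"vsg-{subscriber_id}"

    # VLAN sub-interface carrying this subscriber's traffic.
    subprocess.run(["ip", "link", "add", "link", parent_if, "name", vif,
                    "type", "vlan", "id", str(vlan)], check=True)

    # The subscriber bundle runs in its own container; start it with no
    # network, then hand it the VLAN interface.
    subprocess.run(["docker", "run", "-d", "--name", name,
                    "--network", "none", "vsg:latest"], check=True)
    pid = subprocess.check_output(
        ["docker", "inspect", "-f", "{{.State.Pid}}", name]
    ).decode().strip()
    subprocess.run(["ip", "link", "set", vif, "netns", pid], check=True)

provision_vsg("sub-42", vlan=101)
```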
And on the control-plane side, you get a portal that you, as a home subscriber, log into and say: I want to turn parental control on for this iPad. Now, what's interesting is that you as a subscriber don't know, and the architecture doesn't dictate, which of these services is going to satisfy that request. The request enters from the home and gets passed down the service graph until somebody can resolve it. So whether the virtual OLT implements parental control or the virtual CPE implements it, I don't care.

So the subscriber's default gateway basically points up into their tenant's virtualized instance somewhere in the pod?

Right. Thank you.

How do you decide when you're going to use TOSCA and when you're going to use a YANG model?

That's an operator's call. We have found TOSCA to be a really good, direct representation of what these services do. YANG tends to be the preferred language of AT&T and certainly other operators, although I've heard talk that it could be either one.

You mean they can pick only one?

No, no. We've just implemented TOSCA so far; YANG is to come. You could have multiple languages; they're just different representations of the API. Internally it's all model-based, but the internal model is neither YANG nor TOSCA; internally, it's implemented in Django.

Django, but how can you mix them? Because YANG is more network-oriented and TOSCA is more resource-oriented.

It certainly could be possible to represent different things in different languages. We haven't tried that, but it ought to be possible, because they both map into one internal representation.

Coming from the YANG side: in the last six months, we've changed pretty much all the models in the IETF to address what OpenConfig wanted, and AT&T is part of OpenConfig. So we include state and telemetry in every model; whatever you do, there's a shadow tree that represents state. Where is the state in this case? How do you correlate the config-to-be versus the config-as-is, and the state that results from the change?

Well, it depends on your point of view. From the CORD pod's point of view, the authoritative state is the representation in Django, and there's a persistent database underneath it. I write to that persistent database, and that's my view of the authoritative state. Now, if you're an operator and you think you have the authoritative state sitting somewhere in your OSS/BSS system, and you hand it to us through the modeling interface, then arguably you own the authoritative state.

You get back operational state; you need to compare it to your...

Oh, where do we get the operational state? Sorry, I misunderstood. A couple of things. Part of it comes from the monitoring service, so there will be a lot of operational state there. But it's also the case that, as we use Ansible, we're doing it in such a way that it's attempting to drive the state: they're idempotent operations. We don't know for sure what the state was; we're always driving it towards what we think it ought to be.

I'm sorry, I'm taking so much time; I'm just trying to understand. How do you correlate operational state to your declarative state, through Ansible and some kind of black magic?

Yeah, it's magic.
Seriously, it's a fair question for operations. From a human point of view, or from the point of view of an analytics engine sitting up here, that's going to come through something like Ceilometer; that will be the operational state. In terms of correlating: everything that's recorded there, the events, is tagged so that it's traceable back to the state in the database. There are tons of details, but I know the subscriber, I know the VLAN, I know the container ID; those are all correlatable in the database and marked appropriately in the events that get pushed into Ceilometer.

Do you feel going towards YANG is the way you're going to go, or is it just one of the ways you're considering? What's the end goal? Is something going to happen with ONOS, or is it just something you're considering?

I'm not quite sure I understood the question.

In terms of switching to YANG completely.

Oh, YANG completely. It's still undetermined exactly how deeply embedded YANG is going to be. We're having conversations with people who know YANG much, much better than we do, YangForge, for example. We can see a roadmap whereby we would use YangForge instead of Django, and we could get there. But it doesn't have persistent state, and I want persistent state in here.

Thank you.

OK, looks like we're done and there are no more questions. So thank you.