So thank you for joining this session. My name is Mark Baker, and I'm joined by John Casey, who is the founder and CTO of CPlane Networks. We're going to spend the next half hour or so talking about a particular use case we had with a carrier out in Asia, where we combined our technologies to deliver a very compelling solution to the customer. John is actually going to be doing most of the hard work and talking, as he has great familiarity with the case. Before we do that, though, I'm going to give you a couple of intro slides to set the scene. For reference, I work on the OpenStack team at Canonical, so I work on Ubuntu OpenStack. We have seen increasingly, with the telcos, carriers, and enterprises we've been engaged with over the last year, that one of the drivers of adoption, and one of the drivers of a successful OpenStack deployment, is very much how you are able to operate the cloud. Many people know Ubuntu as a free technology you can just go and download. Who here uses Ubuntu? Good, good, so a lot of people use Ubuntu. You probably just went to Ubuntu.com, downloaded it, and used it. Free software is great, isn't it? But as software has increased in complexity, the cost of operating that free software has increased as well. To address that problem, and Ubuntu is well known for usability, we develop sets of tools, methodologies, and processes to ensure that you're able to operate complex software at scale in a cost-efficient way. And that's really the success we're going to be talking about today. In the telco world, this means you have to address some very complex concepts. If you're in the telco world, you've probably heard of ETSI. This is a standards body that defines how tools and technologies can address the requirements of a telco stack in a standardized way, so that vendors like ourselves and others in the ecosystem can address pieces of that. So when we look at how we automate this, how we drive the cost of operations down, we map our product sets against the requirements and definitions that exist in this ETSI model. That ensures that if you're a carrier like PCCW, or like Deutsche Telekom, or like some of the others we're engaged with in this space, you can use our tools and technologies knowing that if you ever wish to, for whatever reason, there are alternatives out there you could choose, and you're not going to have to completely re-architect your entire environment. Our tools and technologies map against this very closely. The other piece is that operations is about, yes, maintaining the cloud, keeping it up, keeping it stable, understanding audit and compliance and those things, but it's also about upgrading. And this slide is really to show you that even though PCCW are deployed on Mitaka, they know they will need to upgrade to newer versions fairly soon to take advantage of the new features, because which is the best version of OpenStack, right? The best version of OpenStack is the one that delivers the features your business needs, and that's generally going to get better and better with each release. So being able to manage upgrades without downtime, without even API downtime, which is what we're able to do now, is very important for these operators.
And so if you have any questions about any of the pieces on here — I'm not going to drill into these features and how they relate to OpenStack in this talk, but I'm more than happy to on the side if you wish. If you have any questions, please come and grab me afterwards or come and talk to us at the booth. But now I'd like to hand over to John who, if we can flip the machines, is really going to drill down into PCCW's use case, how we tackled it and how we addressed it.

Thank you, Mark, and thank you all for joining the session. As Mark said, I'm John Casey, CTO and co-founder of CPlane Networks. If you don't know who PCCW Global is, they're a worldwide service provider, one of the largest service providers, and part of an $8 billion business globally. They are in 135 countries and over 3,000 cities worldwide. They own their own fiber and are part of a fiber consortium. They provide connectivity services and MPLS services worldwide — strong in the Middle East and Asia, but also in the US and in Europe. So, building the cloud for them: they started about 18 to 24 months ago. Why would a worldwide telco do this? Well, telcos are going through a transformation these days, right? They're trying to reduce costs, provide new services, and virtualize. And PCCW went about this in a very smart way. They looked out at the market and said, we're going to start with commercial off-the-shelf products and focus on doing what's easy first, kind of cutting their teeth on the telco services — this is a journey for them. So 24 months into it, they're fully virtualized, and they picked a handful of vendors; we're one of them, and there are some others in the room. This is in contrast to — and I think what they did was pretty smart — what, say, AT&T did: they hired thousands of developers, forked a lot of code, and put five million lines of code into open source. There are not too many telcos that can swallow that type of thing, so I think this approach is certainly more palatable to most telcos. So their key goals: they want to build a global, broad-use cloud that they can leverage for short-term and long-term services; a globally connected cloud to reduce the cost of their operation, both CAPEX and OPEX; and then to provide virtualized services both for their internal use and for their customers. In terms of their use cases: provide a generalized cloud connecting to the common AWS or Azure model; start going down the network function virtualization path, extending their MPLS services with SD-WAN; provide virtual CPE to their customers, extending both to their enterprise businesses, the retail businesses, and also to new markets like smart cities; and then the fourth use case is quite interesting, because once you have a globalized cloud, you can start thinking about moving functions that you would normally do in a central data center out to the edge.
Edge analytics is a really interesting use case, because now they can collect data at the edge, process data at the edge, create the knowledge at the edge and tune their network at the edge, and then bring just the distilled knowledge back to the central location for later analysis, as opposed to the huge backhauls of data they would normally bring back. And they also want to enable new services like IoT or gaming for their customers. So we were challenged about 24 months ago. They said, okay, we want to base our technology on OpenStack and containers, but we want to build not just one cloud in a data center — we want from tens to potentially thousands of clouds, worldwide. Anywhere we have a point of presence, we potentially want a cloud. And we want to establish distributed tenancy: our customers, or the applications we're deploying, should be tenant-aware across the entire globe. So if a customer has services in 30 locations, they want that distributed tenancy to extend to all of them. And because they're worldwide, they have a latency problem. The average latency from somewhere in Hong Kong to somewhere in the US is around 260 milliseconds. So think about the problem of one compute node in Hong Kong and one in, let's say, Chicago — that's a long, long latency. How do you deal with that? And then integrating with their existing BSS and OSS systems represented another challenge. Functionally, they wanted a unified view: a global inventory of the services, the tenancy, of their customer applications. They want to understand where all these applications are for a given customer, to be able to move them, migrate them, build them. And then they want location-based placement of functions. So in the edge analytics case: I know I have a PoP at some GPS coordinate, and I want to find the best location within a few milliseconds of that PoP to do my edge analytics. I want to be able to decide, based on GPS coordinates, where to place these functions. They also want to support different types of applications — NFV, customer premises virtualization, IoT services, CORD, that sort of stuff — on one infrastructure that supports all of these application types. And then metadata is very important to just about everything in this world: basically everything they do, they want to exhaust metadata from it, and they want metadata repositories where they can analyze everything they're doing on the network. So how do we solve this problem? Well, first of all, keep it simple. Deploy individual OpenStack instances in a sharded, shared-nothing model. Everything is individual. That solves a major problem of areas of responsibility and areas of maintenance — they had different organizations around the world managing their OpenStack services, so they could fit them very easily into that operational model. It also allows for specialization of hardware. So in some cases they can deploy one version of OpenStack with DPDK, for example, for NFV, and then to play around with acceleration for machine learning they could deploy another OpenStack, either in that region or in another region.
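To make the geolocation-based placement idea a bit more concrete, here is a minimal Python sketch of that kind of logic. This is not CPlane's multi-site manager code; the site list, coordinates, and fibre-speed rule of thumb are all assumptions, and a real system would use measured inter-site latencies rather than a distance estimate.

```python
import math

# Hypothetical site catalogue: each OpenStack instance is tagged with a geolocation.
SITES = {
    "hongkong": (22.3193, 114.1694),
    "chicago":  (41.8781, -87.6298),
    "london":   (51.5074, -0.1278),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def estimated_rtt_ms(a, b):
    """Crude RTT estimate: light in fibre covers roughly 200 km per millisecond one way."""
    return 2 * haversine_km(a, b) / 200.0

def candidate_sites(pop_coord, budget_ms):
    """Return sites whose estimated RTT from the PoP is within the latency budget."""
    return sorted(
        (name for name, coord in SITES.items()
         if estimated_rtt_ms(pop_coord, coord) <= budget_ms),
        key=lambda name: estimated_rtt_ms(pop_coord, SITES[name]),
    )

# "I have a PoP at some GPS coordinate; find the best site within a few milliseconds."
print(candidate_sites((22.3964, 114.1095), budget_ms=5.0))
```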
And then for distributed tenancy across the globe, they used a routed layer-3 model: VXLAN terminating into MPLS. That utilizes their existing MPLS network — it's very easy to integrate VXLAN into that — and it's compatible with their current network model. It's a well-known, well-understood technology that doesn't have any complex edge cases. Then, in order to provide the metadata and provision all of these services, you basically orchestrate OpenStack from the top. Metadata describes each OpenStack instance: what technology it has, what hardware it has. The customer relationship complexity is abstracted above OpenStack, and the knowledge of where a customer is is abstracted above OpenStack, so we can have this distributed tenancy across sites. The APIs to provision OpenStack are then very similar to the OpenStack APIs, but they're purpose-built for a customer in that distributed tenancy model, and that gives essentially unlimited scale. So what does this look like from a topology perspective? Here's an example of two OpenStack environments, with two tenants connected through an NFV solution. You see the OGR, which basically extends the customer VXLAN inside OpenStack into an MPLS VRF. This is how we get edge-to-edge tenancy across the globe. We also have the ability to push that tenancy and that VRF into an AWS context, or even to a vCPE context up at the top. So we have the knowledge of each OpenStack environment, how that relates to the MPLS world, and then into AWS and the virtual CPE. They use MAAS and Juju for day-zero deployments. This reduced their OpenStack deployment time from days to about two hours, really. Our products are fully integrated, OpenStack is fully integrated, with MAAS and Juju. Additional applications and services like Ceph and other key components are integrated into MAAS and Juju as well. And it really did reduce the mass of people needed to deploy OpenStack down to a few people across the globe, so that was a big win for them. It also allows for rolling upgrades of OpenStack. As for the key SDN features we provided within OpenStack, I'll just highlight a couple here. First of all, being able to traffic-shape floating IPs — they can rate-limit a floating IP down to 100 meg or 5 meg for a given tenant or a given VM. The ability to have a no-NAT floating IP: for things like SIP gateways you need the same IP address both internally and externally, and we can provide that within a VXLAN context. L2 and L3 forwarding, of course; service function chaining, of course, for VNFs. The OGR capability is our VNF that extends the VTEP through BGP into a VRF context. And then the ability to integrate with physical switches — the virtualization is great, but there's still a physical world out there, and the ability to extend a VTEP from a compute node into a physical switch is really important, particularly on border gateways. Let me jump in here and talk about the deployment of our controller model. We have an L2/L3 plugin that sits on the Neutron node and sends topology events into our controller. Our controller essentially creates a tenant context, a tenant topology of where that tenant has been deployed on compute nodes, and isolates the understanding of that tenant so it can push flows down to the compute nodes. We have an agent that sits on the compute nodes that then pushes the flows into OVS and handles things like DHCP services.
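As a point of comparison for the floating-IP traffic shaping John describes, upstream Neutron's QoS extension can express a similar bandwidth limit. The sketch below uses the openstacksdk and is only illustrative — it is stock OpenStack QoS applied to a VM's port, not CPlane's SDN implementation, and the cloud name, port name, and rates are made up.

```python
import openstack

# Connect to one OpenStack region/site (cloud name from clouds.yaml is assumed).
conn = openstack.connect(cloud="hongkong-site")

# Create a QoS policy with a 100 Mbit/s egress bandwidth limit.
policy = conn.network.create_qos_policy(name="tenant-fip-100m")
conn.network.create_qos_bandwidth_limit_rule(
    policy,
    max_kbps=100_000,        # 100 Mbit/s
    max_burst_kbps=100_000,
    direction="egress",
)

# Attach the policy to the Neutron port backing the VM (newer releases can also
# attach a QoS policy directly to the floating IP itself).
port = conn.network.find_port("web-vm-port")
conn.network.update_port(port, qos_policy_id=policy.id)
```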
So we have localized DHCP services. It's basically a complete sharding model: we don't try to distribute data across all the compute nodes, which allows it to scale. And then, as part of the tenant context, we launch this VNF service as part of the initialization. This allows us to extend the VXLAN into a VRF on a per-tenant basis, and to provision VRFs on a per-tenant basis to extend those VXLANs across the WAN. We also push NetConf and CLI into the top-of-rack routers. Here's an example of what the physical topology looks like. We take floating IP and SNAT into the compute nodes. We have an OGR node that goes to the MPLS network. We've got standard Nova services, Ceph services, and our multi-site manager capability, which then provisions over the top of OpenStack. So the multi-site manager, again, is the technology that provisions: it's an orchestration technology that sits above OpenStack and basically shards the data for tenants across all of the OpenStack instances. It's a knowledge base for tenant context across OpenStack instances. It allows us to deploy applications or workloads across these tenant contexts, and it does all the provisioning. When you attach a customer to a site, it does all the provisioning for that customer — Keystone, the project, the VRFs, the VNFs to extend the VXLAN, all the networks, the floating IPs — and pushes all of that into OpenStack. So there's zero touch needed to provision a customer into multiple sites in OpenStack. It's very high throughput, so it can handle thousands of OpenStack instances, and it provides a geolocation service: every OpenStack instance is coded with a geolocation and the millisecond latency from every other OpenStack instance in the world. That allows us to do specialized placement of functions and applications. In terms of NFV, we've integrated the multi-site manager with RIFT.io, the OSM model through RIFT.io. The multi-site manager has the global context, the global topology, of all the OpenStack contexts. OSM, or RIFT.io, then has the catalog of network services, so you can use that combination of the network service catalog and the multi-site manager to deploy multi-site network services across the globe. We use RIFT for the MANO and the MSM, we use Juju for the day-zero and day-one configuration, and then we have some technology that does day-two-and-beyond network service orchestration. So here's an example of the multi-site manager deploying virtual CPEs. When you deploy a virtual CPE, you ship a box to a customer. When that box is turned up, it registers with a provisioning server, and the provisioning server downloads the software onto that box. The multi-site manager provides that context, and from there on we can push NFV solutions to that box through the OSM MANO. And then we can connect that box, through either the MPLS network or private line networks, from the central office NFVs to the NFVs out at the customer sites. So I'm going to do a quick demo. I'll turn on a video and walk you through it, but this is basically our multi-site manager. What we're going to do is provision two OpenStack environments and launch services across those two OpenStack sites. So let's think about the application like this.
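To give a feel for what that "attach a customer to a site" step does under the covers, here is a minimal Python sketch against plain OpenStack APIs via the openstacksdk. It only illustrates the Keystone project plus basic network plumbing; the VRF/VNF extension is CPlane-specific and not shown, and the cloud names, customer name, and CIDR are assumptions.

```python
import openstack

# Hypothetical per-site cloud entries in clouds.yaml; in the real deployment
# the multi-site manager holds this inventory and drives each site's APIs.
SITE_CLOUDS = ["hongkong-site", "chicago-site"]

def attach_customer(customer, cidr):
    """Create the customer's project and basic network plumbing in every site."""
    for cloud in SITE_CLOUDS:
        conn = openstack.connect(cloud=cloud)

        # Keystone: one project per customer per site.
        project = conn.identity.create_project(name=customer)

        # Neutron: a tenant network, subnet and router for this customer.
        net = conn.network.create_network(name=f"{customer}-net",
                                          project_id=project.id)
        subnet = conn.network.create_subnet(network_id=net.id,
                                            project_id=project.id,
                                            ip_version=4, cidr=cidr)
        router = conn.network.create_router(name=f"{customer}-rtr",
                                            project_id=project.id)
        conn.network.add_interface_to_router(router, subnet_id=subnet.id)

        print(f"{customer} provisioned in {cloud}")

attach_customer("nuco", "10.20.0.0/24")
```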
So if you're deploying edge analytics, you're going to want to deploy some analytics collection at two sites that are close to your routers, and you're probably going to want to provision something in AWS to take a long-term data feed. So what we'll do in this demo is first create a customer context — we'll call it NUCO — and the information about NUCO is held centrally in the multi-site manager. You can add some attributes to it, like how many sites they're deployed in or want to deploy in. Then we're going to attach that customer to two sites. So now we're going to provision OpenStack and push the context of this NUCO customer into two sites. We'll just set up some network parameters. These are turned into API calls that are pushed down from the multi-site manager into, basically, Neutron, which spins up the OpenStack VXLAN networks. Now you see the API calls going down the side as we're instantiating. What this is doing is instantiating that customer in Keystone — two Keystones — creating the projects and all the necessary objects in OpenStack, and the networks: the floating IP networks, the internal networks, and the MPLS networks. We've already peered with the PE router in an ASN context within that VXLAN context. Now we're going to add some external sites. So if AWS has already been set up, we can then peer to a BGP context within AWS. We'll just put in the parameters and the ranges of the overlay technology — basically this will be an over-the-top network. Okay, so that's how we reach AWS; we're also doing Azure. Let me just fast-forward here. Now we're going to add some BGP entries — they're basically like ACLs — so we can extend the BGP context across AWS, Azure, and our OpenStack sites. Now we're going to create some VMs. In this case we're using a GUI to create them, but in the general case you'd probably create a TOSCA document and, as part of that TOSCA document, have some sort of site location where you deploy the applications. So we're going to create a couple of VMs here and attach them to different network interfaces in these two sites. We're also going to set some floating IP quotas — we're going to rate-limit the floating IPs for some of these VMs. That's just issuing a Nova boot, right? The green means it's been started, and you have full control from our APIs to do whatever you want with the VMs; you can even look at the console, start and stop them. Let me just fast-forward here. So this is what it looks like inside OpenStack once we've actually done the provisioning. And I'm going to cut this short. Mark, did you want to do a demo here? I think we have about six or seven minutes left. Thank you.

Thank you, John. So the tool that John was showing there is the CPlane Networks multi-site manager — have I got that right? Yes, that's right. Good. Now I've forgotten where I was — I wasn't typing my password. But it's nicely integrated, and this is one of the things I wanted to show you here: it's nicely integrated into our environment. This is the tool that we use to model OpenStack, and this is the logical model of OpenStack. You'll see that there are a number of the OpenStack services here, so we'll choose one of them — Glance, for example, or Nova Compute.
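For the "that's just issuing a Nova boot" step, the per-site call looks roughly like the openstacksdk sketch below. This is a hypothetical illustration: the image, flavor, network, and cloud names are placeholders, and the multi-site manager would issue something like this once per site rather than an operator doing it by hand.

```python
import openstack

conn = openstack.connect(cloud="hongkong-site")

# Boot a VM on the customer's tenant network (names are placeholders).
server = conn.compute.create_server(
    name="nuco-edge-collector-1",
    image_id=conn.image.find_image("ubuntu-16.04").id,
    flavor_id=conn.compute.find_flavor("m1.small").id,
    networks=[{"uuid": conn.network.find_network("nuco-net").id}],
)
server = conn.compute.wait_for_server(server)

# Allocate a floating IP from the external network and attach it to the VM.
ext_net = conn.network.find_network("ext-net")
fip = conn.network.create_ip(floating_network_id=ext_net.id)
conn.compute.add_floating_ip_to_server(server, fip.floating_ip_address)

print(server.name, "is", server.status, "with floating IP", fip.floating_ip_address)
```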
And the other piece is: this is the GUI logical modeling tool that we use, and we can then apply that model to an actual physical environment. So we'll see the physical machines we have associated with this, the OpenStack services, and indeed the CPlane services — CPlane Networks services, I should say — running inside containers. I'm showing you this because they're running on these two Orange Boxes you can see down here at the front, which are running a full Mitaka-based OpenStack cloud, this one right here, with CPlane attached. And if we drop into the Horizon dashboard, you'll see a number of different VMs that we're running, and this is all nicely integrated with the multi-site manager from CPlane. So if I were to launch a VM — in fact, I'm just going to do it, right? A little dangerous, I'm now going off script. There we go. Reload — boom, boom — this is always what happens when you go off script. But if I wanted to launch a VM, you'd see it come up and then be reflected in the multi-site manager, with the network attached in the right way. So we've done some great integration with these tools to make it very easy for the telcos to operate. And this was the reason why PCCW and others are working with us and CPlane Networks: it's not standing up OpenStack — most people can do that today. It's not even necessarily integrating a VNF — sorry, an SDN technology, a networking technology — most people can do that today. It's doing it in a way that allows you to repeat this deployment and operate it in a scalable, efficient manner. And that means tight integration between the tooling that we, Canonical, have built and what CPlane Networks have built, to deliver value to PCCW. So I think we've just got a couple of minutes. Three minutes. Three minutes — if you have any questions about this, we'd be happy to take them; on-topic questions are much more likely to be answered. No? Yes, sir. I'm sorry? Sure — we're a software company based in Silicon Valley, been around since about 2013. We have about 20 employees worldwide, and we have an architectural platform, a broad-based platform for orchestrating many, many things. All of our products are built on the same platform. We have an SDN solution, as you saw; we have VNFs to connect to MPLS; we have hardware provisioning — basically provisioning of hardware through CLI and NetConf and all that; the ability to provision MPLS; and we have the multi-site manager. And we do extend applications on top of this platform. Thank you. A lot of them are operational. I'm not allowed to talk about them in detail — I can talk about them in broad brush — but everything we talked about is in some form of operation. Great, if there are no further questions, then we will, I promise, have this up and running on our booth. We're right inside the hall, first on the left as you're walking through. So if you want to see this in more detail and kick the tires and try it out, please do come to the booth. CPlane Networks also have a booth on the show floor, so they can also talk at length about this. And if you'd like to know any more about how you can use this, or about the PCCW implementation, we'll be happy to talk about it. Thank you. Thank you.