So welcome, welcome to the last day of OpenStack Summit. I hope everybody's been getting a lot of good information and you're not too hungover from last night. With that, let's get started. We're going to be talking about the edge, but in a way that's very different from what most people talk about, which is hardware. And the reason we can talk about it is that we've been spending the last two years living on the edge. I'd like to introduce Jason Kat, who's been the product manager — Jason and I were the people who came up with this stuff, and we've been delivering it. We have thousands in place at this point. And Glenn, who also used to work with us and has now gone over to the enemy. So Glenn lived it as well. We're going to talk about the challenges of working with hardware at the edge, and some of the things we've done to mitigate those challenges. When you're not living in a data center, you're living in other places, and some of them are pretty rugged. Now, our product is mostly in fairly comfortable offices, although we do have a customer that puts them in grain elevators, another that puts them next to ovens, and another that puts them in manufacturing environments. So yeah, some pretty tough environments. And in the 5G world, you stick them on top of cell towers, you're putting them in light poles. These are environments that are insecure, open to the elements, and you have to think about that with the hardware. You can't just buy your basic laptop and stick it on a wall. And then, what's cloud all about? Hardware and software disaggregation, right? You want generic hardware so you can replace it as you update. But is that actually possible at the edge? That's one of the things we're going to be talking about here.
Another thing we've been running into is what's called service chaining. So we put a series of — excuse me — network services... You've got some gremlins. I do have some gremlins. Okay, so we have a series of applications, all network services, that work together, and it's called service chaining. And what we've discovered is that we actually have to test each one of these combinations. That's actually one of the value-adds we sell to our customers: we certify that these combinations of applications work together, which is something you wouldn't necessarily do in a cloud environment. And then, of course, anything involving maintenance or getting the service to where it needs to be is a challenge, because you can't just walk over to the data center or just log into that secure site. Everything gets multiplied as we blow out to the edge. So I'm going to talk a little bit about the product itself, which is Virtual Network Services. We did a big presentation when we first introduced the product at the Boston Summit — I don't know how many of you were there. We did it on the stage; the product had just been launched literally two months before at that point. But two years on, we've added service chains, we've added vendors, we've added services, and the product's been very popular. We've also extended the platform. We originally started on a box at the edge, and we've since expanded to the cloud. So we have a cloud option, the hosted option. We also have a lot of hybrid environments, where customers will buy the universal CPE and put it at their sites, but also buy some services in the cloud. That's a very common scenario, and it's actually one of the value-adds for customers: they can buy, say, a firewall at the edge, but they can also buy a firewall in the cloud environment to optimize their network performance. We've also recently extended it to the public cloud as well.
So we're connecting to Amazon Web Services, and there'll be additional public clouds going forward. And here's just a very quick slide of some of the various deployment options. Again, that's something our customers are really excited about: the fact that they can put it together into a lot of different architectures. One of my co-workers says we've reviewed over 400 designs, and he thinks there are actually 400 unique designs. I don't think that's actually true, but he likes to say that. It's a really nice thing that our customers can put together things that suit their requirements. So, starting two years ago, we had to start somewhere, right? We didn't even have the term "edge" two years ago — edge computing was nothing. So we started with a box. It was a single-size box, which we quickly discovered was not enough. It had limited functionality. It was not really what you would call a white box — it was a gray box. And it was sort of standalone, so we didn't have automated remote provisioning, which is something we'll talk about a little later in the presentation. And it didn't really do the job; that's kind of the bottom line. So we took what we learned — because this is all about agile — and we extended to the next level. I'm just going to step back a little and do a real quick level set on the difference between white, black, and gray boxes. The black box is your traditional appliance. Cisco routers, the ISR — those are considered black boxes because the hardware and software are tightly integrated. They're typically proprietary boxes, and it's a closed system. There's nothing wrong with that — it's perfectly fine — but it doesn't give you the agility that you really need in a cloud environment. Then there's another model, typically called a gray box. A gray box is typically also packaged up as a proprietary system.
It has a lot of the same problems black boxes have, but it is a little more open, with the ability to add some third-party applications to it. Then you move on to the white box, where you complete the disaggregation of the software from the hardware, and that's the ideal — that's where we want to go. There's another slide that layers out all the things that need to go into a white box, and some more slides showing more; you can look at them later. So these are some of the components we've included in our box. We've obviously put OpenStack on the box, and to do that we had to strip it down to cram it into a box. If you think about it, a box has a limited amount of CPU, memory, and storage. We're selling services to our customers, and those services take up CPU, memory, and storage, so we want to maximize the amount of those resources available for customer applications rather than for the management layer underneath. That's particularly acute in our application, but I suspect other applications as they come along — telematics and some of those others — will have the same kind of requirement: you want to really minimize the management footprint. So I think this goes over to Jason. Does it not? Talking about hardware disaggregation. Yeah, no, I've not looked at this one. Should I keep going? Yeah, keep going, I guess. I'll jump into one or two of them. So where does that leave you? The reality of hardware disaggregation is we're not perfect yet, particularly out at the edge. Within the data center it's easier to do. We're not a hardware manufacturer — Verizon never will be. We rely on our hardware manufacturers to give us these capabilities.
So we have to trade off: how can we avoid vendor lock-in, and how can we drop in different vendors over time — and not even just different vendors, but new versions of the hardware as the hardware moves on and gets better over time. One of the things you need to think about — and I'm not sure it really applied to Verizon so much — is whether it's better to buy a standard off-the-shelf product or to consider building a specialized box from parts. For some of the edge applications — I know some of the SD-WAN vendors have gone this way — a specialized box built from parts makes sense. For Verizon, that was not really an option; we don't really have the infrastructure to support that kind of approach. And of course, continuous integration. We're constantly bringing in new vendors, constantly doing updates, all of which needs to be integrated. I mean, we have a lab that does literally nothing but test these service chains on the boxes, and they spend a whole lot of time doing it. And just to expand a little further on disaggregation: I think one of the lessons we learned early on was that, in order to be agile, to scale, and to onboard the new technologies that are continuously changing, it becomes more and more important to disaggregate. At least when I was at Verizon, we were really trying to find our way in terms of how to integrate best of breed on both the hardware and the software side. And when we looked at that, we saw a lot of opportunities to build a framework. But where we learned some of those lessons is that there's a performance factor that needs to be evaluated as part of that process: it's not just taking a piece of hardware off the shelf and a piece of software off the shelf and combining them.
It's really integrating the two layers properly and making sure that they scale together. And I think that was part of the learning we worked through in the initial 1.0 approach. Yeah, I would agree. And my remark about the SD-WAN applications: they've essentially turned into a gray box. Again, that's a perfectly fine approach; it just didn't work for us. So I guess one thing we've learned from 1.0 and our initial launch of this product is that one size does not fit all, right? What we eventually brought to the market was five different-size platforms, depending on the functionality customers want. We're not talking to customers about platforms. We're not talking about hardware. We're not talking about boxes. We're talking about functions and services, and we fit the device to those decisions. Other things we've had to consider as well: lifecycle management, and provisioning decisions around hardware reloading. So one thing we've worked on for two years is this concept of zero-touch provisioning, which, as Beth says here, is harder than you think. This has been the hardest, biggest project I think I've ever been involved in — deploying zero-touch provisioning. I've got some of our friends from ADVA in the room here who know all about that pain as well. It's been a huge project. And this is the concept that the janitor can take the box out of the packaging, power it up, plug it into one circuit — bang, it's done. This thing provisions itself up in a matter of days and weeks. You avoid the truck roll as well. But that's introduced some challenges, like pushing software images over the wire, right? Some of these SD-WAN vendor images are four or five gig in size. So you're pushing that down over a 10-meg remote circuit, and you're sitting there waiting for hours on end.
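That waiting time is easy to ballpark. Here's a minimal sketch of the arithmetic, using the round numbers from the talk (a 5 GB image, a 10 Mbps circuit); the function name and the efficiency knob are ours, not anything from the product.

```python
def transfer_hours(image_gb: float, link_mbps: float, efficiency: float = 1.0) -> float:
    """Rough time to push a VNF image over a WAN link.

    image_gb:   image size in gigabytes (e.g. a 5 GB SD-WAN image)
    link_mbps:  link speed in megabits per second
    efficiency: fraction of line rate actually achieved (protocol
                overhead, retransmits); 1.0 = ideal
    """
    megabits = image_gb * 8 * 1000          # GB -> megabits (decimal units)
    seconds = megabits / (link_mbps * efficiency)
    return seconds / 3600

# A 5 GB image over an ideal 10 Mbps circuit: over an hour, best case.
print(round(transfer_hours(5, 10), 2))      # ≈ 1.11 hours
```

At a more realistic 50% effective throughput that doubles to a couple of hours per box — which is why image size matters so much for zero-touch provisioning at scale.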
So these are the things we've had to think about as we deploy zero-touch provisioning. And upgrades too — remote upgrades over the wire — and other things our partners are working on with us, not just at the operating system level, but on the virtual functions as well. These are all things we need to think about when we're talking about remote upgrades and zero-touch provisioning concepts. So, upgrade in place is one of the things we've just started to grapple with, because it's a relatively new product. We're shipping it, we have it out, and now — two years in — our vendors are starting to give us new images. They want to do upgrades. And we need to be able to do them in place, because that's one of the things we're selling to our customers: you don't have to have a new box put in. You can just upgrade, or change vendors, or swap out and add things over time. So we've had to grapple directly with that: well, how do you do it? And it's not just upgrading the network services application layer; it's also how you upgrade the management layer underneath. But it's also how we automate it, right? That's the key thing we want to do here, because the general concept is you don't touch a customer's box unless you absolutely have to. Some customers need different versions of features; some customers need different things on their boxes. So we're automating on a per-customer basis too, which is a very tricky thing to do. Well, with UCPE — universal CPE — right?
So, we do have this opinionated design around Ethernet access, in terms of the rollout of Ethernet access and how common it's becoming, or has become. But one of the things we also have to be cognizant of is the embedded base that's out there. We've got a lot of TDM services that still exist, which represent millions and millions of dollars of potential revenue sources for up-sell and added capabilities. In those situations, a customer may say: look, I don't want to rip out my T1 infrastructure to put in Ethernet service; I'm just not ready for that. And that could be on either the left side of the UCPE or the right side of the UCPE. So in this type of situation, where you're able to deploy a UCPE box leveraging things like smart SFP pluggables — where you can do some media conversion between TDM and Ethernet services — it allows you to insert a UCPE device into a situation that wouldn't otherwise support it. And that lets you add virtualization, NFV, VMs, and services that the traditional TDM boxes cannot support. And there's an amazing amount of DSL out there. Absolutely, yeah. We always talk about Ethernet, and we're always looking at Ethernet, but there's an incredibly large untapped market out there — well, I would say it's tapped, and it's an aging market, but it's still something. I equate it to fracking in the oil industry, right? You can go back and frack your existing wells — in terms of TDM technology — and recover some of those resources. Well, and one of the biggest use cases for this product is tech refresh.
So, when you're going into a tech refresh — and I can't tell you how many customers literally say the boxes they have on their sites are out of support, the vendor doesn't support them anymore and hasn't for five years — they are looking for a tech refresh. And here we are: we need to provide a service that can connect to DSL, to Frame Relay — I think there are only a few of those left, although there are still a few — and also to TDM, the T1s; there was a huge installed base of that. So yeah, absolutely. That's a critical piece to think about when you're deploying these things out at the edge, because you have so much less control over that environment. Yeah, just to click on the call-out here: if you are going down this path of looking to deploy at the edge, or white boxes, you've got to make sure you're picking a partner who can support you globally. At Verizon, we sell to enterprises, we sell globally — our customers want these devices in the far reaches of Brazil, in Africa, in Western Europe, in the US. So homologation is the key thing, and probably the only thing I'll call out on this slide. Not only does the vendor you choose have to have a device that's fully homologated globally, but they've got to be able to ship it, deliver it, install it, and maintain it, certified for those particular markets too. On that particular topic, we grappled with it a lot, and we've got other vendors, who I'll talk about in a minute, who are part of that journey with us too. So that's a key thread. Yeah, and just some logistics blocking and tackling around sparing: how many spares? We argue with our supply vendors about stuff like that — how many spares are they going to keep, and in what locations. That's painful. I mean, it's the details.
And from the hardware vendor standpoint, the global distribution you need to have in place to support these types of deployment modes can be very complex. Speaking from Verizon's perspective, being that it is a global company, there are challenges in each region that are unique to that specific region. And as a hardware vendor, we've been partnering with Verizon to work through those, and to abstract that from Verizon as much as we can — to shield them from it. Oh yeah, distribution is another key one. Enterprise customers think we're Amazon: they want the stuff the next day. Forget next day — they're expecting Dell to manufacture a server and have it shipped within an hour. Could we do that? Not to Singapore. There's something called Customs. And again, this is all stuff that comes up with hardware. Software you can send over the wire — although there are issues with software too, because there are certain pieces of software you can't load on a box and then ship overseas. We have customers in China, and there's the Great Firewall, and a whole bunch of issues related to certain pieces of software that can't go into China. So you can't just take all the pieces of software and ship them off willy-nilly to everywhere. That was another consideration we had to think about.
And that's where things like custom field engagements in your global distribution can help shield you from that type of thing, and work through those local idiosyncrasies. Having been on both sides of the fence of this equation — having worked with these two on the Verizon side and eventually moving over to the hardware side — I feel like I've learned an incredible amount, in terms of both the underlay aspects and the over-the-top aspects of the services that are going to be offered. One of the things I saw as a learning experience coming over — this is my first venture into the hardware space — was saying, look, we've got to offer all of these services, and really identifying the fact that we've got to do better at integrating the over-the-top services and adapting the hardware layer to them. And some of that is very simple things, like performance and NUMA awareness. Some of the UCPE models deployed in the Verizon portfolio today are two-socket, dual-NUMA configurations. As we were working through and designing some of the deployment modes for the services on offer, we initially didn't give a lot of consideration to NUMA boundaries. Working with our software partners and our hardware vendors, we very quickly adapted the software applications to be able to support that. So, that being said, all of our software deployments today — the services that sit on top of the hardware itself — have to be, for example, NUMA-aware, right?
We don't want to create any remote-memory situations, so we need to make sure that when we build our OpenStack flavors, they account for the NUMA boundaries, and that the application orchestration layers within Verizon's Virtual Network Services platforms are aware of that — able to identify which hardware platforms are there and what the NUMA boundaries are. And the NUMA boundary is defined by which slot the card is populated in. All that mapping is metadata that had to be collected, cataloged, and factored into the automation aspects of orchestration. That was a good lesson learned in terms of performance: as we started to align the software to the hardware architectures, and vice versa, we started to get more bang for our buck in performance. So, this is data that has been published by Intel. I think a lot of people have seen it already, but one thing that becomes very important is that initially, I think, we were putting all of our eggs in the DPDK basket — and part of that had to do with the maturity of the various options. Yeah, I mean, there were limitations and a lot of factors, and I think DPDK really equalized all of that for us and gave us very consistent performance. Right, but we still had to work with a lot of the vendors who weren't really DPDK-aware, because these vendors were originally hardware-based — I think all of them were originally hardware-based — so they never had to think about DPDK. When we first brought them into the lab, performance would frequently fall down, and we'd go back and say: oh, your image isn't DPDK-aware.
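The NUMA-aware flavor idea above can be sketched concretely. The `hw:*` keys are standard Nova flavor extra specs, but the flavor name, sizes, and the helper function here are hypothetical — just an illustration of the check the orchestration layer has to make against each UCPE model's per-node resources.

```python
# Sketch of a NUMA-aware flavor definition (hw:* keys are standard
# Nova extra specs; the name and sizes are illustrative only).
flavor = {
    "name": "vnf.medium.numa",        # hypothetical flavor name
    "vcpus": 4,
    "ram_mb": 8192,
    "extra_specs": {
        "hw:numa_nodes": "1",          # pin the guest to a single NUMA node
        "hw:cpu_policy": "dedicated",  # dedicated pCPUs, no oversubscription
        "hw:mem_page_size": "large",   # hugepages, needed for DPDK datapaths
    },
}

def fits_single_numa_node(flavor, cores_per_node, mem_per_node_mb):
    """Can this flavor land entirely inside one NUMA node of a given box?"""
    if flavor["extra_specs"].get("hw:numa_nodes") != "1":
        return False
    return (flavor["vcpus"] <= cores_per_node
            and flavor["ram_mb"] <= mem_per_node_mb)

# A dual-socket UCPE with 8 cores / 32 GB per node can host it locally:
print(fits_single_numa_node(flavor, cores_per_node=8, mem_per_node_mb=32768))  # True
```

The point of the check: if the flavor spills across the boundary, the VNF ends up with remote memory accesses, which is exactly the performance penalty described above.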
Well, I think a lot of vendors had experience with DPDK and some didn't, and through the selection process we were able to validate which ones did and which ones didn't. Yes. That's kind of what brought us to some of the decisions made there from a software standpoint. So I don't need to go into much detail about DPDK, other than that it was a first step into: look, we need to be able to provide performance at wire speed, as much as possible, when we're dealing with these WAN deployments. DPDK was a very critical first step in doing that, and we had to make sure, at least from the hardware perspective, that all of the hardware was able to support DPDK. But one of the things we continue to see evolve is that it's not just DPDK. You have SR-IOV out there as well, and there are benefits to using DPDK over SR-IOV and vice versa. I think the next step is taking that knowledge and applying it to, say, UCPE 2.0, and making sure we factor it into the next iteration. I think it was discussed at the Boston OpenStack Summit, where they talked about the different north-south and east-west traffic patterns. It's been demonstrated — this is data we collected from the Intel team, but these are very valid situations — with service chains set up very similar to the Verizon configurations you have today, which are mostly north-south. Well, that's the point, right? If you look at what we have today, you have your WAN and your LAN: your WAN is terminated towards your access networks, and your LAN is facing your customer. Everything deployed today, at least in the modern solutions, is DPDK-based.
So all your cards are on the DPDK path. DPDK doesn't necessarily perform as well north-south, but it performs very well east-west. As we build the next iteration of UCPE, at least within the Verizon space, we need to take that into account when we set these things up. SR-IOV is very good north-south, but it's very poor east-west because of the choke points associated with the PCI bridge — hairpinning all of those functions back through the same PCI bridge for service chaining is not very practical, right? You're using a lot of the hardware resources that are there. And I apologize to Intel that I didn't flag this on the slide, but I want to give credit to Intel here: these are their screenshots. They actually collected this data, and we were able to replicate it across all of our hardware platforms, leveraging the software suites Verizon is using, such as ADVA. We were able to clearly demonstrate that there are benefits to using a combination of both SR-IOV and DPDK. However, there are other challenges to adopting SR-IOV in those types of spaces. A lot of your VNF vendors don't support it. There's the lack of the ability to carve those resources up in ways that are really service-chainable — I guess, if that's a word — and practical. So those are challenges that are unmet, and I think at the next OpenStack Summit we should be able to come back and say: okay, we've made some progress in getting all of our eggs out of the DPDK basket and actually diversifying. Yeah, I'd actually like to talk a little bit about that.
Obviously, our applications are very heavily north-south, because they're network applications. But as the edge moves — not just network applications, but into IoT-type applications and smart-cities types of things, where there's going to be more east-west traffic between applications within the box — I think there are going to be some really hard decisions that will have to be made. Yeah, absolutely. And it's a conversation that happens at both the software and the hardware layer, right? When we're building these boxes, we have to build them in a way that provides the primitives for supporting the over-the-top solutions that are not only here today, but are going to be there tomorrow. And tomorrow is a couple of months away, right? We're not talking years here. We need to make sure we've got the reference architectures to be able to scale out. So, just to drive the point home: the next iteration needs to be a combination of SR-IOV and DPDK. It's not going to be a simple DPDK-only solution going forward. We need to leverage the technologies where they are most suitable. I think it's been well founded, in terms of the data produced by not just Dell but Intel and other folks within the OpenStack community, that this combination of SR-IOV and DPDK is the direction forward at this point.
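The "use each where it's most suitable" rule of thumb can be sketched as a tiny per-hop decision helper. This is purely illustrative — the hop names and the function are ours — but it captures the trade-off described above: SR-IOV for the north-south hops that touch the physical WAN/LAN ports, a software datapath like OVS-DPDK for the east-west VNF-to-VNF hops in the chain, where hairpinning through the PCI bridge is the choke point.

```python
# Illustrative datapath choice per service-chain hop.
# Anything that isn't the physical "wan" or "lan" port is treated as a VNF.
def pick_datapath(hop):
    """hop is a (src, dst) pair of endpoint names."""
    external = {"wan", "lan"}
    src, dst = hop
    if src in external or dst in external:
        return "sr-iov"    # north-south: bypass the vSwitch, near line rate
    return "ovs-dpdk"      # east-west: chain VNFs in the software datapath

# A hypothetical three-hop chain: WAN -> firewall -> SD-WAN -> LAN
chain = [("wan", "firewall-vnf"),
         ("firewall-vnf", "sdwan-vnf"),
         ("sdwan-vnf", "lan")]
print([pick_datapath(h) for h in chain])
# → ['sr-iov', 'ovs-dpdk', 'sr-iov']
```

In practice the choice is also constrained by which VNF vendors actually support SR-IOV, as noted above — this sketch ignores that.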
There are other solutions evolving too, especially in the smaller hardware architectures, where you have the opportunity to use, say, Marvell chipsets and technologies with DSA — distributed switch architectures — to do some hardware offloading, so that SR-IOV and DPDK don't necessarily have to be the end-all be-all. Things are evolving there as well, so those are just some things to look out for. But in terms of the lessons learned here, the one that's very obvious is: DPDK is great, but we need to move a little bit further if we want to scale. Thanks, guys. So, I guess, let me talk quickly about 2.0 — I've only got about four or five slides, so we can take some questions at the end. With everything Glenn's been talking about, and all the challenges that's been raising for us, we're now at the point where we're just about to launch the 2.0 version of our product. Here's a kind of timeline snapshot. As Beth says, we've been working on this product now for probably two or three years. We launched it originally with our partners Cisco and Juniper, using the gray box concept. The 1.0 white box, which is now live in the network with our partners, was launched about a year and a half ago — I think we had the launch at Boston. But we've also moved this into our cloud, our hosted network services cloud, so we give our customers the opportunity to have their own services there. We've moved some of it into the public cloud, because SD-WAN functions were something customers wanted — they want their SD-WAN functions as close as possible to their applications. And now, in the next couple of weeks, we're actually launching the universal CPE 2.0 platform.
So that's something I'm going to go into now in a few slides: what were the motives behind it, and what were the pillars that drove us towards it. And 3.0 is coming next — we're already talking about 3.0 concepts. Containers are a hot topic this week as well, so: how can we create a platform that can support containers, virtual machines, a combination of both, or one or the other? Just before I go into that, two things I want to flag up here. Everything we're doing here — we talk about hardware, we talk about VNF vendors — but two of the key things we've worked on over two years are orchestration and onboarding. Orchestration is the ability for us to press a button and for virtual machines to be spun up, in combinations too. This is another strong relationship we have, with Ericsson in this case. We've worked on this platform, and we've got a lot of internal systems as well. Well, just on the orchestration — yeah, yeah, exactly, we can't forget that. We created this huge ecosystem, which is the glue for this product: the ability to push this stuff out, the ability for us to do closed loop, the ability for us to do policy management, which goes into a portal where customers can change their policies on the fly — MACD — because now what we have is customers who want to change what they've done, right? This is the flexibility that Virtual Network Services brings to the table. And onboarding is another key thing. We've created a Verizon marketplace, and this is where we invite select vendors to upload their images, right?
So vendors upload their QCOW2 files to this site, and it runs some streamlined certification against an OpenStack environment — because the reality is, as Beth talked about, pretty much half the vendors we have in our catalog now didn't have an OpenStack-compatible image when we started on this journey. This is a way for us to streamline the process for these guys, because we've spent a lot of our time testing this, getting them to a condition and a state where this stuff can work — not just from a provisioning perspective, but also from a closed-loop perspective, because we've got a closed-loop platform we can connect, so if virtual machines die, they can spin up automatically in different environments. And let me add: a number of our customers and vendors have said, oh, well, this isn't hard. Now, it is hard. We've literally spent three years at this, and what we've done has been pioneering, particularly around the marketplace. A lot of the work around automation and the marketplace is getting to the point where we can go to the vendors and say: these are the requirements you need to meet to make it work. So look, here's the reality: this isn't lab stuff, right? This stuff is in the field. As of the last time I checked, a few weeks ago, we've got thousands of virtual machines deployed on the edge. Some of the customers, you guys will know — they'll roll off the tip of your tongue. Fortune 500 customers. We have a couple of little guys, too. Yeah, absolutely, but some big ones. So the challenge now is scale. We're projecting, by the end of 2019, over 10,000 virtual machines on the edge in OpenStack.
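Part of that streamlined certification is mundane sanity checking of uploaded images. As a minimal sketch: every QCOW2 file really does start with the magic bytes `QFI\xfb` followed by a big-endian version number, so a first-pass gate can reject non-QCOW2 uploads before any expensive lab testing. The helper and the fake headers here are illustrative, not the actual marketplace pipeline.

```python
import struct

QCOW2_MAGIC = b"QFI\xfb"  # real qcow2 magic; the version lives in bytes 4-7

def looks_like_qcow2(header: bytes) -> bool:
    """Cheap first-pass check that an uploaded image is a qcow2 file."""
    if len(header) < 8 or header[:4] != QCOW2_MAGIC:
        return False
    (version,) = struct.unpack(">I", header[4:8])
    return version in (2, 3)  # qcow2 version 2 or 3

# Fake headers for illustration:
good = QCOW2_MAGIC + struct.pack(">I", 3)
bad = b"\x00" * 8  # e.g. a raw disk image, not qcow2
print(looks_like_qcow2(good), looks_like_qcow2(bad))  # True False
```

A real onboarding gate would go much further (virtio drivers, cloud-init support, closed-loop hooks), but failing fast on format mismatches is the cheap first step.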
So the challenge for us is not only to provision at that scale, but also to manage and operate it, because we're now getting customers asking us to make changes and modifications. And one of the key things Beth touched on is service chaining. We've been testing service chains for two years. Frankly, with all of these vendors in our catalog, we pretty much give the customers the choice to pick and choose the vendors they want, the combinations they want, the network combinations they want, the additional features they want, be it multi-service access or high availability, all automated. In the old world, you didn't buy a high-availability Cisco router, right? You bought router one, you bought router two, and an engineer connected them together and made it work. In this world, you buy a high-availability pack, automation spins it up for you, and the two machines already know they're associated with each other. So there have been lots and lots of challenges there. There are actually over 900 service-chain combinations we could potentially deploy. We've tested 30. It's taken us two years to test those 30 combinations, be it a Riverbed service chain with a Palo Alto, with an SD-WAN vendor like Versa, et cetera. So this is where all the work is as well. There are probably another 20 or 30 on our priority list, and it's all based on what the customers want. We have this voice of the customer, signed orders, signed deals; that's what we work on, and that's what we do. So this is going to continue. This doesn't stop. We're not going to get to 900, obviously, maybe 60 or 70, but this is the magic, and this is the glue that makes this whole thing work. And it's very hard. Yeah, it's very hard, because if you think about it, if a customer wants Riverbed plus Versa Networks plus Palo Alto with a host operating system on a Dell white box, we've now got five vendors we need to bring to the table to make this stuff work.
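The 900 figure is just combinatorics: one vendor choice per position in the chain, multiplied out. A rough sketch with made-up catalog sizes (the vendor counts and names below are illustrative, not the actual catalog) shows how quickly the space grows past what anyone can certify:

```python
from itertools import product
from math import prod

# Hypothetical catalog: candidate vendors per service-chain position.
# Sizes chosen for illustration only; 6 * 6 * 5 * 5 = 900 combinations.
catalog = {
    "sd_wan":   [f"sdwan_vendor_{i}" for i in range(6)],
    "firewall": [f"fw_vendor_{i}" for i in range(6)],
    "wan_opt":  [f"wanopt_vendor_{i}" for i in range(5)],
    "router":   [f"rtr_vendor_{i}" for i in range(5)],
}

# Every deployable chain is one pick from each position.
chains = list(product(*catalog.values()))
print(len(chains))  # 900

# Same count without enumerating: product of the catalog sizes.
print(prod(len(vendors) for vendors in catalog.values()))  # 900
```

Which is why the talk's approach of testing only the 30-odd combinations customers have actually signed for, rather than the full cross-product, is the only workable one.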
And remember, when you're doing this stuff, it's the first time a lot of it has ever been done. Riverbed, for instance, we're just about to launch with some of these vendors. It's a massive effort; it's never been done before. Everything we're doing here is a first. So what do we focus on to make it better? There are probably five key pillars that myself, Beth, and the rest of the product team got our heads together on when we started to work out how to make this platform better. Extreme automation is a big thing in Verizon right now. I touched on zero-touch provisioning; automated upgrades we talked about; MACD, you know, modify, add, change, delete. Now that this stuff is live, customers want to change things: I don't like this SD-WAN vendor anymore, I want to change to a different SD-WAN vendor, press some buttons, and 15 or 20 minutes later the service should be back up. So that's the extreme automation challenge we're working on. Higher throughput: believe it or not, these guys now want 10 gig in a virtual environment. Yeah, they do. Out at the edge. This is a challenge we as a community have got to face, because they're all in on this stuff. The products and services today scale to maybe one or one and a half gig, which is where the product is, but they want more. They want 10 gig. On a two-core Atom. On a two-core Atom, yeah, exactly. So it's completely unrealistic today. But there's also the concept of the edge in the center; it's not just the edge. What we see enterprises wanting to do is take these edge platforms and put them in the center. Put them in the data center. Yeah, that's right. Which is interesting: they might consolidate, say, 40 stores onto one firewall, and that saves them a lot of money.
But that's a challenge for us. Faster performance is another key pillar. We touched on complex service chains; that's a challenge in the hardware, as Glenn alluded to, but it's also a challenge in the software, for the operating system vendors and for our VNF vendors. The software needs to get to a position where it can perform even faster. Just two other quick ones. Enhancing the customer experience: pre-staging, which we talked about. The customers put us on the clock. They're on these conference calls and they go, okay, let's go, you told me this stuff goes from power-up to in-service in minutes, you're on the clock. So that's the experience we have to deliver to these guys. And in-service scaling as well, which I think is still a challenge in OpenStack today: a customer on a 10-meg virtual machine wants to go to 100 meg, or wants to add some additional features, so in-service scaling is something we need to work on as a community. And then platform enhancements: they want more. One thing that's caught us by surprise is they want to put everything on these things. They don't just want the WAN services on these boxes; they want to move the LAN into the box, they want to move IT infrastructure onto it. Enterprises are all in on this concept. Yeah, we have one retailer that has a rack, and they're in gas stations, tiny little stores, and they're like, we want that space back. I mean, it's four square feet, but they want it back. So I think we should really just put a stake in the ground and get away from the term CPE, right?
It's not really... that's an old, antiquated term anymore. This is where they're doing these things. So I think that should be 2.0, right? Retire this term CPE; it's not that anymore. Just resources. Yeah, exactly. Tenant space is the key thing they want now. This is a project we're actively working on right now. For example, the largest box we have in our portfolio is a 36-core Dell platform, a two-socket platform, with half of it for the customer's stuff and half of it for the network stuff. So how do we create an environment where they can host their virtual machines and we can host the network virtual machines in one environment, and we support it, we manage it, and we operationalize it? That's a challenge as well. Okay, so just quickly, I've only got a couple of slides left. What did we do drilling down into the hardware? As Glenn talked about, we originally launched with low-end devices, and working with Intel we found we had to move up. And we did it for cost reasons. We did it for cost reasons as well, yeah, different cost reasons there. But we're moving to Denverton now, and 2.0 is launching with that. We're moving to Skylake-D as well on the high-end devices. On the hardware side, we originally launched a low-end device with about four ports. They want more; they want at least eight ports. So we're doubling the amount of port capacity, with a mix and match of DPDK and SR-IOV. On memory, we're having to put in faster RAM, faster solid state, and more disk, because they want more disk space. I think our original high-end device had about three terabytes; we're now jumping to eight, maybe even more.
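The in-service scaling ask mentioned a moment ago, a 10-meg machine growing to 100 meg, essentially reduces to picking a bigger VM flavor and resizing without a truck roll. A minimal sketch of that mapping, with entirely hypothetical flavor names and capacities:

```python
# Hypothetical flavor table, smallest first: (name, vcpus, ram_mb, max_mbps).
# Names and sizes are illustrative, not a real deployment's flavors.
FLAVORS = [
    ("vnf.small",  1, 2048, 10),
    ("vnf.medium", 2, 4096, 100),
    ("vnf.large",  4, 8192, 1000),
]

def flavor_for(mbps: int) -> str:
    """Pick the smallest flavor whose throughput cap covers the ordered tier."""
    for name, _vcpus, _ram, cap in FLAVORS:
        if cap >= mbps:
            return name
    raise ValueError(f"no flavor supports {mbps} Mbps")

print(flavor_for(10))   # vnf.small
print(flavor_for(100))  # vnf.medium -- the 10-meg-to-100-meg upgrade is a resize
```

The hard part the talk points at isn't this lookup; it's doing the resize while the chain stays in service, which is where the community work comes in.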
Well, because they want to put storage and caching on it. Storage and caching, and, I guess, RAID 5 and RAID 6 redundancy as well. It also lets vendors go into wider markets, which is another thing, too. So that's some of what we've looked at. And I guess on the final slide before we take some questions: we're not just offering this at the edge. We're offering a complete solution in the cloud, on white box, and also on purpose-built devices. And what we're seeing with customers is they want a mix of all three. They don't just want one: they'll put a purpose-built device in some of the smaller sites and a white box in some of the larger sites, and then they might put some firewalls in the cloud as well. We're six minutes over time, I'm afraid. Thank you very much. It was my fault too. So, thank you, everybody.