All right, all right, come on, keep coming in. Well, thanks for coming, everyone. I appreciate it. My name is Dan Wendlandt. This is where I usually say I work at Nicira, but now I say I work at VMware. Either email still works, so it's perfectly fine. I'm the OpenStack Quantum PTL and also, obviously, a core developer. If you're interested in updates on Quantum, you can follow me on Twitter. First off, I know there's a Quantum design session going on right now, too, but are there any other Quantum core devs here who want to just raise their hands and identify themselves? That's good. Everyone's up there getting work done, except for you, Edgar. You should be up in the design session, man. OK, well, I guess if you have questions, you can find Edgar afterward as well.

Cool, so the theme of my talk is taking OpenStack networking to new heights. There was one really well-known skydiving incident this weekend, but there's actually another one, too, which is the first, as far as I know, Team OpenStack Quantum freefall experience. This is how we celebrated Folsom. That's not actually me. I'm way too wimpy to do this, but two of the other core devs did it, so I figured I'd open with that.

So, what we'll talk about today. Well, how many of you have seen a talk on Quantum before? Can you just raise your hands? OK, all right, so I'll actually go fairly slowly then, just to make sure people are on the same page. Then we'll talk a bit about project status. We won't really have time to dive into a lot of the different scenarios in detail; I'm just going to flash a bunch of diagrams in front of you that let you understand the different modes we see people deploying Quantum in right now. Then we'll talk a bit about looking forward, and hopefully have some time left over for questions.

OK, so the main motivation for Quantum. There are two big reasons why we created the Quantum project in the first place. The first is that real enterprise applications have very complex networking requirements. They require multiple tiers that are isolated with multiple subnets. Maybe you need firewalling policies, an intrusion detection system, et cetera. But the reality is that most cloud platforms, things like Amazon or OpenStack with basic Nova networking, have a very, very limited model in terms of what someone can do to configure their network and map it to what their application was expecting. So the first of the two things that Quantum introduces is a tenant-facing network API. We'll go into details about this API later, but just like a tenant can go to Nova and say, create server, it can go to Quantum and say, create network. And not just create network, but use this subnet of IP addresses on top of it, because I want to take this application that I built in my data center and move it to the cloud, and I want the network topology to map exactly. You can also, for example, uplink these networks to the internet if they need external connectivity. You can insert advanced services, things like routers, firewalls, VPN, intrusion detection, et cetera. You can also do things like monitoring to detect uptime, packet statistics, et cetera. So if you think about the things most people would assume if they were deploying a rich application in their enterprise data center, we need to bring those same capabilities into networking within OpenStack. So that's reason number one for Quantum.
So you can actually use Quantum, for example, to build rich multi-tier applications with firewalls and all kinds of different policies. The second thing is that cloud really stresses a network, particularly if you're building a cloud at scale. And really, there's no other reason to build a cloud. So first off, you have really strong multi-tenancy requirements. You need very strong isolation. But traditional mechanisms for multi-tenancy are pretty limited. I don't have time to say everything that's wrong with using VLANs for multi-tenancy, but everyone knows it's not just the numeric limits. It's also that switches tend to have much lower limits even than the numeric limit, that VLANs are only valid within a particular layer 2 segment, et cetera, et cetera, et cetera. There's also the fact that traditional networking gear is designed around a model that's a lot about manual provisioning: someone actually SSHes into a switch and enables some VLANs. Everything about cloud, the entire point of cloud, is to be on demand. If you have a human in the loop when you're provisioning an application, you've already lost in the cloud world. Additionally, the idea of cloud is to let you build large pools of capacity and potentially use the compute capacity wherever it exists. And yet that's actually not how networks work. Networks tend to give you an IP address according to whatever physical pod you're located in. That would mean you wouldn't necessarily be able to migrate, for example, a VM from one part of your data center to another. So again, the point is that cloud really stresses networks in ways they weren't traditionally built for.

So what you want is some super new technology. Now, I'm not going to say what technology, because Quantum's not about a particular technology. It's about a platform and a plug-in architecture that lets you plug in different back-end technologies. So I have my list of buzzwords up here. You can pick your favorite buzzword, find out what people tell you that buzzword means, and you can use those buzzwords or any combination of them, or whatever vendor, technology, et cetera. Open source, closed source, we'll talk about that later. And you can leverage them to expose those generic Quantum APIs. Does that make sense at a high level? All right, let's keep moving.

OK, so now we'll talk about what Quantum is. Quantum is another independent service within OpenStack, just like Nova or Cinder or whatever. The basic model of an OpenStack service is that you have a generic tenant API that can be spoken to directly with code, via a CLI, or via a GUI like Horizon. Those hit generic logical APIs that aren't about how the resources are physically instantiated; they're just about how they're logically consumed. And the cloud operator can select different back-ends. So, for example, on compute you may choose KVM or Xen as your hypervisor. You may choose the OVS plug-in on the network. You may choose Ceph or any other storage technology on the back-end. This is how you should think about every OpenStack service, and about Quantum in particular. So first, we're going to talk about that first part, which is the API. As I mentioned, Nova's core abstraction, which we're all pretty familiar with, is the virtual server. You can make an API call that says, create me a server with these properties. Quantum is very similar. You can say, create me a network with these properties.
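To make that concrete, here is a minimal sketch of what that call can look like in code, using the python-quantumclient library from the Folsom era. The credentials and endpoint URL are placeholders, so treat this as illustrative rather than canonical:

```python
# Minimal sketch: "create me a network with these properties" against the
# Quantum v2 API via python-quantumclient. Credentials/URLs are placeholders.
from quantumclient.v2_0 import client

qc = client.Client(username='demo',
                   password='secret',          # placeholder credentials
                   tenant_name='demo',
                   auth_url='http://controller:5000/v2.0/')

# The v2 API wraps each resource in a dict keyed by its type.
net = qc.create_network({'network': {'name': 'web-tier',
                                     'admin_state_up': True}})
print(net['network']['id'])  # Quantum returns the new network's UUID
```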
A network in Quantum corresponds to essentially a logical VLAN, an isolated layer 2 segment, that you can then associate subnets with. Subnets represent the IP address ranges that VMs will be allocated IP addresses out of. They also represent associated configuration: DNS configuration, host routes, that sort of stuff. The important thing to think about is that these Quantum networks are completely isolated and multi-tenant. In the same way that it doesn't matter that two VMs are running on the same physical server, because they're completely isolated, two Quantum networks may share the same physical network substrate but are completely isolated from each other. The tenant gets this very simple, clean, logical model, where they can think they're the only tenant in the world, and they can build their network topologies as if they actually owned the data center resources themselves.

So here's an example, just to talk through how some Quantum API calls might go. You can create a network, associate a subnet with it, create another network, associate a subnet with it. The important thing is that Quantum actually supports overlapping IP addresses. Take two tenants, for example, who want to use the same set of internal IP addresses for their applications; say they were using RFC 1918 space in their own data centers and want to bring those applications over. They can continue to use those IP addresses. They don't need to reconfigure all of their applications. This is actually something that's very important in practice. And then you can spin up VMs, and as you do, you can define the set of NICs that are going to be on each VM and what network each NIC attaches to. So I could spin up one VM on this network and one VM on that one, and they're going to get IP addresses allocated out of the subnets associated with those networks. I could even spin up a VM with two NICs, one on each network. And then we're just starting to venture into this world of more advanced network services. We'll talk more about this later, but you could instantiate a router, which is an L3 forwarding device, and plug that into networks as well. And the router, for example, may be applying SNAT or DNAT policies or some type of firewalling. One thing that's very important to think about is that nowhere in this did I tell you how I implemented any of these things. I didn't say that tenant A's net1 mapped to a VLAN, because it may map to a VLAN or it may map to something else, based on the plugin. Similarly with the router: that could actually be a virtual machine running Linux routing software, or it could be something totally different. The Quantum logical API is completely decoupled from the physical implementation underneath.

Like all OpenStack services, we also have a notion of extensions. This is kind of the OpenStack way of handling the tussle between trying to provide uniformity despite different technical back ends, and the ability to let people actually innovate and take advantage of new features. So in Quantum, we use API extensions for two reasons. First, to trial new functionality. Sometimes when you're designing an API, you don't get the abstraction right the first time, so what we don't want to do is bake an abstraction in on the first pass. First, we release it as an extension.
And then, if that's proven to be valuable and proven to be sufficient, we'll move it into what we call the core API. The second reason is that in some cases there will be features implemented by one plugin that will never be implemented by most other plugins. For example, it's a very vendor-specific technology, but the cloud operator has decided they actually want to expose that technology to tenants. It could be something that provides quality of service or SLA guarantees. It could be security filtering, NetFlow. You get the idea. Anything you would traditionally have configured via manual access to a firewall box or a switch or a router, you can expose via the Quantum API. And if it's not part of our core API, plug-in writers can create extensions to expose it. The important thing about extensions, and the reason I like them, is that sometimes an OpenStack project runs the risk of becoming vendors fighting against vendors. But ultimately, it's the deployers of OpenStack who get to decide what extensions they want to expose. I think over time, what that effectively does is let the market decide which abstractions people really think are useful. Because otherwise, it's just one vendor trying to make the abstractions look more like their stuff than other vendors' stuff. And that's not a game I like to play, at least. So extensions give that flexibility and outsource the ultimate decision about the right abstractions to the cloud operators themselves. Extensions can also be used, as I mentioned earlier, to... oh, it just happened there. Someone have a clicker? OK. They can also be used to expose entirely new services, things like the routers or whatever.

So that was the abstract API. That's going to be the same across different Quantum deployments, modulo the extensions. What we'll talk about now is the plug-in architecture. This is a high-level logical diagram. As I mentioned, you have this uniform logical API for all clients. That includes the core API, which is the basic CRUD operations on networks, ports, and subnets, plus whatever API extensions are supported by that plug-in. And then you have external tools, tenant scripts, GUIs, other orchestration code, that call this logical API. These API calls essentially get passed into the plug-in. And the plug-in is really the piece of code (I tend to think of code as people) that decides how to map these logical abstractions onto the physical world, depending on the technology and the strategy that plug-in has chosen. A simple way to think about a plug-in, in a completely generic sense, is that it's going to go out and talk to the switching infrastructure adjacent to each Nova compute node. It could be that it just talks to the virtual switch. It could be that it talks to the virtual switch and the physical switch. Or it could even be that it just talks to a physical switch if, for example, Nova is using a technology like VEPA or VN-Tag that maps VM NICs directly to the physical network, bypassing a vswitch. And then each plug-in is going to make certain assumptions about the physical network. This is actually a very important thing to understand when you're looking at one plug-in versus another. A simple plug-in might assume that all VLANs are trunked everywhere, while other plug-ins, for example ones that use overlays, may say, well, it doesn't really matter what the physical topology of your network is; you don't have to touch your hardware, we'll overlay on top of it. That can be one of the key differences between different plug-ins.
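To give a feel for what "the plug-in decides how to map logical onto physical" means in code, here's a deliberately tiny, hypothetical sketch. This is not the real plugin contract (in Folsom that lives in the Quantum source tree and has many more methods and database responsibilities); it just shows the shape: logical CRUD calls come in, and the plug-in applies its realization strategy, here a trivial VLAN allocator.

```python
# Hypothetical, stripped-down illustration of the plug-in idea. A real
# Folsom plugin implements many more calls (ports, subnets, etc.) and
# persists state; this only shows the logical-to-physical mapping step.
class ToyVlanPlugin(object):
    def __init__(self, vlan_range=range(100, 200)):
        self._free_vlans = list(vlan_range)   # pool this deployment may use
        self._networks = {}                   # logical net id -> VLAN id

    def create_network(self, context, network):
        # This plug-in's strategy: realize each logical network as a VLAN.
        vlan = self._free_vlans.pop(0)
        net_id = 'net-%d' % vlan              # stand-in for a real UUID
        self._networks[net_id] = vlan
        self._program_switches(vlan)
        return {'id': net_id, 'name': network['network']['name']}

    def delete_network(self, context, net_id):
        vlan = self._networks.pop(net_id)
        self._free_vlans.append(vlan)

    def _program_switches(self, vlan):
        # A real plug-in would talk to vswitches and/or physical switches
        # here; an overlay plug-in would build tunnels instead.
        print('would trunk VLAN %d to the relevant switches' % vlan)
```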
So here's something I like to use to really emphasize how there could be any back-end implementation to Quantum. Some are good, some are bad. The simplest one in my mind would be: you get an API call, you dump that JSON into an email, it goes to your network guy or girl, that network guy or girl does what they've always done in the old world, which is SSH into a switch and set up VLANs, et cetera, and then they send an email back and the web request returns. Obviously, this is a terrible, terrible idea. But it gets to the point that Quantum doesn't specify what actually happens on the back-end. That's ultimately the plug-in. And you can think of different plug-ins as being different strategies for how to solve the problem of automatically provisioning these network topologies.

So when you think about Quantum plug-ins, there are all kinds of trade-offs to consider, just like when you're buying a car: different features, different quality, different aesthetics. Scalability, forwarding performance, which hypervisors they're compatible with, and again, what assumptions they make about the physical hardware. Do they assume the hardware is from a particular vendor? Do they assume a certain physical topology in your network? What manageability and troubleshooting tools come with them? What advanced features do they have that may be exposed by API extensions? Testing, HA from both a data plane and a control plane perspective, open source and free versus paid. If you look at the plug-ins we have, they span the entire gamut. And ultimately, it's about giving cloud operators the flexibility to make those choices.

As for the plug-in ecosystem as it exists today, there are two common open-source plug-ins, the Linux bridge and the Open vSwitch plug-ins. There's also another one called Ryu, which is built around an open-source OpenFlow controller. And there are a couple of vendors behind plug-ins as well. The Cisco, Nicira, and NEC plug-ins are merged into the main repo, so they're officially supported by the core team. There are a lot of other people who have announced additional plug-ins; I'm not sure whether they're released yet or just demos at this point, but they're all people subscribing to this model of implementing a Quantum plug-in, and I think it shows really good momentum for the community.

OK, so project status. Before we actually talk about features or what was in the Folsom release, there's one thing that to me is the most exciting thing about Quantum in the past six months, which is that we've had extremely strong growth in getting not just new people contributing code to the project, but new core developers. Nicira, Cisco, and Rackspace have been involved for quite a while at this point. But just in the past six to eight months, we've had Red Hat, DreamHost, IBM, NTT, NEC, and other players really step up and start contributing to the project. To me, this represents a move for Quantum to more of a true community model, which is something I think is very exciting. So six months ago, if you were at the talk, we had just released Essex.
And Essex was our first release as what's called an incubated project. So we were an OpenStack project, but not what's called a core OpenStack project, meaning the core teams, the docs team, et cetera, did not officially support our stuff. We were on our way to becoming core. So six months ago, we had a V1 version of our API that was really just about defining layer 2 connectivity and letting VMs plug into different L2 segments. Not about, for example, IP address management or anything else. And the second thing is, we only really delivered on one of the two goals of Quantum that I mentioned before. The first goal was the tenant-facing API, and V1 of the Quantum API actually wasn't truly tenant-facing: Nova proxied calls on behalf of tenants. So tenants really couldn't define their own rich network topologies. It was more a model where the administrator could define topologies on their behalf. The benefit we did have in the V1 version of Quantum was the different pluggable backends, and this is in production at several early adopters. For the most part, their driving motivation for deploying Essex was being able to plug in different network backends, to, for example, avoid limitations on VLANs, things like that.

Now, with Folsom, we're really excited, because this is our first core release. This means we're official; just like all the other projects, we're an officially core project. We released V2 of the API. There's actually a pretty substantial change to the API in that previously, IP address management had been handled by Nova. We wanted to change that for two reasons. First off, there are things other than VMs that may be consuming IP addresses: routers, load balancers, additional network services. So IPAM needed to be pulled out of Nova. The second thing is that Nova actually had a very limited model for IPAM. It didn't allow overlapping IP addresses, even on different layer 2 segments. So we pulled IP address management, and the associated DHCP functionality that injects those IP addresses into VMs, into Quantum in the Folsom release. We also integrated with Keystone. Thank you, Joe. That was obviously a critical part of actually being able to expose an API to a tenant, not just to another OpenStack service. And we integrated with Horizon. Anyone on the Horizon team here? Thanks for the help. They were really great about that. What's that? Gabriel sent you one. OK. We also updated the CLI and added a lot of new functionality there.

We also had two very important extensions. We had a good number of extensions, but there are two I really want to highlight here, because they correspond to things people commonly did with Nova networking. The first was the L3 router API. This is the idea that you can create a logical construct that connects multiple isolated layer 2 segments, provides basic routing between them, and can act as a NAT uplink to an external network. Along with that basic L3 forwarding and NAT, there's a notion of floating IPs in Nova, and this is how Quantum ends up providing the equivalent functionality. A floating IP is the notion that you can have VMs on private addresses behind a router, so they're protected, and then you can allocate a public IP that's one-to-one mapped to a particular VM. So we have L3 and floating IP support that's equivalent to Nova's.
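Sticking with the same hedged python-quantumclient sketch as before (the names and UUIDs are placeholders, and the external network would normally be created by the operator), the L3 and floating IP flow looks roughly like this:

```python
# Hedged sketch of the Folsom L3/floating-IP extension flow with
# python-quantumclient. 'qc' is the authenticated client from the earlier
# sketch; ext_net_id, subnet_id, and port_id are placeholder UUIDs.
router = qc.create_router({'router': {'name': 'tenant-router'}})
router_id = router['router']['id']

# Uplink the router to the operator's external network for SNAT...
qc.add_gateway_router(router_id, {'network_id': ext_net_id})

# ...and attach the tenant's private subnet behind it.
qc.add_interface_router(router_id, {'subnet_id': subnet_id})

# Finally, one-to-one map a public (floating) IP to a VM's port.
fip = qc.create_floatingip({'floatingip': {
    'floating_network_id': ext_net_id,
    'port_id': port_id}})
print(fip['floatingip']['floating_ip_address'])
```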
There's also another extension for something we call provider networks. In most cases, when a tenant wants to allocate a network, they're allocating it from a pool of possible networks that the tenant could use. Say you have a simple plug-in and you're using VLANs for isolation: there's some range of VLANs that are valid in your physical network, and when a tenant asks for a new Quantum network, you allocate one of those VLANs for them. But in some cases, a cloud operator wants to create a Quantum network that maps directly to a physical network. You'd use this, for example, if you want a Quantum network that someone can plug into to connect directly to your physical network infrastructure and go to the internet. This is a pretty common model in a lot of the clouds we deal with. So if a particular VLAN or a particular physical network is meaningful to the provider, and they want to let tenants plug into that network, they can create what's called a provider network: a logical network directly mapped to a particular physical network.

There's also functionality around quotas, which I'd say is very important. If you only have so many resources in your network, you can't let someone create an infinite number of networks. And notifications as well, which can be used for billing or basic monitoring. The quotas and the notifications are the kinds of things that are in most OpenStack services, and we felt we needed them by the time we were a core project.

So here's a little walkthrough. I decided against a live demo, probably a good idea in hindsight, but I wanted to walk you through the flow a bit in Horizon, which is obviously the OpenStack GUI. You'll notice something new when you're running Quantum: you get a Networks tab. You can click on that and see a list of networks. Now, these may be networks that were created on your behalf by an administrator, or networks you decided to create yourself. So you can create a new network. You can specify not just a network name, but the network address, and whether it's v4 or v6. This doesn't even expose all of the options; you can configure whether it has a gateway, whether it uses DHCP, et cetera. And then you create those networks, and when you launch an instance, you can choose what network to plug it into. So this is a subset of the Quantum functionality, but it's all we got around to exposing in Horizon right now. And this is really the core feature set: being able to create multiple networks, create interesting network topologies, and spin up VMs on different networks. Oh, actually I have another slide later.

So one of the main questions we actually get with Quantum becoming core is: what happens with Nova network? I've spent a lot of time talking to Vish, who's the PTL for Nova, about this. And our general rule is: no forklifts. This is a critical time for the OpenStack community in terms of convincing people that we're not going to just redesign things and tell them they have to rip their old stuff out and put new stuff in. So essentially what we've said is that we're going to freeze Nova network from a new-feature perspective. No new functionality goes into Nova network.
And then what we're going to do is make sure Quantum is updated to cover all the key use cases that Nova network supports. There are still a couple of gaps we need to handle in Grizzly before that's really, truly happened. And even then, because we're being extra cautious, we're not going to get rid of the Nova network stuff in Grizzly. What we'll likely do is deprecate it and target removal for an H release. So the number one takeaway is: don't panic. Your existing deployments with Nova network aren't going away, and it's not going to be immediately deprecated. And to some degree, Nova will probably always keep some network functionality, at least enough to do basic flat networking.

The other main question I get is: should I start using Quantum now that it's core? You'd probably expect me to say yes, yes, yes, yes. My usual take, though, is actually a little more cautious, which is to say: go back to the reason we created the project in the first place. Do you have demand for an API to create rich network topologies? Are you limited by that today? Or is the flat networking model in Nova perfectly fine for you? Same thing on the physical side: are you being constrained by issues like the number of VLANs or the size of your layer 2 pods, those types of things? If you're running into those issues, I'd say go for it. Quantum is designed exactly to tackle a lot of those issues. But I'm not going to tell you to go to Quantum just because I think it's cool and I spend all my time thinking about it. If Nova network is fine for you, feel free to stay on it, at least for a couple more releases.

So if you want to take Quantum for a spin, we've got admin documentation up here, with Ubuntu and Red Hat deployments covered in the admin docs. I will kindly request that you please read the entire document. Some people tend to say, oh, I've read the docs, but they haven't really read the docs. And when you read the docs, please come back to us. We're very, very happy to get suggestions like, oh, I was looking for this thing in this section, but it was way down in that section, or, you actually did miss this. Feedback on docs is super valuable, as are bug reports, obviously. And if you're a developer, Quantum has been integrated into devstack for a while. So you can just download devstack, look at this page for exactly what you need to put in your localrc (I've sketched the rough shape below), and just go for it.

The other great way to get more familiar with Quantum is that my team, the Nicira team, is doing a hands-on Quantum deployment workshop Thursday, 9am to 10:30am. And I guess this is Manchester E, right? Sounds good. So what this will be: we actually have our own internal cloud running OpenStack and NVP, which is our technology, and on top of that we'll be deploying little mini labs that you can use to have your own isolated setup of two hypervisors, a network node, and all of the OpenStack services. You can then do a basic setup, play around with it, and ask us questions about how you'd deploy a certain scenario.
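For reference, the localrc stanza for enabling Quantum in a Folsom-era devstack looked roughly like the following. Treat this as a hedged sketch: these are the service names the devstack docs used around this time, but the linked page is authoritative and the details change between releases.

```
# Sketch of a Folsom-era devstack localrc for Quantum; check the devstack
# docs for the current incantation before copying this.
disable_service n-net        # turn off Nova network...
enable_service q-svc         # ...and enable the Quantum server,
enable_service q-agt         # the plugin agent on the compute node,
enable_service q-dhcp        # the DHCP agent,
enable_service q-l3          # and the L3 agent for routers/floating IPs
enable_service quantum
Q_PLUGIN=openvswitch         # the open-source OVS plugin
```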
OK, so I'm going to go through this next stuff pretty quickly. The idea here is more to give you a general sense of how Quantum works when you're actually deploying it, and of the different scenarios you can deploy it in. So this is a basic physical network layout. You can tell it looks a lot like what you would do in a Nova network scenario where you're using the VLAN manager. On the right, you're going to have some kind of controller node. Just like you run all of the other Nova API stuff there, you're going to run the Quantum server there. The Quantum server is both the generic API code and the base plugin code that takes those calls from the API, dispatches them, and goes and talks to the various switches. You're also going to have a set of hypervisors running nova-compute and potentially an agent specific to your plugin; this depends on what plugin you're using. And if you're using the layer 3 and DHCP functionality, in a production deployment you'll likely have a separate node that runs not just the plugin agent, but also something called the L3 agent and the DHCP agent. This is how we inject routers and DHCP into your layer 2 networks. So traffic between VMs flows on your data network. Traffic between a VM and the router, for example, flows on the data network from a compute node to the network node, and then out onto the external network and into the physical world. And obviously all of the nodes are connected via a management network; that carries, for example, the communication between the agents and the main Quantum server process. Again, this isn't answering all of your questions; I just want to give you the high-level idea.

So there are two main models we see people deploying Quantum in. One says: I actually only want the administrator to define network connectivity, but that administrator needs a lot more flexibility when building the network topologies for its tenants than what Nova gave it, or wants to use a different back-end technology than what Nova has. Operationally speaking, this is a model similar to what you may have done with nova-manage, in that administrators define the networks and then tenants decide what networks to plug into. The other model is the true self-service networking model, where you're actually exposing the Quantum API to tenants, and tenants make the calls to create networks, create subnets, et cetera. So those are two different models, depending on what your strategy is for your cloud deployment. And obviously you can mix and match them. For example, a common thing is for admins or providers to create a default setup on behalf of the tenant. The tenant shows up and they already have a router with maybe one network behind it, and then the tenant can decide: oh, I actually have this application that needs three networks behind this router; I'm going to go add a network and use this subnet, et cetera.

So I'm going to move through this stuff pretty quickly, but this model basically maps to exactly what the Nova flat or flat DHCP manager does. The point of showing these diagrams is that you can do all of the topologies Nova supported, and you can actually do a lot more by taking these base constructs of networks, ports, and routers and mixing them together. You can do a single flat network. You can do multiple flat networks, for example if you have a couple of different classes of applications: maybe you have your production here and your test and dev here, but you want those isolated.
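Before the next few topologies, it's worth making the "VM with a NIC on each network" idea concrete. With the Folsom integration, Nova accepts Quantum network UUIDs at boot time. A hedged python-novaclient sketch, where the image, flavor, and network IDs are all placeholders:

```python
# Hedged sketch: boot a VM with one NIC on each of two Quantum networks,
# via python-novaclient. All IDs below are placeholders.
from novaclient.v1_1 import client as nova_client

nova = nova_client.Client('demo', 'secret', 'demo',
                          'http://controller:5000/v2.0/')

server = nova.servers.create(
    name='dual-homed-vm',
    image='IMAGE_UUID',
    flavor='FLAVOR_ID',
    # Each entry becomes a vNIC; Quantum allocates an IP from the
    # subnet associated with each network.
    nics=[{'net-id': 'FRONTEND_NET_UUID'},
          {'net-id': 'BACKEND_NET_UUID'}])
```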
This is one we see as well, which is basically an extension for a service provider that originally just provided the flat model, but then wants to ease their tenants into also being able to use private networks. So a VM might have an interface on the shared public network, but also an interface on a back-end network for back-end communication. Or, like the VM on the far right, you could even have VMs that are completely disconnected from the public network and connected only to private networks. So again, you get the idea: all we're doing here is taking the core API constructs and rearranging them.

This next one is an example where we actually pull in the layer 3 router. It maps to what Nova network does with the VLAN manager: there's conceptually one router, each tenant gets their own network behind it, and tenants can allocate floating IPs from the public address space and map them one-to-one to a virtual machine for public access. You can also have an even richer model where tenants create their own layer 3 routers and have their own isolated networks. This is what lets tenants create networks with overlapping IP addresses that are still connected to the internet. It's a model that maps more closely to something like Amazon VPC or CloudStack's advanced networking model, if you're familiar with those. And I say per tenant, but really the routers could be per application; tenants can have multiple routers here.

A couple of slides looking forward. Where are we going from here? As I mentioned, I view the work the Quantum team needs to do in Grizzly as dividing into two areas. First, we need to close some gaps. There are a couple of things we still need to tackle to play nicely with, and really replicate, all of the existing Nova network functionality; in particular, security groups and the metadata service. Nova's model, as I said before, was that IP addresses can never, ever overlap. Quantum actually supports overlapping IPs. So if you're trying to use Nova's security groups or the metadata service with overlapping IPs, they're actually incompatible. Right now we basically have an option that lets you disallow overlapping IPs if you choose to use those services, but in Grizzly we're obviously going to tackle that properly. There's also the notion of multi-host, which is not to be confused with being able to have multiple compute nodes. This is something in Nova where you can run the DHCP and basic NAT capabilities on the compute host itself. That can be nice for smaller deployments, in terms of a simpler HA and scaling model, as opposed to pushing traffic through a network node, which could be a single point of failure or a bottleneck. The second half of what we're focusing on, and what people are upstairs arguing about right now, is more advanced services like load balancing and VPN. How do we start letting people not just create VMs that do these things and plug them into topologies, but actually give people APIs so they can programmatically orchestrate advanced network functionality?

OK, so I want to tell you about a couple of other talks. I think one of these might have gotten moved and has already happened; I forget. But these are all people giving talks about their OpenStack deployments who deployed Quantum on Essex. So I think, to varying degrees, they'll be talking about Quantum.
I know the Nicira talk and the eBay talk, in particular, will be pretty Quantum-focused, and I expect it to come up in the Rackspace and DreamHost talks as well. Again, several of these deployments have actually been running in production with Quantum for six months plus now. So that was the bleeding-edge crowd; now that we're at the core release of Quantum, we expect the set of people running it in production to go up a lot.

So, last slide, just key takeaways. Quantum exists for two purposes, to bring advanced networking to OpenStack. First, by exposing rich network APIs that let tenants build complex networks and services, mapping to the services they could have deployed in their own enterprise data center. Second, it has that plug-in architecture that lets you leverage different back-end technologies: you get to pick whatever technology you think is going to best solve your cloud networking problem. Like I said, Folsom is the first release where we're considered a core OpenStack project. We're really excited about it. We've been hearing from a lot of people who are planning on putting Quantum into production with Folsom, which is really exciting; I think it's going to represent a really big jump for the project. And finally, if you're a developer and you're interested in networks, come talk to me. We're always looking for new people on the team. So thanks. And Ron Burgundy; I couldn't be in San Diego without making a reference.

All right, I think I finished with some time for questions, because I talked really fast. So, any questions?

"The workshop you walked through, that's tomorrow?" Thursday. Thursday. "Is that going to be Open vSwitch-based?" Yeah, it's the entire open-source stack. It's all based on Open vSwitch.

"So right now, are you saying that if there were a plug-in that used OpenFlow, a vendor who just did hardware could just sell an OpenFlow switch?" Yeah, so that is true. Most vendors actually write plugins that are specific to their technology at this point. There's definitely scope in the plug-in mechanism for more of a driver model or other southbound interfaces that are meaningful, and I think OpenFlow would be one attempt at that. But in practice, I think that's actually going to be somewhat challenging, in that the plug-in is going to have to make assumptions about what OpenFlow version you're using, what table sizes you have, et cetera.

Yeah, so there's no actual rule that a plug-in has to be single-vendor. In fact, the Cisco plug-in right now is able to speak to multiple types of switches: it will configure the UCS switch and the Open vSwitch running on a hypervisor. So a plug-in is not necessarily one vendor. Think of a plug-in more as a strategy for how you're going to map logical networks to actual packet forwarding. As one example, and no one's ever done this because I don't think it's actually that useful in practice, but people have said: well, what if I just had a VLAN plug-in that could talk to any vendor's physical switch and knew how to configure a VLAN on it? That would actually work. That would be more of a driver model: the overall strategy for the plug-in is, I'm going to use VLANs to isolate logical networks, but I can talk to different types of hardware. So no, no, no.
So if you query /extensions in Quantum, you'll get a list of the extensions supported by the plug-in that's currently running. Yeah, yeah. So yeah, it's something we've talked about. Any other questions? All right, well, thanks. Happy to chat with you afterwards.
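As a closing footnote to that last answer, extension discovery is just another API call. A hedged python-quantumclient sketch, reusing the 'qc' client from the earlier examples:

```python
# Hedged sketch: ask the running Quantum server which API extensions the
# active plug-in supports ('qc' is the client from the earlier sketches).
for ext in qc.list_extensions()['extensions']:
    # e.g. 'router', 'provider', 'quotas', depending on the plug-in
    print('%s: %s' % (ext['alias'], ext['name']))
```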