OK, thanks very much, everybody, for coming back. What I'm going to do is talk a little bit about our journey. First, two seconds on what "our" means. My name is Don Bowman. I am the CTO of Sandvine. Sandvine is a network equipment manufacturer, so we're used to making things that look like the box in the upper right corner: physical hardware. I'm going to talk about how we got to where we are in NFV, specifically with OpenStack, some of the things we found successful in getting that launched inside the company, and some of the things we found challenging and what we're doing about them.

Today we make custom hardware equipment, and it's pretty high scale. That box is two rack units, and it has four 100-gigabit interfaces, eight 40-gigabit interfaces, and eight 10-gigabit interfaces. That's a lot of bandwidth, and really high bandwidth is something OpenStack is not yet great at. There's a lot of work underway with SR-IOV and PCI passthrough and so on, but this is something a lot of people in my company were quite skeptical about. They said, well, this will never work, it'll never have the density, et cetera, et cetera.

The second thing is, we've been in business for about 15 years now, and we have a lot of legacy. We have a really big lab environment: the better part of 10,000 servers doing traffic simulation. We need to simulate the public internet in order to test our equipment, because our boxes sell to consumer internet providers. So I need something that can generate a million users doing web page downloads, SIP, peer-to-peer, and so on. We've built something that is pretty reminiscent of OpenStack: if you think of OpenStack in terms of Ironic for bare-metal PXE boot, plus Neutron, we've done that ourselves. So there were a lot of people saying, why would I bother switching? I've already got all this, it does everything, and it's highly automated. Literally, nobody goes into our lab. The doors are locked, it's lights out, and except for dead-body collection, that's all it does. The developers sit at their desks around the world (we have a couple of different development centers) and use it remotely. It's dual purpose: during the day, people use it interactively to test the latest and greatest software, and at night, which has become an increasingly slim slice of the world as we've added more development centers, a whole bunch of automated tests kick off against all our branches.

The problem is, we probably would have done nothing if it weren't for the fact that our customers, who are consumer carriers, AT&T, Comcast, that type of company, came along and said: you know what? We love your stuff. We just want to see it repackaged under NFV. I'm sure that'll be a simple transformation. Thanks very much. That caused a little bit of focus in the area. When your paycheck is telling you it wants to turn left and you're headed right, you start to listen.

The other thing that's happening is our business has been getting more decentralized and more dynamic. That's true of everybody's business, but we have SEs everywhere on Earth doing demos every single hour of the day. We have customers in 90 countries today, and I can't ship demo equipment to all of them. For some of them, it's highly complex.
You need to apply for export control permissions. Other places, people just don't want to go. We recently won a customer in Sierra Leone, and a lot of people didn't want to go to Sierra Leone with the Ebola outbreak going on. But I've got to be able to do my demo somehow. So we did what everybody else does: you build your cloud.

The other thing that happened is we launched a new product, called our Cloud Services Policy Controller, which sees us providing value in an OpenStack environment for business services. Suddenly there was a whole lot of interest in cloud, and everybody was sitting on the fence: will it work, will it not? We make this box, it's really fast, I don't think it'll ever work, but we need it, et cetera.

So we decided to do what you should do in this environment: close your eyes and jump in the deep end of the pool. We did a bunch of demos throughout 2013 and showed our stuff running under Amazon AWS. That was really tough. The AWS environment was highly hostile to devices like ours that deploy at Layer 2, and at that time our product used FreeBSD. It was really difficult. So I said, let's try a different approach. We used TryStack for a bit, installed an OpenStack of our own, and said, all right, this looks like it's got some legs, let's go for it. We created our own private cloud, because I needed to be able to make some changes, and we called it Nubo, which is Esperanto for cloud. Highly original.

The first thing we did was really ready, fire, aim, and I did that on purpose. If we sat and thought and thought about this, well, OpenStack is releasing faster than I'm ever going to get there, and I'm always going to fall in love with the feature that's in the next release. We'd never pull out of the station. Let's just stand up a system and start mucking with some ini files. The worst that can happen is it crashes, and I'm the only one using it; nothing really bad is going to happen. So I didn't overthink it. We jumped in and we learned. We brought some people onto the platform, and they gave us a bunch of feedback: hey, that sucks, fix that. You burn a couple of your first users, then you go ask more people and tell them the first guy had no trouble. This really works, by the way.

So we did some experiments with the public clouds, AWS and Rackspace. It just didn't really work for our environment. I'm not saying it doesn't work for anyone or will never work at all; just for what I wanted, I needed to be able to modify some things they didn't want to give me the keys to. They were really great for demos, though; people loved seeing our stuff in Amazon and in Rackspace specifically.

So then we went out, secured a little bit of space (found a guy in town who had a couple of rack units free), and set up shop. Spent a couple of bucks, spent about a month standing up the server and tweaking it, and then did a soft launch. We have about 100 people in our sales organization, and a bunch of them are systems engineers. They're kind of a captive audience: they've got to do what we say or they don't get paid. So I said, you're coming here for your workshop, we're going to run you through this. And we launched at scale with them. The next thing we did is we onboarded our customer training team.
So within our company, the customer-facing training team also does all the internal training. Whenever I hire a new systems engineer, they run through basically the same process customers do. So I brought the trainers on board and asked: what are your problems? I couldn't write them down fast enough. Right now we force our customers to VPN through our corporate network to get access to the remote systems for training, and that sucks. It's always down, and that sucks. There's no battery backup on it, and that sucks. I don't have enough capacity. It just kept going on and on. All the stuff an elastic cloud is good at solving was their list of problems. And I said, great, have I got the solution for you. I'm lighting the system up; let's get your first couple of courses on here. They were a little skeptical, but really they had nothing to lose, because it was hardly working otherwise.

The other thing we did is we drove adoption with demos. All of our account teams are motivated by money. They want to sell more stuff, they can only sell our stuff, and the best way to sell it is to demonstrate it and make it look really easy and powerful. They said: give me some more demos, and I'll hit the road and flog it for you. And then something really surprising happened. In those demos, the customers said: hey, your product's really cool, I want to buy some of that. But also this OpenStack thing. I've got a project that's going to do that. Or, somebody else owns that, and that guy's really slow, et cetera. And they'd talk about service function chaining, NFV, SR-IOV, and say: I just really need access to a system that's already working, so I can prove out some theories, prove it to my boss, get budget for next year. So we just started handing out accounts as part of the demo process. You'd show the customer our product, and afterwards, everybody in the room who put up their hand got an account. And they tried it later. It became their lab while they were in the process of building their own lab. It didn't really replace theirs, but it really worked well. And the multi-tenancy aspect of OpenStack meant I didn't really need to worry about customer A and customer B seeing each other, because it's firewalled off.

About six months later, we have about 450 tenants. We took an interesting strategy here: generally, one user equals one tenant for us, because we didn't really need to share them. Some are shared; sometimes a customer has five people in the room and we give them all one tenant. But mostly it's worked out one-to-one. And we have about 2,000 instances running at any given point in time. I know a lot of you run much bigger systems than that, but that's ours. The thing that was unique about ours is we run about 12,000 networks, and that's where a lot of the problems came from. I'll show you the wiring in a slide or two. This is an area where I struggled a little to get the tuning right, particularly in conjunction with one of the real heroes of the deployment, which is Heat. When you hit heat stack-delete
and it starts deleting 15,000 things, bad stuff happened every single time until the fixes were made.

So, let's talk about some of our choices. There's really nothing revolutionary in this list. Remember, I jumped in with both feet and my eyes closed, so there wasn't a ton of planning that went into this, and I think that's the right approach for most of you. We picked Ubuntu 14.10; in hindsight, I might have picked 14.04, since 14.10 is not a long-term-support release. We picked KVM, really no shocker there. I didn't have any budget, after all; I'm buying this on eBay on my credit card. We picked Ceph for volume management. As I acquired my hardware (a Dell M1000e blade enclosure, and all the blades I chose were off-lease Dell M620s), every blade came with some grotty hard drive in it that I didn't want. So I chucked all those hard drives into a big Ceph pool, and I replaced the other slot in each blade with a fast SSD. That worked out really, really well for us. Our boot volumes are super fast on SSD with great concurrency, and I have a big pool of reliable storage on Ceph that cost me essentially nothing. So the odd person who wants to do something big doesn't bust me at all. We used Cinder for the boot volumes, and that's gone fantastically well; I've never had to look into it. And of course, we use Nova.

We used Neutron networking right from the start, so we never had any legacy. The system initially came up on Havana; it runs Juno now, and I'll move to Kilo in a couple of weeks. We more or less skipped Icehouse. We used ML2 with VXLAN. The Dell M1000e chassis we chose is really fantastic for this type of environment; it has really, really great networking in it. Each of the blades has 6 x 10 Gb to the backplane across multiple switch fabrics. And I just didn't want to muck around with VLANs: I need far more networks than I can get with VLANs, and I wasn't overly concerned about throughput on this system, so I don't have a hardware VTEP in there. It's VXLAN. I lose a little bit of performance, but it's simple. And then, of course, we use Open vSwitch, which has also worked out pretty well.
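As a quick aside, that ML2/VXLAN choice boils down to only a few lines of Juno-era Neutron config. This is a minimal sketch, not our exact files: the VNI range and the tunnel IP below are placeholders.

```bash
# Minimal sketch of Juno-era ML2 settings for VXLAN over Open vSwitch.
# The VNI range and local_ip are placeholders, not our real values.
cat > /etc/neutron/plugins/ml2/ml2_conf.ini <<'EOF'
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
# Plenty of segments: VXLAN is what lets us run ~12,000 networks
# without hitting the 4,094-VLAN ceiling.
vni_ranges = 1:100000

[ovs]
# This node's tunnel endpoint (no hardware VTEP in our setup).
local_ip = 10.1.0.11

[agent]
tunnel_types = vxlan
EOF
```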
And Horizon. People fell in love with Horizon. If there's anyone from the Horizon team in this room: really nice work. We have a bunch of people who aren't going to be using OpenStack for a living; it's just part of their job, and they really don't want to sit down and learn something arcane and complex. They want to open a device, and I don't know what that device is going to be. It's a tablet, it's a phone, it's a laptop. It's worked for all of them. We literally have people today doing their demos off their phones. They walk into the customer's room and Chromecast to the screen, or, which sometimes happens, they plug an HDMI cable into their phone. One of the guys on my team does all his demos that way. He brings up the Horizon interface, pinch, zoom, click, click, boom. People fall in love with that. We really, really like that tool.

And Heat. I'll talk a bit more about Heat, but Heat has really been the star for us. It turned what was a relatively complex setup of our system (not so much OpenStack itself, but my system) into something people could get right every single time in front of the customer, when they're under fire. That really worked for us.

We initially deployed Ceilometer, and then I undeployed Ceilometer. It absolutely did not work for us in any way, shape, or form. It got really confused by the highly dynamic nature of our environment, with networks coming and going constantly. It was creating stats that would record only one bin for an interface before the interface was gone, and with 12,000 interfaces it couldn't get them polled in time. So we gave up.

So, our current architecture. Again, this isn't a real big stack. It's growing a fair bit, and it's going to grow more this year, and then we'll have to face whether we want to move to different regions and start doing things with Nova cells and aggregates and all those other scaling mechanisms. But right now, it's pretty simple. We have multiple compute nodes, all running Nova, KVM, OVS, and Neutron ML2 on Ubuntu 14.10, and it all sits inside a Dell M1000e blade enclosure. For those of you who haven't seen one, it's a 10RU thing that makes a bit of noise, with A1, A2, B1, B2, C1, C2 switch fabrics. I just use the A and B fabrics, so each of my blades has 4 x 10 Gb to the backplane, which is enough for our application. Most of it was actually acquired on eBay. Believe it or not, from companies going bust or coming off lease, you can buy this stuff for not a lot of cents on the dollar.

Then what we did is put one SSD per blade, which is dedicated, and one spinning disk per blade, which is the Ceph pool, kind of a freebie thrown on the side. I didn't have a lot of expectations going in for Ceph, and Ceph has really worked well. It was not too complex to get going and has been pretty reliable. We haven't had any outages on it, and it just works for the people using it. I will have to put in a better storage system as we bring on Sahara for some of our big data work; it just won't have the performance with the disks I've got in there. But for now, it's gone pretty well, and it avoided me having to buy a custom disk tray for the system, which I was considering.

We have about 500 vCPUs online, so 500 cores, and about two and a half terabytes of RAM. That has enabled effectively my entire company to do whatever they need to do, whenever they need to do it, including customer training. All in, maybe $65,000 to $70,000, and it costs me about $1,000 a month to run. And it's made a big difference for us: it's probably saving its entire cost on a per-month basis. It's made a big difference for Sandvine in the number of demos we do per day, the number of sales conversions, even the customer training. My alternative for training was multiple hundreds of thousands of dollars of capex on each dedicated training rack, with no elasticity and no sharing: if there were two courses going on, I needed two racks, and with customers in 90 countries, there are always two courses going on at every hour of the day. And then we've got a 10-gigabit internet uplink, which means it can get data in and out to everyone around the world at a pretty good clip.
One of the things I wish I had a bit more of, though we've got some solutions for it, is address space: I only have a /25 of IPv4 space. I know for a lot of you who run clouds, that isn't even enough to get started. For us, that hasn't actually been a big deal, and I'll walk you through why in a second. I currently don't have any HA, so if it crashes during the middle of this presentation, I'll probably get fired.

So the real heroes: what drove the adoption? Heat. Heat just made it fast and simple. If you're running OpenStack and you're not running Heat or one of its equivalents, I'd really recommend you look into it. It gets the job done with not a lot of fuss. With one click, people can get something really complicated going and feel like they know what they're doing.

We made 100% of the service available via SSL. The reason is that my use case is someone flying to some random location and sitting down in a customer's room with five minutes' notice, needing to get something up on the board. If I needed anything other than SSL, like GRE or IPsec or anything like that: people have wretched firewalls out there. They're absolutely poisonous. You sit down, the guest Wi-Fi goes through a proxy server, et cetera, and you end up basically screwing the meeting. So all of the access is via SSL, and that has really worked well for us.

We invented this thing I call Sandvine Auto-Config. Our system is a little bit complicated. It's got a lot of config in it: a bunch of files all over hell's half acre, on a bunch of different virtual machine types. And people needed to be able to sit down and reliably configure our system for a demonstration. The first path I went down was snapshotting. Set up demo A, snapshot; demo B, snapshot; demo C, snapshot. And I quickly realized that by the time I got finished with demo D, I needed to update some software in demo A, because we'd released a new version of it. I was constantly spending my whole life updating all these snapshots. I said, there's got to be a better way. I'd love to marry the config to the instance after it boots, after the qcow image comes up. There are lots of tools people use for this today, like Chef and Puppet and so on. What we did is make it work out of Git. You log into our system, you use our GUI, you set it up exactly how you want it, and you just tell it: make a Git repo, save that. The next guy comes along and says: bring up the standard images and apply that Git repo. There's a little bit of swizzling we did. Git has this thing called a smudge filter (kind of a cool name) which lets it rewrite files as you check them out, so it can make some local changes for your local IP and your instance name. It's all automatic. That made it really easy for us to have a library of demos that grew at a different speed than the number of images we needed to maintain.
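For flavor, the smudge mechanism looks something like this. This is a minimal sketch of the Git feature, not our actual Auto-Config: the filter name, the @TOKEN@ placeholders, and the script path are all invented for illustration.

```bash
# Smudge script: replaces placeholder tokens with this machine's values.
# Name, path, and tokens are illustrative, not Sandvine's actual tool.
cat > /usr/local/bin/localize-smudge <<'EOF'
#!/bin/sh
ip=$(hostname -I | awk '{print $1}')
sed -e "s/@LOCAL_IP@/$ip/g" -e "s/@INSTANCE_NAME@/$(hostname)/g"
EOF
chmod +x /usr/local/bin/localize-smudge

# Wire it up as a Git filter, and declare which files it applies to.
git config filter.localize.smudge /usr/local/bin/localize-smudge
git config filter.localize.clean cat          # store files unmodified
echo '*.conf filter=localize' >> .gitattributes

# Force a re-checkout so existing files get smudged too:
rm -- *.conf && git checkout -- .
```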
One thing we really worked hard on was remote access. 100% of my users are using this outside my corporate network and outside their home network. They're using it in hotels, in airports, with their phones tethered. I really needed to make it simple to access. But the second thing, going back to what we do (because this isn't about OpenStack; this is about Sandvine making money): we make a box that needs to see consumer traffic go through it. And I needed a way that, when you're demoing in that meeting room, you could open your laptop and become a consumer on an ISP's network with Sandvine in it. Your laptop, which is located in Kuala Lumpur, suddenly needs to be Layer 2 adjacent to your setup inside OpenStack. Not your neighbor's setup, not somebody else's tenant: yours. So we invented a technique to do that which doesn't consume any public IPv4 addresses. It allows trivial SSH access, and VPN access for the purpose of having your laptop route through that system back out to the internet, so it looks like you're directly attached to it.

The next thing: we have a bit of an ecosystem of our own. We bought a vanity domain, manfee.net (it's a funny story how that came about), and we've used it as a common platform to bring some of our partners on. Our partners also fell in love with the system. They had similar problems to ours, and now they've got a platform that lets them do their demos easily, cheaply, and reliably. So now they're out there doing their demos, and every time they do, my brand is in front of the customer. We have common customers, and they're demoing with our product in theirs. That's really worked well. And then moving the product training onto the platform has driven our utilization very, very quickly. So now it's the go-to thing for everybody in the company. In the course of only about six months or so, every single day you walk around my office and you see this system up. It's made a big difference for us. So these were our heroes.

Why did Heat matter? This little diagram here is more or less one setup for one user. So this is what you need to set up. Sandvine makes three main widgets, and each widget runs as a separate virtual machine instance: our SPB, our SDE, and our PTS. I won't bore you with the specific details. The PTS, the Policy Traffic Switch, sits inline in the user's data path. It deploys at Layer 2. So what I did is I invented this concept. I call it a VPN. I know, it's revolutionary. It's just a standard Linux box, running Ubuntu 14.10 as well, and it has two bridge groups. The first bridge group goes in and out of the PTS and goes nowhere else. That's really important, because it allows my staff to do TCP replay. They can take a capture from a customer site or somewhere and replay it in there, and not worry that they're overlapping with somebody else's IPs, broadcast domain, unicast, multicast, whatever they want to do; I don't care. They can do whatever they want. It doesn't leave the box. It's isolated there. And the second one is there's a way you can get into that Linux box, and it'll turn around and route your traffic through my PTS and back out to the internet.

To do that, I have two routers, called C-router for control and D-router for data, and they run through the PTS as your next-hop route. And I run a service on the controller. When you connect to my controller via SSL with a domain name of tenant.instance.vpn.sandvine.rocks, it jumps into the namespace of that Neutron router. I'm using IP namespaces. What it does is snoop around in Neutron and say: oh, this big fancy GUID is your router. Let me join that namespace and then make it your next hop. And suddenly your laptop, with SSTP or PPTP, is as if it's directly connected to that router.
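Mechanically, the namespace join looks something like this on the network node. This is a minimal sketch of the idea, not our actual service; the router name and the instance's fixed IP are made up.

```bash
# Minimal sketch of the namespace trick, run on the network node.
# The router name ("acme-demo") and fixed IP are made up.
router_id=$(neutron router-list | awk '/acme-demo/ {print $2}')

# Neutron realizes each virtual router as a qrouter-<uuid> namespace:
ip netns list | grep "$router_id"

# A process started inside that namespace sees the tenant's networks
# as directly attached, e.g. reaching an instance on its fixed IP:
sudo ip netns exec "qrouter-${router_id}" ping -c 1 10.0.0.5

# Our SSL front end does essentially this: accept the connection, map
# the tenant.instance name to the right router UUID, then enter
# qrouter-<uuid> before forwarding the traffic.
```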
And the same for SSH. You can SSH to your instance at Nubo, and it goes straight there by jumping into the namespace. That meant I didn't have to chew up my /25 of IPv4 space giving floating IPs to everyone. One of the other reasons I was nervous about floating IPs: not everybody is perfect at setting up firewalls and servers. I've got a system with a 10-gigabit uplink to the internet and 500 people using it daily that I haven't really met. Somebody is going to put something on there and get owned. That actually happened to me last week, and it was a bit of a pain in the ass.

The other thing I did is put an Android-x86 system in there, which you can use through the Horizon VNC web interface, and it also goes through my PTS to the internet. So people can demonstrate stuff with a mobile phone: if for some reason they don't want to VPN their actual phone in, they can just bring that up and run Google Chrome or what have you.

Without Heat, I would have spent my whole life on tech support. Log into Horizon, go to Neutron, add network 1, add network 2, add networks 3, 4, 5, 6, subnet 1, subnet 2, connect this to this, create a router, add the interface to the router, add the default route. No, don't add a default route to that router. OK, now add image 1, join the network. It would take forever. And you don't want to spend your time in front of your customer pretending to know what you're doing and ultimately not. You want to be demoing your product. With Heat: every single time, click, boom, set up. Always the same, 100% repeatable.

And then the other thing with Heat is resource groups. You can just set the number of instances. The customer says, what about two PTSs? Set n equal to 2. What about 500? Set n equal to 500. And that goes back to the teardown problem: it was a little bit tough on the system, a bit complex for us, and Heat automated that. It was a real winner for us.

Some of the big wins. How did I get people within the company excited about yet another platform, in a company that has yet another platform every couple of hours? One of them: we have our own management GUI, and that team had always struggled to test it at decent scale. Some of our customers have a couple thousand of our elements network-wide, and the team developing the GUI rarely had access to a network that big to test their own thing at scale. There's talk of writing simulators, but simulators are never all that accurate, et cetera. On this system, it was really easy to set n equal to 500 and boot 500 things. And they're real. They were able to do their testing, and they were quite happy with that. So now they're singing songs of love.

Multiple customers who didn't have their NFV lab up: this was an unexpected win for me. I'd go in to some group within a large carrier that, as a whole, has way more resources than I do, but that doesn't mean any one team does. I go into a company with a $20 billion a year IT budget, but that doesn't mean the person across the table can spend any material amount of it. And they were really excited and said, well, we're working on learning this ourselves. No problem. That's really worked well, and it led to direct sales for us, which was quite good.
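That "set n equal to 500" bit is just a Heat resource group. Here's roughly what it looks like; a minimal HOT sketch, not our actual demo template, with made-up image, flavor, and network names.

```bash
# Minimal sketch of scaling instances with an OS::Heat::ResourceGroup.
# Image, flavor, and network names are made up for illustration.
cat > pts-scale.yaml <<'EOF'
heat_template_version: 2013-05-23
parameters:
  n:
    type: number
    default: 2
resources:
  pts_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: n }
      resource_def:
        type: OS::Nova::Server
        properties:
          image: pts-demo
          flavor: m1.large
          networks:
            - network: demo-net
EOF

# Two PTSs or five hundred is the same one-liner:
heat stack-create -f pts-scale.yaml -P n=500 pts-scale-demo
```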
Then there's sales via demo only, in complex geographies. I mentioned the Sierra Leone example earlier, and that was a real case for us. We've now made money in a place where I wouldn't have been able to do business otherwise. I would have had to apply for export permission to send any equipment there. I would have had to get a local partner. I would have had to TeamViewer my way through, and heaven forbid the customer bought it: then they'd want training, and I'd have to send one of my trainers there. And that sucks. You feel bad putting your staff in harm's way, but it happens. So that's really worked well for us; it's allowed us to get some extra cash out of there.

We've also had some customers where this was the only way they saw our equipment prior to sending the purchase order. We just said: look, here you go, here's a login. Muck around with it, see if it's the way you like it. And then the cash register rang. I wouldn't say that's happening every half hour, but it has happened, and it was quite good for us. What they did: they're mobile carriers, and mobile carriers have a special device called a GGSN, the gateway GPRS support node. They configured a special access point in it, called an APN, so a certain group of users' phones flowed through our system, which happens to be located in Canada. It was as if they'd bought our stuff; it was just a little bit remote, with a little bit of extra latency.

The other big win is the reliability and the scale. Everybody bemoans every two-second outage on the system, but in reality, the thing it's replacing was out almost all the time. So they do come around on that afterwards. They're in your office complaining about the four-minute outage yesterday, right before their big demo, but they'll quietly admit the other system, my alternative, is out about four hours a week.

Remote access has reduced the backlog in training and support. This is the other area: my customer training team has a backlog. I have X many trainers and Y many hours of training to deliver per week, plus the time they spend traveling, and the hours of training to be delivered were growing at a faster pace than I was retiring them. That's good when you're talking about selling equipment; it's bad when you're talking about selling service, because you have customers who aren't trained on your system who then turn around and make more support calls. By my training team becoming more efficient, they're reducing their backlog and training more users per dollar. That's saved us quite a bit.

And then the partners demoing without our direct engagement: that's really been fantastic. Now I've got a partner out there doing a demo, we're part of a demo we otherwise wouldn't have been in, and I'm hosting the platform at a cost of essentially nothing. The fraction of my $1,000 a month that goes into supporting the partners isn't even folding money. So I'm pretty happy to run the hardware for them.

Now, the challenges we have had and probably will continue to have. Neutron and Nova are not primarily NFV tools. There's been a lot of mindshare coming into this (OPNFV has driven a lot of it, and the carriers have driven a lot of it), but there are certainly still competing priorities in there. For example, in Juno there was a regression where you couldn't have a network with no subnet on it: you couldn't attach it to an instance, and the instance wouldn't come up.
And that was a problem for me, because as I just showed you, I've got a Layer 2 thing that deploys with no IP address on its interfaces. That kind of thing happens periodically. So we disabled port security. The good news is that in Kilo they fixed it, so I'm looking forward to that install: I no longer have to turn off port security globally. But that was an example of the challenges.

VLAN trunking is another challenge. This is just a bit of a square-peg-round-hole gap. My underlay is VXLAN; I should be able to do whatever I want. I should be able to create arbitrary packets and send them through. And I can, with the exception that if I send a packet with a VLAN tag on it, Open vSwitch just eats it, drops it on the floor. The reason it does that is that Neutron doesn't configure the classifier to allow tagged packets through onto the VXLAN. I consider that something that needs to be done.

Heat really stressed us on scale. I loved Heat. I love Heat. The YAML files are great, simple to express. But it allows people to ask for things my system can't consume at the speed they ask for them. So we added a bunch of patches; there were some race conditions I found. The community was fantastic. Posting on ask.openstack.org, people would give an answer. Posting on the dev mailing lists, somebody would come up with something. Opening a ticket on Launchpad, et cetera. It all really worked, so I'm pretty happy with that. But we really stretched on scale, particularly in Neutron. We had a lot of problems in Neutron where you hit delete on something big, and then you ended up reading through log files for the next couple of hours.

I showed earlier I've got that really neat, totally isolated loop for people to do TCP replay. Some people didn't get the memo on that, and they send their replay out my other interface, it NATs to the internet, and my NAT table blows up. NAT sucks, but there's nothing I can do about it; I've got my /25. So I'm going to have to keep a monitor on that.

And Ceilometer. Ceilometer didn't work with our dynamic environment. We're a little bit rare here in that our instances don't live very long. People are creating them and destroying them all the time, so I've got instances with a lifespan of about an hour, turning over at about 200 an hour, and a very high network count. Ceilometer needs work to get there.

Things we invented in order to make this work. We invented a way to do direct access. I've got a GitHub project under my personal name (I keep meaning to put it in the company name, but I keep forgetting). You can see it online under GitHub, Don Bowman. You can SSH directly to any instance in our system without using a floating IP, and if you're a cloud operator and you want to give that a try, let me know how it works for you. That's worked great for us. Everybody being able to directly access their stuff without it having a public IP has really, really worked well. It's made people pretty happy. Direct HTTP access is very similar. A lot of the things we do for demos are just somebody putting up a web page, and I didn't need to give them a public IP to do that. They just have a special URL, tenant.instance.vpn.sandvine.rocks. Boom, it joins the namespace of their router and hits it from the public side. No floating IP needed. And that's meant I've got a lot fewer of those sorts of attacks to worry about.
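Under the hood it's the same namespace join I showed earlier, and you can approximate the effect from a plain SSH client. A minimal sketch only, not the actual tool on my GitHub: the gateway host, router UUID, and fixed IP below are made up, and it assumes nc is available on the network node.

```bash
# Minimal sketch of floating-IP-less SSH via the router namespace.
# Gateway host, router UUID, and the instance's fixed IP are made up;
# the real tool resolves tenant.instance names to the namespace itself.
ssh -o ProxyCommand='ssh netnode.example.com \
      sudo ip netns exec qrouter-2f0d3c55-7a4e-4c0f-9d3a-1234567890ab \
      nc %h %p' \
    ubuntu@10.0.0.5
```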
And attaching the remote machine to the Layer 2 network with PPTP or SSTP has also really worked well for us. It's not the same as Neutron's VPN-as-a-service; that's something different. I needed to be able to pin the VPN to a specific Layer 2 network, not generically have a VPN, which is why I wasn't able to use that. And again, no floating IP.

So what's next? Well, people want more. They're never satisfied. They want a zone with ESX as well as KVM, and probably also containers, et cetera. I need PCI passthrough and SR-IOV. Right now, I haven't really cared too much about performance in here (everything is vhost-net today), but obviously we make products where performance matters a lot, and people want to see a demonstration with the Nova scheduler figuring out which machine has which resource on it, et cetera. OpenMANO: there's a lot of ETSI and OPNFV stuff that clearly I'm going to have to get onto the platform, and I'm kind of dreading it, because it'll be a lot of work. I need higher-performance storage: as we go further into big data, people are going to use this for more than storing some movie they downloaded. I need an HA controller; I'm living on a little bit of borrowed time. The system has been perfectly reliable, but no system is reliable for all time. And then Sahara is on our roadmap as part of our big data exploration.

And with that, we have about five minutes for questions. If anybody wants, you can step up to the mic. [Crickets in the room.] OK, well, thank you very much for your time this afternoon, and I hope it was somewhat educational.