Hi, good evening, everybody. Thanks for coming out to the event today. Hope you're all enjoying the happy hour. And thanks for taking some time at the end of the day to come to our demo. Just for some quick introductions, my name's Jim Fagan. I'm the president of managed services for Pacnet. And with me is John Vesteluz, our vice president for product architecture. For those of you who are not familiar with Pacnet, basically we're an Asia-Pacific-based telecom provider that owns and operates the largest subsea and integrated data center network in Asia-Pacific. And I think what's a little bit different is you actually have a carrier here talking about OpenStack. So what we'd like to share with you is how we're enabling our network and turning it over to you, so you can make your OpenStack clouds more effective in Asia-Pacific. Across all the OpenStack talks, everyone's been talking about federation and how you actually bring clouds together. And for those of you that have operated in Asia-Pacific before, that's actually doubly hard: look at the sheer distances, look at the licensing you need just to do networking, look at data sovereignty. So Asia creates a unique challenge. And while everyone here has done an amazing job in the data center with the application, virtualizing the environment, turning it into pay-for-use, all the stuff we talk about with cloud, if you look at a hybrid cloud, at some point it's got to hit a network. And us carriers have, traditionally and I'd say up until today, really kind of ignored all this. We're rigid. You build great burst capability, scalability, the ability to move workloads. And we basically say, hey, take 15, 20, 30 days, I'll hook up a private line for you. Give me the amount that you wanna use, here's what you're gonna pay, you've got it for the next two years and you're done. That really doesn't work.
I've heard a lot of people talk today about their network challenges. So we looked at how we actually take our network, our data centers, our technology, and turn it over to our customers and to cloud providers, so they can take our network and use it the way they wanna use it. What we've been able to do, with the help of our partners Mirantis and Velo: with Velo, we're actually extending their controller into our network, and where Mirantis is helping us out is in taking this software-defined networking technology and creating an OpenStack-enabled plugin. With that, we'll be able to take this plugin and bring it back to the OpenStack community. And really transform this network in a way that allows you to deploy your clouds anywhere in Asia and then, on a private network, pay for what you use, pick the type of bandwidth you want, schedule jobs and interconnect anywhere in Asia. That's coming out of the US and LA, going into Hong Kong, Singapore; you can go into China, down to Australia and all the way to India. So we think today's the day that carriers are finally gonna say, hey, this is how we actually have to play in the cloud world, and we're really proud to be here today, to partner with OpenStack and hopefully turn over a really powerful tool for you to make your technology better for your customers. So now let me turn it over to John, and he'll take you through the fun part: how it works. Thanks. So, as I get the screens arranged: what we're gonna do is walk through a live demo of creating a trunk between Hong Kong and Singapore. And this is running on a real network, over our subsea infrastructure. We have an OpenStack stretched cluster: the controllers are in Hong Kong, and we have compute nodes in Singapore.
In this use case, for some reason we've taken the network down; we didn't need that capacity right now, and now we're bringing that capacity back online. As Jim mentioned, we're working with Mirantis, so you can see the Fuel dashboard, and you can see that we have these nodes at the bottom here that are currently in Singapore. We can't reach those because this trunk is down. As we go through this demo, please keep an eye on this, and you'll see that these come back online as we bring up this trunk. Okay, so when a user logs into the system, the first thing they see is kind of a blank canvas. As a network engineer, one of the things we do when we first build a network is go into Visio or some drawing tool and actually create a network by drawing between icons, creating a virtual network. We've replicated that environment in this system. So the first thing we do is add the Hong Kong data center and the Singapore data center, and we connect those two data centers together. Then within that link, we can create flows. This is OpenFlow-enabled, and we can have multiple flows across the link; now we're going to create those flows. In this particular case, we're gonna create a trunk. We have our switching fabric on both sides of this link, and we do the same concept of connecting these two together. At this point, we're giving control back to the customer to provision that bandwidth, that virtual circuit if you will, the way that meets their particular needs. And since it's flow-based, we can do this at various levels: at the application level, at the port level, at the VLAN level. So we can give a flow a name, things that are useful to me as the customer, not our typical circuit ID, which is kind of unintelligible. We then set the bandwidth to what we need for this particular task, and we'll set up a 300 meg flow initially.
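Since the service is flow-based, a trunk like the one being created here can be thought of as an OpenFlow-style match plus a bandwidth allocation. As a rough illustration only: the match field names below follow OpenFlow 1.0 conventions, but the builder function and its record shape are hypothetical, not Pacnet's actual interface.

```python
# Conceptual sketch of a flow entry like the Hong Kong <-> Singapore trunk
# described above. Match field names mirror OpenFlow 1.0; the function and
# record shape are illustrative assumptions, not a real API.

def make_flow(name, vlan_id=None, in_port=None, tcp_dst=None,
              bandwidth_mbps=300):
    """Build a flow entry matching at the VLAN, port, or application level."""
    match = {}
    if vlan_id is not None:
        match["dl_vlan"] = vlan_id   # VLAN-level flow
    if in_port is not None:
        match["in_port"] = in_port   # port-level flow
    if tcp_dst is not None:
        match["tp_dst"] = tcp_dst    # application-level flow (e.g. TCP 443)
    return {
        "name": name,                # human-friendly name, not a circuit ID
        "match": match,
        "bandwidth_mbps": bandwidth_mbps,
    }

# A VLAN-level 300 meg trunk, named something meaningful to the customer:
trunk = make_flow("hkg-sin-trunk", vlan_id=100, bandwidth_mbps=300)
```

The point of the sketch is the granularity: the same structure describes an application flow, a port flow, or a whole VLAN, depending on which match fields are set.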
We then give the customer, the end user, the ability to control the performance characteristics of that circuit. Some applications may need a low-latency circuit: think of high-frequency trading, things like that. For some applications we may be able to take a longer route: email, bulk file transfers, things of that nature where we don't need the shortest path. So the customer can come in and choose which path they're using, and notice that we're giving the customer the pricing information in real time, so they can choose the best path for the performance that meets their particular needs for that application. We then allow the customer to change the duration of that circuit. This is a new concept: when you order a circuit from a carrier today, it's usually 12 months, but you may only need that bandwidth for a shorter period of time. So we give the ability to take the bandwidth in one-hour increments, however you need. We'll set up a one-week flow. And we actually debated this a lot: the question was, at the end of that flow, what do we do? Do we just turn off the flow? Do we auto-renew it for the same term again? Or do we give the customer the ability to take it hour by hour? The idea with hour by hour is: you think it's a week, but if it takes a little bit longer, you don't want to kill that flow in the middle. Going hour by hour gives you some time to go back in and finish any longer jobs. Traditionally, a carrier takes 15 to 20 days, optimistically, to provision a circuit. We have cross-connects to put in, we have to provision the bits in the middle, we have to get sales orders signed. What we're doing here is effectively creating that flow for this customer instantaneously. There is a timeout on this screen that kicks in after about five minutes, and I suspect I exceeded it, so I'm gonna quickly recreate it; otherwise it will error. We'll set it up with the same parameters.
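To make the path-and-duration trade-off concrete: with hourly increments, the cost of a flow is just bandwidth times duration times a per-path rate, with the low-latency path priced at a premium. The rates below are invented placeholders purely for illustration, not Pacnet pricing.

```python
# Illustrative only: hourly flow pricing with per-path rates.
# These rates are made-up placeholders, not actual Pacnet pricing.
PATH_RATE_PER_MBPS_HOUR = {
    "shortest": 0.04,   # low-latency path, premium rate (hypothetical)
    "longer":   0.02,   # longer route, discounted rate (hypothetical)
}

def flow_cost(bandwidth_mbps, hours, path="shortest"):
    """Cost of a flow billed in one-hour increments."""
    return bandwidth_mbps * hours * PATH_RATE_PER_MBPS_HOUR[path]

# A 300 Mbps flow for one week (168 hours), on each path:
premium = flow_cost(300, 24 * 7, "shortest")   # 2016.0
bulk    = flow_cost(300, 24 * 7, "longer")     # 1008.0
```

This is the choice being surfaced in the GUI: a latency-sensitive workload pays for the shortest path, while bulk transfers can take the longer route at half the (hypothetical) rate.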
Now in this window I have a ping going between that server in Singapore and a server in Hong Kong. So as we hit deploy, watch this screen and you'll see how quickly we can get this flow enabled. And you can see that within, what, about three or four milliseconds... or seconds, sorry... we now have this flow enabled and we're sending traffic through. And it is a live flow. You can see the latency's around 30 milliseconds, which is a typical latency between Singapore and Hong Kong at this moment. We also have an iperf running in the background, if it doesn't log me out; it doesn't wanna talk to me today, sorry. And we can see that we have a 300 meg flow now established between these two sides. You can see in the iperf window it's 300 meg, once I put in the right IP address. But now the next use case is: well, I actually need a little bit more bandwidth. The 300 meg is getting it done, but I'm moving a workload and it's taking too long. How do I add another 200 meg for the next two hours so I can complete this transfer? Again, in the traditional world: a bunch of paperwork, wait forever, and most carriers are not gonna sell you two hours of bandwidth. Here, we can come back into the system, change the bandwidth, hit deploy. And you'll see that it's now at 500 meg. So we can instantaneously add this bandwidth so the customer can complete their transaction when they need to. And it will automatically take it back down after those two hours to the 300 meg flow that they had in the background. Everything we did within the GUI, we're also extending through an API directly into scripting, so that you can create your own processes to bring up a circuit every night at midnight for two hours to do your backups. Or, through the plugin, integrate directly with Chef and Puppet scripts, so that when you deploy a VM, it automatically creates the network for you and extends it between the places you need, the data centers you have access to on our network.
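With everything in the GUI also exposed over the API, the nightly backup circuit just described could be scripted. The real endpoint names and payload fields aren't shown in the demo, so the request shape below is a hypothetical sketch of the idea, not the actual API.

```python
import json
from datetime import datetime, timedelta, timezone

# Hypothetical request body for the flow API described above: a 500 meg
# circuit between two data centers, starting at the next midnight (UTC)
# and running for two hours. Field names are illustrative assumptions.

def nightly_backup_flow(bandwidth_mbps=500, duration_hours=2):
    """Build a request for a flow that runs from the next midnight (UTC)."""
    now = datetime.now(timezone.utc)
    start = (now + timedelta(days=1)).replace(hour=0, minute=0,
                                              second=0, microsecond=0)
    return {
        "name": "nightly-backup",
        "endpoints": ["HKG", "SIN"],
        "bandwidth_mbps": bandwidth_mbps,
        "start": start.isoformat(),
        "duration_hours": duration_hours,
    }

payload = nightly_backup_flow()
body = json.dumps(payload)   # would be POSTed to the (hypothetical) flow API
```

Run from cron each evening, or from a Chef/Puppet recipe at VM deploy time, this is all the "bring up a circuit every night at midnight for two hours" workflow would need.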
Oh yeah, sorry, as Jim reminded me. You can now see in the Fuel dashboard... the beauty of a live demo. Yeah. You can see in the Fuel dashboard that those nodes have come up. The one that's still offline has a bad drive in it, and we're struggling with it at the moment, so it didn't come up. But the first three that you see in Singapore have now come back online, and we can now start to transfer workloads to those VMs across that trunk we just brought up. Well, thank you very much. Again, I think this is something of a new concept, even to networking, and especially to enterprise networking and private connectivity and all the advantages you get from that. So we're very excited to build this out on an open platform, to work with great companies like Mirantis and Velo, and to be able to extend this into OpenStack. Where we see this going: if you're providing an OpenStack cloud, we want to give you this GUI, or give you this API, and you can turn over this control directly to your customers. You can give them a network capability that doesn't exist in a lot of the other clouds today. Again, we're going to be around the conference for the next few days. If anyone wants to see a little bit more, another demo or some use cases, grab myself, grab John, grab anyone you see from Pacnet. We'd be happy to demo it, take you through how we've built it, what we've done, and we really want to thank you for your time. I do think I have to stay here because we're raffling off... what are we raffling? Raspberry Pis. Raspberry Pis. I've kind of heard that everybody has one, so maybe it's not that exciting, but if you haven't been to Hong Kong before, there's this place called the Wanchai Computer Center. You might be able to take this down there, trade it in and get some really cool bootleg stuff. So use it as you will. Do we have Calvin Lee from Lantro Systems? Calvin, going once, going twice.
We have someone's card, and they actually wrote their name on it. So Seram, or Will, from Cloud Don? I don't know if it's the crossed-out name or the written name. Okay, we're not having much luck with this. James Hughes from Seagate? Yep. This is not gonna... All right, we're gonna change things. For once, you didn't actually have to be here to win. If we called your name, we'll send you an email, and if you come visit us and wanna ask us a question about this tomorrow, you get your Raspberry Pi. And then if we have leftovers and you haven't done it, we'll give you the leftovers. So there's some incentive. Chad Skidmore from Inland Northwest Health Services. All right, there it is. Thanks, Chad, appreciate it. Hey, very nice jump there. That was impressive. Obviously you didn't have that many beers. Bryant Chen from Quanta? Bryant? Pernit Wage from Scaler? Okay, what number are we on? Six. Leslie Kim from RightScale? Wow, we're doing great. Andrea Fritoli from HP? Andrea? How many more do we have? We're at eight. Do we have Peter Paul Monas from JTL Solutions? Great, thank you very much, appreciate it. Oh, wait, come on back up. Come on, we got a moment. Thank you very much. And Sebastian Ketchel from Pixel Park? Sebastian, are you here? All right, well, again, thanks everyone for attending. Have a great night, have a great rest of the conference, safe travels if you're traveling back anywhere, and we really appreciate your time and hope to talk to more of you throughout the week. Take care.