We're two minutes after, so we can start. All right, we're going to get started. Thanks, everybody, for coming. I'm not sure how many folks will stream in afterward, but we'll get started. I'm Ken. And I'm Phani. I work at Rackspace on the developer experience team. I work at HP on the developer experience team on HP Cloud. We didn't plan that; that's just sort of entertaining. That just happened. So we're going to talk about Node and pkgcloud and OpenStack today. A quick, informal survey: who's using Node with OpenStack, or using Node in any capacity today? Just kind of curious who's running Node already. And then who's here curious about using Node but hasn't been exposed to it yet — more OpenStack-savvy, but not so much Node? OK, kind of 50-50. So Node is a platform built on Chrome's V8 JavaScript runtime, straight off their website. It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Again, straight off their website. I'm not going to go too deep into Node's architecture. But Node has some core strengths: things like asynchronous workloads, coordination, integrating multiple systems, and provisioning, which we highlighted because we do lots of resource provisioning with OpenStack. But the first one, async workloads, is what I like to talk about a lot. It's the idea that you can use Node to coordinate lots of different things across disparate systems: HTTP services, grabbing data from a database, provisioning servers, putting static assets into the cloud. These are things Node is great at; it's really optimized for that type of workload. But it's not optimized for everything. The one I really like to call out most of the time is CPU-intensive workloads: as a function of its event-driven async I/O, it's single-threaded.
So if you're doing lots of image resizing or really expensive CPU computation, you're going to block everything else in that process. Sometimes that's OK if it's a standalone process, but if it's part of a broader app that's doing coordination, you have to be aware of that. And lastly — this isn't necessarily a weakness, but something to be aware of — as a function of it being an asynchronous platform, callbacks and some of these patterns are not as common in other languages, especially Python, which I don't know anything about. Some of these folks here I do know a little bit. So I just want to go over a sample of what Hello World in Node.js looks like. What you're looking at is basically the simplest web server you can write in Node: it listens on port 1337 on localhost and just returns Hello World every time you browse to it. It isn't really doing a lot. All it's doing is starting up a server and then listening on port 1337. Now, this looks great. But let's say you have an application that's accepting a bunch of records coming in over a web request, and you want to process them. So this is what you'd write — and I've written this code myself when I first started out with Node. It looks really nice, and I'm sure it logically fits together really well. And this is one of the non-intuitive patterns Ken was talking about: while this looks great, because of the single-threaded nature of Node, if your records.forEach is doing a CPU-intensive workload, no other request can be served from your web server until that loop completes. So while this looks great and will work really well on your dev machine, please do not write code like this and deploy it to production servers. The best thing about Node is all of the really simple modules that are available. Node is single-threaded, but our community is multi-multi-threaded.
We have so many little concepts and modules available for you to fix these problems as you see them. One of the simplest is called async, and it's one of the most popular modules in Node today. So instead of using the iterator on the array itself, which is synchronous, we're using the async iterator on top of the records array. What this will do is hook into system functions such as process.nextTick so that requests can be served while you're doing this processing. And at the end, you just return success. You didn't have to re-architect your application or go rewrite core parts of your app; all you had to do was change the iterator, and you still have the same code you wrote before. It's easier to read, too. So we think this is one of Node's greatest strengths: it has weaknesses, but one of its strengths is its community, which is actually working toward fixing those weaknesses. I'd add to that that it's not exactly obvious, but a lot of folks use the prototype methods — that's what this slide was talking about; forEach is a prototype method in JavaScript. So even though the iterator takes an anonymous function, it's not an asynchronous anonymous function. That's why people sometimes handicap themselves without even realizing it. We just wanted to give a little overview of this as we get into Node and how we use pkgcloud with OpenStack, for familiarity for those folks who may not be as Node-familiar, which it sounds like is a few people here. And so, finally, I mentioned the community. It's important to give a little context about pkgcloud and where it came from. It's a library started by a company called Nodejitsu out of New York. They're a PaaS provider — platform as a service, very Heroku-like, but exclusively for Node. All of their infrastructure tooling is homegrown Node.
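The talk's fix uses the async module's each/forEach iterator. Since that module isn't shown here, this stdlib-only sketch (helper name mine) illustrates the same idea — yield to the event loop between records so pending requests can be served mid-batch:

```javascript
// Process records one per event-loop turn. The async module achieves this by
// deferring internally via process.nextTick / setImmediate; this is the same
// yielding behavior written out by hand.
function processRecordsAsync(records, processOne, done) {
  var i = 0;
  (function next() {
    if (i >= records.length) return done(null);
    processOne(records[i++]);   // CPU work for a single record
    setImmediate(next);         // yield back to the event loop
  })();
}
```

The design point the speakers make holds here too: the per-record logic is unchanged; only the iteration strategy differs.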
And they spawned pkgcloud in December 2011. They had a Rackspace Cloud Files and a Rackspace Cloud Servers library, and they said, you know, this is great, but it's going to be a lot of work if we want to do multiple providers — they wanted to federate their PaaS deployments to different clouds. They needed a different approach. So they had this idea of support for multiple providers behind a generalized interface, and that's where pkgcloud came from. In the 0.6 timeframe, pkgcloud had compute, storage, and really limited database support. It was predominantly compute and storage, with support for Amazon, Rackspace, Azure, Joyent, a little bit of OpenStack support on the compute side, and just three providers for storage. So it had been started down the path toward a broad multi-provider, multi-service strategy, but it hadn't had a lot of community support. And this is one of those instances where the implementation became the standard, and they were just trying to fit in different providers — and it doesn't fit that model, because everything was so strongly tied to what they were doing at the time with Rackspace. So that's where Rackspace got involved. I was hired in March of 2013, and immediately we wanted to ask: what's our strategy for SDKs with Rackspace and OpenStack? We had a lot of debate about whether we should roll our own library or commit to an existing one, and we evaluated the pros and cons. What we came back to was that committing to an existing multi-cloud library was a better strategy for us. I'm not gonna say that's the best strategy for all providers, but that's what we thought was a good strategy. And it lined up exactly with strategies we were already pursuing: we already have committers for Apache jclouds, Fog, and Apache Libcloud.
And so it fit really nicely with work we were already doing on other SDKs, to say, hey, let's make an existing library better instead of rolling our own. Continuing the thread of freaky coincidences between Ken and me, I was hired at HP in March 2014, and I was asked to look at the Node.js library we had for HP Cloud. It was called hpcloud-js. It was really useful: it was basically a WebDAV pipe so you could access your Swift storage and mount it as a drive on your machine on Mac or OS X. So we were looking at what it would take to add compute, identity, storage — basically full cloud services support — instead of it just being that one narrow implementation. While estimating that work, I started looking at outside libraries that were doing this, and pkgcloud stood out, not just for the amount of support it had across services, but also for the amount of activity going on in that repo, and just how open and useful the whole process was. I sent a pull request, and within two days it was accepted. So we got storage, identity, network, compute, and a bunch of other services basically for free, because we just wrapped what the OpenStack support already did and added custom authentication on top. So right after this, we lobbied for it, and hpcloud-js is now deprecated. We deprecated our official client library for Node.js in favor of pkgcloud, and we're going with pkgcloud for the foreseeable future. And that was a huge validator for the community: at the time it was Nodejitsu committers and myself committing in an official capacity, plus lots of community committers. But as soon as you start to see more companies officially contribute, it lends viability and credibility to the package, which I think then helps future companies choose to adopt it.
It gives them confidence that this is not just a package that's going to get deprecated, or gather cruft and not be maintained over time. And in fact, that led to some great work by Everett Toews and some others on officially rubber-stamping pkgcloud as the Node SDK for OpenStack. Everett and a bunch of others built this developer portal that's available right now. It's not actually in this order — I think jclouds is at the top of the list, but we wanted to fit it on the slide, so I just used Chrome to hide that one. If you go there and don't see this, don't yell at us; it's the second one on the list. So this is what pkgcloud is today. We have a bunch of providers: Rackspace, HP Helion, OpenStack, and AWS. Rackspace and HP Helion are basically wrappers around the OpenStack code — Rackspace with a bunch of customizations for its service types, and HP Helion with custom authentication support on top, but otherwise using the OpenStack services as they are. For AWS, we had an implementation, but instead what we did was proxy a lot of calls down to the AWS SDK. And that was really easy for us to do, because all we did was layer our uniform vocabulary, terms, and APIs on top of an existing package. So if you're a cloud service provider who has your own SDK, you don't need to do what we did and deprecate it just because we made that decision. What you can do is bring your APIs in, write a wrapper for them, and we can help you do that. It doesn't need to be complete; you can just start a PR for a contribution and we'll start working with you on it. And OpenStack and pkgcloud: I don't have the code names up here, but basically this is what we think of as services. So compute is Nova, networking is — don't test my knowledge right now. That's fine, we have all the services up here. What was that? I was gonna say Neutron, Swift, Trove, Heat, Cinder.
I think Glance is really close, but it's not in just yet. Great, you went buzzword bingo — or code-name bingo. In pkgcloud we almost always deal with the generalized names used across the industry, so I often stumble over my own feet on the code names; they're not in our code anywhere, just the general terms. So let's talk a little about client semantics — how you actually create a pkgcloud client. This is Node convention: you start by requiring a package, which imports a module into your code. And we have a dictionary in the back end with provider-specific service implementations for all of this. So here you see we say pkgcloud.compute, which is the service I'm instantiating; we also have networking, storage, and other things besides. When you call createClient, you pass in a bunch of options. The first one is the provider type — what kind of provider do you need? In this case, instead of talking directly to compute on Rackspace or HP Cloud, I'm just going to use openstack. And I can still use HP and Rackspace by passing in custom constructor parameters like access keys, secret tokens, and so on. Username and password have been around since the dawn of time. And authUrl is basically the endpoint for your Keystone installation. So if you're trying to run pkgcloud against your DevStack installation, you just use the openstack provider, set the username and password to your DevStack credentials, and point authUrl at something on your local machine. This is another way of doing the same thing: instead of passing the provider as an option, you can hard-code — or strongly type, as much as JavaScript allows you to — against the implementation provided for OpenStack. Everything looks the same; instead of saying pkgcloud.compute, we just say pkgcloud.providers.openstack. Both of them are functionally equivalent.
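In code, the two equivalent forms described look roughly like this. The createClient calls are commented out since they need a reachable Keystone endpoint; the option names (provider, username, password, authUrl) follow pkgcloud's documented signature, but treat the details and credential values as illustrative assumptions:

```javascript
// var pkgcloud = require('pkgcloud');   // npm install pkgcloud

// Options for an OpenStack client pointed at a local DevStack Keystone.
var options = {
  provider: 'openstack',            // generalized provider selection
  username: 'demo',                 // DevStack credentials (illustrative)
  password: 'secret',
  authUrl: 'http://127.0.0.1:5000'  // your Keystone endpoint
};

// Form 1: generalized, provider passed as an option:
//   var client = pkgcloud.compute.createClient(options);
// Form 2: provider-specific, functionally equivalent:
//   var client = pkgcloud.providers.openstack.compute.createClient(options);
```

As the speakers say next, prefer the generalized form; the provider-specific form is a convenience that buys you nothing extra.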
Whenever possible, just try to use the more generalized patterns; you're not really gaining anything by doing more than that. They're convenience functions — I'm not sure long-term they're the right strategy, but they're in there and we don't wanna break the API. So let's talk about the compute provider; that's kind of the canonical example when we demonstrate or talk about pkgcloud. It's really easy to generalize across providers: everybody has the concept of a server and an instance type or size. And the pkgcloud vernacular happens to line up perfectly with OpenStack, because that was the first provider we had: a server is a server, a flavor is a flavor, and an image is an image. I don't think I have to redefine those terms for this audience. So we wanna show off some examples. The first thing I wanna do is set some context. I have a bunch of integration tests on my local machine that I use when I'm talking to different clouds, and I have some helper functionality — code that handles getting configs so I can run against any number of different OpenStack, HP, or Rackspace clouds without having to type in credentials every time. I basically have a dictionary keyed by a keyword that lets me run these tests. The first one we're gonna start with is create server. It's pretty simple: you have some options — in this case name, flavor, image, and SSH key name — then we call createServer, and it calls back when that server's been created, and we just dump that info out to the command line. All of this code is available on our GitHub repo; these are the integration tests we run for pkgcloud. So: node, lib, compute. The first thing, I wanna get an image. Do we have any people with a really strong predisposition to a specific OS? We've been using CentOS a lot lately. Get images.
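The createServer call being demoed is roughly this shape. The values below are illustrative stand-ins (the real demo picks an image ID from the getImages output), and the call itself is commented out because it needs live credentials; the option names follow pkgcloud's compute API:

```javascript
// Options matching the demo: name, flavor, image, and a pre-staged SSH key.
var serverOptions = {
  name: 'openstack-summit',   // server name given in the demo
  flavor: 'performance1-1',   // flavor ID: a one-gig box, per the talk
  image: 'some-image-id',     // an image ID picked from getImages output
  keyname: 'ken-mbp'          // SSH key already staged on the account
};

// var pkgcloud = require('pkgcloud');
// var client = pkgcloud.compute.createClient({ /* credentials */ });
// client.createServer(serverOptions, function (err, server) {
//   if (err) throw err;
//   console.log(server.id, server.name, server.status);
// });
```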
It's fun to say CentOS; it sounds like centaur. So: get images, openstack provider, and we want an ID. This is gonna go out to the IAD region for Rackspace cloud, using the OpenStack provider, and get a bunch of images. I'm gonna grab one of these — let's just do this Ubuntu one right here, I know that one. So now I can do node, lib, compute, create server, openstack, to tell it which provider I wanna use for this test. I'm gonna give it a server name, openstack-summit. Which was first — the flavor's first? So I'm gonna do our performance flavor, performance1-1, just a one-gig box. Sorry, the flavor ID. The key name I'm gonna use I've already staged on my cloud account, so the key for this laptop is already present. And then I need to give it my credentials in the IAD region again. This is gonna go create the server and come back — and: invalid key name. Did I do the wrong key name? What do you have there? ken-mbp — that's right. That's funny. You can do a get keys. Oh, I know what's wrong — wrong account. Never do live demos. Has anybody heard of the demo gods? There we go. So we've created a server. Go ahead, Phani. So what's happened at this point is we've just created the server, and we wanna wait for this thing to actually start running — you just created an instance of a VM and it's not running yet. So we'll go ahead and try to get the server instance again and see what the status is. Now, I get that you don't wanna do this manually. If you're automating this somewhere, we have methods on our models that allow you to wait for a specific property to change on the model itself. So in this case: once createServer returns, you get a server object back. You just say server.setWait and give it the status you're waiting for, like RUNNING. In that case, you don't need to sit there and do the manual polling — it'll do that for you automatically.
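pkgcloud's real call is server.setWait(attributes, interval, callback) on the server model. As a generic, testable sketch of what that polling amounts to (helper name and shape are mine, not the library's):

```javascript
// Poll a server-fetching function until the model reaches the target status.
function waitForStatus(getServer, targetStatus, intervalMs, done) {
  (function poll() {
    getServer(function (err, server) {
      if (err) return done(err);
      if (server.status === targetStatus) return done(null, server);
      setTimeout(poll, intervalMs);   // still provisioning; check again later
    });
  })();
}
```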
And this is a case where, again, we talked earlier about coordinating lots of disparate systems — the idea that I could spawn, you know, 50 or 100 compute instances, all of those happen asynchronously, and then you wait for them all to come back. And so your app isn't blocking; it's not actually doing any work while it waits for those requests to come back. You can imagine doing really interesting coordination where you don't have lots of waiting in your app. It'll invoke that callback when we're done. In this case, we know we're still provisioning. Still provisioning. And we just talked about orchestration and things like that — the examples we'll talk about later actually have a bunch of this stuff running in production for us. So: fresh machine. Nice. Hey, there we go, we're on a machine. Can we do the same thing with HP? Right. So it's really important that when we implement these, we make sure the exact same code running on different providers behaves exactly the way we expect. So I can go up here and find our get images call. I'm gonna leave it on the openstack provider, but instead of that set of credentials, I'm gonna give it my HP Cloud account and its region, region-a.geo-1. We'll get some images. Region names are kinda hard to parse and make any sense of — we try to make sure they're as opaque as possible. region-a, I don't know what the hell that means; I'm sure it means something. And so, again, you saw right there: I just ran the exact same code with the same provider, just different credentials, and it behaved the way we'd want. Oh — here's the image I want, Ubuntu Server 12. A partner image — is that gonna work? Yes. That'll work, okay. So again, I can go back to that create server call, leave it exactly the same, and just change the credentials at the end to kenhp and region-a.geo-1.
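The fan-out-and-wait coordination described — fire off N provisioning calls, continue once all have completed — is the classic Node counter pattern. A generic sketch (not a pkgcloud API):

```javascript
// Run an array of async tasks concurrently; call done exactly once, when all
// have completed (or on the first error). Each task is a function(callback).
function parallel(tasks, done) {
  var remaining = tasks.length;
  var results = new Array(tasks.length);
  var failed = false;
  if (remaining === 0) return done(null, results);
  tasks.forEach(function (task, i) {
    task(function (err, result) {
      if (failed) return;
      if (err) { failed = true; return done(err); }
      results[i] = result;
      if (--remaining === 0) done(null, results);
    });
  });
}
```

Each task here would be a createServer-plus-setWait call in the scenario the speakers describe; the event loop stays free to do other work while all N are in flight.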
And I think you want to change the flavor ID — the flavor ID needs to be different. And we use things that look like integers for our flavor IDs, so it'll be 101. 101, thanks. And you can actually introspect all of the flavor and instance types, just like you can via the control panel, like in Horizon — those APIs are available. So if we create the server, it's gonna do exactly the same thing: it'll do the HP thing, return the server instance, and you can again sit there, poll, and wait for it to come up. The primary difference between HP and Rackspace when you create servers like this is that we don't automatically assign floating IP addresses to your compute instances. So as a result, once this thing starts running, you can't just SSH into it. But thankfully, there are extensions on top of the compute provider that HP Cloud supports, where you can add keys and floating IPs to an existing running instance, and we have support for that in pkgcloud. So while the clouds have differences in their implementations of these services, using pkgcloud you can basically write applications that work the same way. So here you see the server came up, it's running, but it doesn't have a public IP. And we can easily say compute, floating IPs, assign IP, openstack provider — it's the server ID first. Yes. Oh, I didn't get our IPs; one moment. So: node, lib, compute, floating IPs, get IPs, openstack, kenhp, region-a.geo-1. You get good at typing that. Yeah. Better you than me. So this will go out to our account and find all of the floating IPs we have allocated. We've got debugging and logging set up — that's why you can see all of the API endpoints we're hitting. And there's a pretty deep capability for different levels of log output.
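The list-then-assign flow demoed above, as a sketch written against a client object. The method names getFloatingIps and assignIp here are placeholders standing in for pkgcloud's floating-IP extension methods, not its exact API, which is why the example is exercised with a stub:

```javascript
// Find an unassigned floating IP on the account and attach it to a server.
function attachFloatingIp(client, serverId, done) {
  client.getFloatingIps(function (err, ips) {
    if (err) return done(err);
    var free = ips.filter(function (ip) { return !ip.instanceId; })[0];
    if (!free) return done(new Error('no unassigned floating IPs'));
    client.assignIp(serverId, free.ip, function (err) {
      done(err, free.ip);
    });
  });
}
```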
Most of that's for development time, when you're trying to get a feel for how your integration works. So here we go: floating IPs, assign IP, openstack, our server ID, which is right here, our IP, which is right here. Oops — HP, region-a.geo-1. And you can always introspect this and handle the scenario yourself — even on Rackspace, if the server comes back without a public IP, run the same code again. Thanks to the models we use, you can actually do this kind of instrumentation yourself and chain these together. Live demos. Live demos. We got one server working; we can get another one. Well, you see that it's actually returning the disclaimer header, so there's a server there — trust us — and it's at that IP address. Oh, that's right, I used the wrong... a different image. That's what happens when you pick an image at random and don't test it. But there we go, we're on the box. Okay. Okay, so where are we at? We have a little bit of time. I'm probably not gonna show code for storage; it behaves very similarly. The idea is that container is your container model, file is your object, and you can use that to talk to any number of clouds. But there's a particular strength in Node around piping — I'm gonna show the code instead of running it. Is anyone here familiar with the piping model in Node, and how you can pipe from a read stream to a write stream? So this is a really small, contrived example where we create a read stream from a local file, we create a write stream to Swift, and then we just pipe from the source stream to the dest stream — and that's all it takes.
A couple weeks back, I migrated five million blobs from Azure to Rackspace Cloud Files — so, you know, Swift — and this made it trivial, because I could let Node handle creating all the read streams, use async to rate-limit them so I wasn't creating five million concurrent requests, and then it was just piping. It was really expedient, it made things really simple from a code standpoint, and it had tremendous bandwidth; it really moved a lot of content quickly. And you're not really loading whole files into memory when you do this; you're just passing streams along. An example: if you have an application handling user images that you're uploading to Swift, you can just take the incoming request stream and pipe it directly into Swift without having to cache the files on the server side. Practical examples: we talked about how we have some running examples of stuff like this, and beyond our really cool integration tests, I wanted to show some applications we built using pkgcloud that we're running today in HP Cloud. So, is everyone here familiar with GitHub? Okay. I have an amazing repo here that's basically running this application for me. It's called Node and Via, and what it does is, at the top it says Paris OpenStack Summit, and then it just dumps out the environment variables available there. While I was working on this, I realized someone sent me a pull request for the Paris OpenStack demo. Someone? Yeah, I sent myself a pull request for the Paris OpenStack demo. So the background for all of this is that we built a CI system, written in Node.js, that uses pkgcloud in the background. Let me see if I'm still signed in — yes, I am. It allows me to add my GitHub repos to the CI system, which pulls in code whenever you do a push, builds the code, and then pushes it out to a server that you specify.
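The migration relied on the async module's rate limiting (its eachLimit-style iterators) to keep five million uploads from running at once. As a dependency-free sketch of the same idea (implementation mine):

```javascript
// Run a worker over items with at most `limit` in flight at any time.
function eachLimit(items, limit, worker, done) {
  var next = 0, active = 0, finished = false;
  function finish(err) {
    if (!finished) { finished = true; done(err || null); }
  }
  function launch() {
    if (finished) return;
    while (active < limit && next < items.length) {
      active++;
      worker(items[next++], function (err) {
        active--;
        if (err) return finish(err);
        if (next >= items.length && active === 0) return finish(null);
        launch();   // a slot freed up; start the next item if any remain
      });
    }
  }
  if (items.length === 0) return finish(null);
  launch();
}
```

In the migration scenario, each item would be one blob and the worker would be the read-stream-to-Swift pipe from the previous slide.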
And all of this is written in Node, and we use pkgcloud in places — I'll talk about what we use it for. So I'm gonna go ahead and merge this pull request. Good job. Okay, so the pull request has merged. So what's gonna happen is, when I come back here and look at my builds — because GitHub allows you to hook in multiple systems using a concept called webhooks — I'm gonna keep refreshing this. So the merge I just did in GitHub has actually kicked off a build in our system, and this is where pkgcloud comes in, and where this becomes relevant to this talk and this audience. What happened is, when we got the webhook request, we pushed it off to a worker that's responsible for creating an image of a server that's deployed in the customer's tenant, or our own tenant, depending on deployment. And we use the pkgcloud compute provider you just saw to go create a server, wait for it to come up, assign a floating IP to it, and then use cloud-init to create, say, a Jenkins instance, or a ThoughtWorks Go or Drone instance, and run the jobs on that. So while this is running, the job is actually executing on a dynamically created Jenkins instance, and it's doing a helion push, which pushes this out to our PaaS product. So at the end, I can go into this guy, and — where is it? Here. So this went through my CI system, and the change I just made — the pull request I just accepted — went ahead and updated Paris OpenStack Summit to Vancouver OpenStack Summit, getting ready for the next summit. All of this is written in Node.js, and we were able to use Node's piping and all of those amazing libraries to actually make this happen. We honestly built the prototype for this app in three days — me and Terry over there.
What's interesting is that I spent more time skinning the app to look like an HP product, because this was initially written with Bootstrap and Node.js, and that was it — we were done. We were done in three days, and I just had to go back and rewrite the UI to make it look like an HP product. So this is a real-life example of using pkgcloud in production for apps: it works, and it doesn't fall over. If you wanna talk about it some more, or have more questions about it, please come see me. There's another example we wanted to talk about. Again, it's a little contrived, so bear with me. I mentioned migrating five million blobs from Azure to Rackspace, and as I was working on that code, I said, you know, this is a great opportunity to try playing with some stuff, and I hadn't done much CoreOS. I'm sure some folks have heard about Docker and CoreOS gaining momentum, and I was a little new to it, and I said, I'd really like to try doing a cluster. So I wrote a little app with pkgcloud that made bootstrapping a functional cluster very painless, and I can do a demo of that in real time. I'm not exactly evangelizing that you use this tool; rather, it's representative of how easy it is to start tying little processes together in your infrastructure to create custom tooling, like Phani showed with what he's doing with HP Cloud. So in this case, I'm gonna say --help, which just shows me what I've got; let me make sure I've sourced my environment, environment.sh. So let's say we're gonna do a 10-node cluster: nodes of 10, key name of ken-mbp, and a release.
I talked about how we create a compute instance for you — honestly, it would be conceptually trivial for us to pick up this work and do it inside our product: instead of going out and creating a compute resource every time, we could go out and create a CoreOS cluster just as easily. Right. And so in this case, it's going out and talking to the etcd discovery service to create a new ID for the cluster, it's programmatically generating the cloud-config to do all the etcd coordination, then it creates all the servers and does the setWait we talked about earlier, and when they come up, it'll be a fully functional cluster. If I had fleet services ready to go right here, I could start deploying those to the cluster. Again, it's a bit of a representative example of how easy it is to start doing really interesting infrastructure tooling with really accessible, easy-to-write code that's optimized for async workloads. This usually takes about a minute and a half, but it'll probably fail, because every time I've given this in a talk, it's failed. That's representative of my coding ability, probably not the cloud. So we'll come back to that in a sec. So this is all great, but it's not just about what we have right now — it's about where we go in the future. The first thing, obviously, as we alluded to earlier on the viability aspect, is more providers. We don't wanna just have it be OpenStack, or Azure or AWS; we want it to be very robust. As an example, Google reached out to us a couple weeks ago and said, hey, how can we help? So I actually have a pull request on pkgcloud right now for official Google support, which again goes back to viability. The more large companies building clouds, small companies deploying to clouds, and everybody in between who contribute to this, the more viable it becomes.
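The cluster bootstrap described above hinges on generating one cloud-config per cluster, keyed to a shared discovery token. A sketch of that generation step (the helper is mine; the discovery URL shape and the 2014-era etcd keys like addr/peer-addr follow CoreOS's cloud-config format of the time):

```javascript
// Build a CoreOS cloud-config that joins each node to the same cluster via a
// shared etcd discovery token (obtained from https://discovery.etcd.io/new).
function buildCloudConfig(discoveryUrl) {
  return [
    '#cloud-config',
    'coreos:',
    '  etcd:',
    '    discovery: ' + discoveryUrl,
    '    addr: $private_ipv4:4001',
    '    peer-addr: $private_ipv4:7001',
    '  units:',
    '    - name: etcd.service',
    '      command: start',
    '    - name: fleet.service',
    '      command: start'
  ].join('\n');
}
```

The same string is then passed to each createServer call as user data, so every node boots already knowing how to find its peers.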
And then with that comes broader service support. Right now we have compute, storage, load balancers, DNS, block storage, Heat, and databases, but those aren't on every provider. For example, load balancers are only on Rackspace; there's no broader load balancer support across providers. In fact, it's not even generalized — it's not always straightforward how we generalize it. So those are the two axes: we need more providers, and we need more service implementations on the providers we have. And modular architecture is about how we would repackage the existing pkgcloud module. It's monolithic in a conceptual sense, not monolithic in terms of its internals: compute, storage, and all of those things are already all in the same module. We want to adopt an architecture more similar to Passport, in case anyone's used it. Passport, for those who don't know, is authentication middleware for Express or Connect, and it has concepts like passport-local, the local authentication module, passport-facebook, passport-github. The bare passport module does almost nothing by itself; it calls out to those existing strategy modules. So we want to do something like that — maybe we come up with pkgcloud-compute or pkgcloud-storage. So in case your application or service only does Swift, you can just bring down pkgcloud-storage and wrap that for yourself. It's also a way to manage ever-growing dependencies as we add more and more providers and services. Inevitably, we're going to have dependencies that come in from first-party libraries like the AWS SDK.
And so being able to slice it into more manageable pieces, where you can pull in just what you need, and maybe polyfill provider-specific functionality on top that you don't necessarily want to publish back to pkgcloud proper, is an approach we think will make it a little easier to consume just what you need for your application. And we want more comprehensive tests. We consider ourselves extremely test-driven; I was telling Ken that we should probably put test-driven at the top, because we always start with tests. Missed that? Next time. We have a lot of tests right now that do a lot of mocking, and we have a few integration tests that can only be run on your own machine, because you need credentials and so on to make them work. So if you're someone who's extremely test-driven and wants to start adding tests to existing libraries, please come talk to us. We have 632 tests. When I started on the project six months ago, we had 295. And we add... It's not all you, though. Not anymore. We keep adding tests as we see fit, and we don't really care if that number goes up to 700; we'll actually be really happy if it keeps going up every day. So if you want your first contribution to our project to be adding a test, fixing a test, or saying this test is wrong and I'm going to go fix it, please go ahead and do it. It's important to note that all of our tests right now are primarily unit tests on some of the methods inside the library, plus simulated integration tests where we spin up a real local HTTP server but mock the response. So it's still going over real HTTP, it's doing all of the real work; we're not monkey-patching Node's http module, but we're not actually going over the wire to a real cloud.
And so one of the things we want to do is take those integration tests that we run from the command line and turn them into a broader suite, so that when we introduce major new provider functionality we actually go out and run these tests against live clouds, just to make sure we're not missing anything. That'll give us more confidence. And then, last, and this is not exhaustive because there's a huge issue list and whatnot, is making it a little more professional: getting a website, a more homogeneous developer story for how you get into pkgcloud. Yeah, we want a website that people can go to and find marketing terms such as synergize, and anything else that's energized, pick some up, and then hand-wave. Yeah, we want some of that stuff too. So if you're someone who's interested in building a website, or wants to build a website for a project, please come talk to us. Which is a great segue into contributing, or: we need your help. For context, when I got involved there were probably about two dozen committers before me. And for whatever reason, most of the prior contributors had basically disavowed the project and aren't involved anymore. But in the last year, we've seen a huge uptick of contributors and committers and folks being part of that community. We want to accelerate that; that's part of our call for help here at the summit, getting more people involved. There's a lot of functionality that I think we could do better at, like the re-architecting stuff, and getting more people involved is obviously any open source project's dream. And talking about contributing, and just how much activity happens on the project: five months ago, when I started working on this, it was the first time I was looking at a multi-cloud library.
And I started looking at pull requests and issues, and that was five months ago; now I'm here in Paris talking about this stuff. So we are very open to new people coming in and contributing. And honestly, if you're someone who's interested in doing this work, we'd love it if you could help us. We'll give you t-shirts. I'll give you, well, not this one. We'll make t-shirts and then I'll give you one. We'll make t-shirts and give you one, once we have a logo; we also need a logo. We're on IRC; I'm on Freenode almost every day. It seems like my wife and kids hate that I'm on IRC all the time. I haven't been all this week, though; wrong time zone. Yeah, all of the developers on pkgcloud are on IRC, all five of us. And then obviously GitHub; that's our repo, it's pkgcloud/pkgcloud. So here's what you can do, if you're someone who's interested in OpenStack; here's a really short list of what you can do for us. Load balancing as a service is not generalized today, so if you're really passionate about load balancing, or about generalizing, or about non-provider-specific libraries, you can pick that up. Images: this is the Glance server. We have support for extensions on top of compute, but not necessarily Glance images themselves and their metadata. Full identity: you can auth today, basically you can log in, but you can't really work with tokens, or go create or delete users. So no user management: great for app developers, not so great for operators. That's an area where we really need to get better. If you're writing a tool in Node that uses pkgcloud to do user management, you can't do that today. In case you are writing such a tool, come talk to us. Telemetry: you can't set alarms, you can't receive alerts, or any of that stuff today using pkgcloud. You basically can't do anything there. This is just some of the stuff we've thought about.
There are lots of needs out there from the broader project community. Even if the contribution is, hey, I'd really love it if you guys had this thing, just getting that information would be huge for us. Yeah, and if you're using it, just file issues for us. Even if you have an issue that says, there is no telemetry service, I'd love it. That's great. And then we'll assign it to you and you can work on it. That's pretty much what we have for you folks. Do we have any questions? One? Yes. You're asking about when this was building. So this is the setWait method that Fani alluded to earlier. Effectively, in JavaScript, it does a setInterval and polls on some frequency. We don't have any concept right now of a push notification back to the client. We've talked about it. You could imagine if I'd just done this with 100 instances instead of 10, and I'd set the polling frequency to half a second instead of the default, which is, I think, five-second polling, and your deployment has rate limiting, you could get yourself in trouble pretty quick. So certainly we've talked about it, but if there are OpenStack-knowledgeable people here who have better approaches, I'd love to talk to you after. Any other questions? I think we use vows. No, we use mocha. We use mocha. We use mocha. Are you familiar with mocha? Okay. Does that answer your question? Okay, great. So specifically on the compute question: at Rackspace, when we create a compute instance, the create-server defaults automatically give it a public and a private address. And this gets into the nuances of per-provider capabilities, like how you tell it to enable the public interface. In the case of HP, they don't do that by default, so you have to use the OS floating IP extension, which we have support for. We do have Neutron support. Yeah, we do have Neutron support, so you can create ports, subnets, and networks as you see fit.
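The setWait-style polling just described boils down to a setInterval loop like the one below. This is a generic sketch, not pkgcloud's actual implementation; `getStatus` is injected here so it runs offline, whereas in pkgcloud each poll is a real API call, which is exactly why an aggressive interval times 100 instances can trip a provider's rate limiting.

```javascript
// Poll getStatus on a fixed interval until the target status appears or we
// give up after maxPolls attempts, then report how many polls it took.
function waitFor(getStatus, target, intervalMs, maxPolls, callback) {
  var polls = 0;
  var timer = setInterval(function () {
    polls++;
    getStatus(function (err, status) {
      // Stop on error, on reaching the target status, or on giving up.
      if (err || status === target || polls >= maxPolls) {
        clearInterval(timer);
        callback(err, status, polls);
      }
    });
  }, intervalMs);
}
```

A push-based alternative (webhooks or a message bus from the cloud back to the client) would avoid the rate-limit problem entirely, which is the open design question raised in the answer above.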
And while that happens automatically at Rackspace, the provisioning system that I showed earlier for the CI system actually uses a lot of these extensions to add floating IPs as we create instances. So you can do the same thing; it might not be the same code, but it's the same pattern. Sorry, I never completed the demo. Hey, there's our running fleet, which is pretty cool. Any other questions? Three questions, not too bad. Well, great. Thank you so much for coming. Thanks for coming. OpenStack Summit.
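The floating-IP pattern mentioned in that answer, for providers like HP that don't attach a public address on create, is roughly: allocate a floating IP, then associate it with the new server. The method names below are only illustrative of an OpenStack compute floating-IP extension; the client is injected, so the example runs against a fake rather than a live cloud.

```javascript
// Allocate a floating IP and attach it to a freshly created server so it
// becomes publicly reachable. Both steps go through the injected client.
function attachPublicAddress(client, server, callback) {
  client.allocateNewFloatingIp(function (err, ip) {
    if (err) return callback(err);
    client.addFloatingIp(server, ip, function (err) {
      if (err) return callback(err);
      callback(null, ip); // the server is now reachable at ip.ip
    });
  });
}
```

Run once per instance right after create (or after setWait reports ACTIVE), and you get the same effect as Rackspace's default public interface.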