Hello, everyone. I have a question for you: what's your problem with middleware? Now, I don't actually have the answer to exactly what your problems are, but I can tell you about the problems we had with middleware at Rackspace. Rackspace is one of the older companies running OpenStack, because we originally made it. But at the same time, we worked with legacy systems and had to integrate them into all of our processes. We had to write a lot of custom code to integrate OpenStack with existing systems, and each one of those services spoke its own language. We duplicated a significant amount of effort across all of those teams. The problem was all in the middleware: the glue between services that translates between what the clients send and what the services actually expect.

This is the current diagram we have for what's going on in OpenStack right now, which obviously doesn't include any of the other services you use to run your business: your billing systems, your client information systems that extend beyond identity. And we're forgetting a few things, because we've also got Heat. Oh, that makes it a bit more complicated. And then we also have Horizon. You realize that everything is talking to everything else at the same time, all through middleware that isn't designed to scale independently of the services it's baked into. It's all focused on the individual nodes, not the cluster overall.

But what if you could do more with middleware? In a perfect world, middleware would help you integrate new services into existing infrastructure. It would secure services against abuse at scale and give you full transparency into how your system is running at this very moment, so you could track down errors and find logs incredibly easily. This is why we made Repose. Repose is an HTTP middleware proxy that takes requests and passes them through a series of filters.
And at the end, you're left with the requests that the services want to see. All other requests are thrown away or sent back to the client with error messages. It's just like making a good cup of coffee: you take those unrefined grounds, put them through the filters, and the only thing that should come out at the bottom is a nice hot cup of coffee.

This is how we have the architecture set up for Repose. We run it in front of most of our API services: behind the load balancer, but in front of the service. And it's important to note that Repose nodes can talk to each other within a cluster.

One of the first filters we have is HTTP logging. This gives you a very easy way to send all of your logs to one location, such as Logstash, so you can see everything in your infrastructure without having to coordinate across multiple teams and say, "Hang on, can I get your logs?" or "Hey, can you give me the password to that other Logstash instance we have?" How many times have you been on the Nova team and wanted to go over to the load balancer team to figure out exactly what happened to your request, or where it went? With this, we're able to pull it all into a single location. It acts like a simple filter, sending straight to Logstash.

We do normalization on all of the content that comes in: we can modify headers in the request, and we can modify the content body. And while this seems like a very typical thing for a reverse proxy to do, what it enables in our infrastructure is that when versions change, when contracts change, our clients can keep sending the old format and we just translate it into the new one. We're able to reach out to our users and say, "OK, hold on, maybe you should be upgrading your services," but in the meantime we can upgrade our services and keep you running.
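To make the filter-chain idea concrete, here is a minimal sketch in Python. Repose itself is written in Java and configured declaratively, so everything here (the `run_chain` helper, the `rename_header` filter, the header names) is invented purely to illustrate the pattern: requests flow through filters, each filter may translate the request or reject it, and the downstream service only ever sees the contract it expects.

```python
# Illustrative sketch only -- not Repose's actual API. Shows a filter chain
# where a normalization filter translates an old client contract in flight.

class Reject(Exception):
    """Raised by a filter to stop the chain and answer the client directly."""
    def __init__(self, status, body):
        self.status, self.body = status, body

def rename_header(old, new):
    """Normalization filter: translate a header clients still send under its
    old contract name into the name the upstream service now expects."""
    def filt(request):
        if old in request["headers"]:
            request["headers"][new] = request["headers"].pop(old)
        return request
    return filt

def run_chain(filters, service, request):
    """Pass the request through each filter; a Reject short-circuits."""
    try:
        for f in filters:
            request = f(request)
    except Reject as r:
        return {"status": r.status, "body": r.body}
    return service(request)

def origin_service(request):
    # The downstream service only ever sees the new contract.
    assert "X-Tenant-Id" in request["headers"]
    return {"status": 200, "body": "ok"}

chain = [rename_header("X-Old-Tenant", "X-Tenant-Id")]
resp = run_chain(chain, origin_service, {"headers": {"X-Old-Tenant": "42"}})
print(resp["status"])   # 200: the old-style request was translated in flight
```

The point of the sketch is the separation: when the contract changes, only the filter configuration changes, not the clients and not the service.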
We implement authentication much the same way Keystone authentication works: we use caching. But the key benefit to doing authentication within Repose is that you don't have to set up memcached. It comes by default, already built in, with a distributed datastore that I'll talk about a little later.

We properly implement distributed rate limiting. Distributed rate limiting is something we've wanted in OpenStack for a very long time, but unfortunately it hasn't been something we could put into every single service. So rather than going to a middleware library or trying to find something in Oslo, Repose just does it by default, out of the box, because it's true middleware. Again, all of the distributed datastore nodes talk to each other, so across a Nova cluster you don't have to worry about how many requests are coming in on each node. And at the same time, if you get 1,000 simultaneous requests and your rate limit is set to 1,000, the 1,001st request will not get through.

Next, we do proper role-based access control. To implement role-based access control, you first define how your API functions, and then you attach roles to it. So you specify the admin role, and that means any user who doesn't have the admin role in their authentication catalog cannot access this endpoint and will get back an unauthorized response.

That's pretty cool; we solved the use case we were talking about up in some of the design sessions. But now we've got a full map of our API, and we can do a lot more with that, such as API mapping. When we're able to fully map exactly how our API operates, we can build a state machine, which allows us to create a map like this, fully cataloging all the different directions users can go within the API. And because this can be integrated into your documentation, you can actually code against your documentation. You can see the usage patterns that come through.
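The cluster-wide rate-limiting behavior described above (a limit of 1,000 means the 1,001st request is rejected, no matter which proxy node it lands on) can be sketched in a few lines. This is not Repose's implementation; it is an invented Python mock-up where a single dict stands in for the replicated datastore, and the time-window reset a real limiter needs is deliberately left out to keep the example deterministic.

```python
# Hedged sketch: Repose's real distributed rate limiter lives in its Java
# datastore. Here one shared dict plays the role of the replicated store
# that every proxy node in the cluster can reach.

class SharedStore:
    """Stand-in for the distributed datastore shared by all Repose nodes."""
    def __init__(self):
        self.counters = {}

    def incr(self, key):
        self.counters[key] = self.counters.get(key, 0) + 1
        return self.counters[key]

class RateLimiter:
    def __init__(self, store, limit):
        self.store, self.limit = store, limit

    def allow(self, user):
        # All nodes increment the same counter key, so the limit is
        # cluster-wide. A real limiter would also scope the key by time
        # window so counts reset; omitted to keep the sketch deterministic.
        count = self.store.incr(user)
        return count <= self.limit

store = SharedStore()
node_a = RateLimiter(store, limit=1000)   # two proxy nodes, one shared store
node_b = RateLimiter(store, limit=1000)

# 1,000 requests split across both nodes are all allowed...
results = [(node_a if i % 2 else node_b).allow("user-1") for i in range(1000)]
print(all(results))            # True
# ...and the 1,001st is rejected, whichever node it lands on.
print(node_a.allow("user-1"))  # False
```

The design point is that the counting lives in the shared store, not in the individual proxy process, which is what per-node middleware libraries can't give you.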
You can also see if your service is accepting requests that don't fit this documentation, letting you be a lot more consistent between how you think it works and how it actually works. Here's a bit of a zoom in; you can see some of the more detailed paths.

All of this is backed by our distributed datastore, which, with very little configuration, knows all the other nodes operating within the cluster. That lets Repose behave like a true cluster rather than a set of individual nodes, which is how things currently feel when you're deploying with mod_wsgi. Put another way, you don't have to have a single Apache node that everything goes through and that becomes a single point of failure.

And once a request passes through all those filters, you're left with a nice, hot, steaming request that's passed down to your downstream service, so you don't have to build any of that logic into your service whatsoever. If you have more detailed computations you want to run that don't necessarily belong in your business logic, throw them into the middleware.

But we can't make these quality filters without a quality community. We created this to help Rackspace run better, but now we're looking for a bigger community so we can create even more filters and extend this vision, so it's not just solving our use cases, but your use cases too. We feel this technology is really powerful, and we feel that constraining middleware to the box of your application, the box of your service, means it can't be replicated across systems that speak different languages, across systems that have different code bases.

This is only the start; we're looking to do even more with Repose. First of all, we're looking to build easier integrations. Right now we only handle Keystone middleware. But why? And we only handle Keystone middleware where the user is directly handing us the token.
Well, why can't the user just hand their API key and username over to the service, and we'll handle all of that authentication and talking to Keystone? We're already doing it. At the same time, what if you want to open up services to use OAuth, or JSON Web Tokens, or speak SAML? All of these credentials should be easy to handle in the middleware, but they aren't. We're missing out, because we could make it that easy to talk to any authentication back end you're looking for.

We had a very interesting design meeting recently on tracing. And while implementing tracing is an extraordinarily difficult task, the middleware is a very logical place to put it. Because middleware is the glue that ties everything together, why wouldn't you put it there first, replicate it across all of your systems in one go, and then do the very detailed instrumentation within your application logic to figure out exactly why that database call took so long? Especially with services like Autoscale and Heat, you're going to be doing a lot more distributed calls and orchestration, and you want to figure out exactly what is making a call take so long, and where it stopped. Did it stop in the load balancers? Did it take a lot of time in Nova? Or maybe Keystone authentication is responding to the wrong request. Without tracing, you don't know. With tracing, you can figure out exactly what is taking so long. Or even better, you can visualize it by network connections.

Next, we're looking for ubiquitous usage. We feel that with technology like this, you can put it in front of any API you have, not just an OpenStack API, to make things consistent and to make sure services don't have to be rewritten to integrate with new layers as you upgrade your systems. But that's not to say it's all hunky-dory.
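The talk describes tracing as future work, so Repose does not ship the filter below; this is just the standard pattern that makes the middleware a logical place for tracing: stamp each request with an ID at the edge, reuse it if an upstream hop already stamped one, and have every hop log against the same ID. The header name `X-Trans-Id` and both function names are assumptions for the sketch.

```python
# Invented sketch of edge-based trace propagation, not a Repose feature.
import uuid

def tracing_filter(request):
    """Attach a trace ID if the edge sees the request first; reuse it if an
    upstream service (Heat, Autoscale, ...) already stamped one."""
    request["headers"].setdefault("X-Trans-Id", uuid.uuid4().hex)
    return request

def traced_call(request, hop_name, log):
    # Every hop logs against the same ID, so one search across all the
    # services' logs reconstructs the request's full path and timings.
    log.append((request["headers"]["X-Trans-Id"], hop_name))
    return request

log = []
req = tracing_filter({"headers": {}})
traced_call(req, "load-balancer", log)
traced_call(req, "nova-api", log)
traced_call(req, "keystone", log)

trace_ids = {entry[0] for entry in log}
print(len(trace_ids))   # 1 -- all three hops share one trace ID
```

Because the stamping happens in middleware rather than in each service, deploying it across the fleet is one rollout instead of one patch per code base.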
We know that Repose right now isn't super popular; we're the only ones using it, at Rackspace. And unfortunately, there are some reasons for that. It's Java. But there's a reason it's Java. I know I kept that kind of secret for a while; I think I had you going there.

The reason we chose it is, first of all, because we believe middleware doesn't need to be baked into the application layer. If you do bake it into the application layer, you run the risk that when you move to a different system, or when you want to integrate other APIs, you have to rewrite the whole thing. So you want something that's compact, modular, and easy to deploy and test. And if you talk to your operators, they'll tell you there's nothing easier to deploy than the JVM. At the same time, the concurrency model Java offers gives us an incredible amount of performance with really low latency. Even some of our competitors are using the JVM: one of the other alternatives to Repose is a Netflix Java application, well, written in Scala.

We made it incredibly easy to deploy Repose: there are Chef cookbooks, Puppet modules, Dockerfiles, RPMs, and Debian packages, and you can also just deploy the native JAR, or package it as a WAR file and throw it into GlassFish or Tomcat.

The performance we're able to achieve with Java is incredible. This is a production DFW Identity (Keystone) cluster we're running at Rackspace; this is from yesterday. You can see we're hovering around 60 milliseconds. Well, you can't really see that, and you can't see the effect Repose is having, which is down there at the bottom. So if we zoom in, you'll see this is around 42,000 requests a second, and we're adding less than one millisecond of latency to those requests. And this is running in front of most of the APIs at Rackspace.
Additionally, we made it incredibly easy for you to test the whole Repose proxy by using Deproxy, which has quite possibly one of the cutest logos we've ever invented: an octopus talking to itself. An octopus talking to itself is actually a very good analogy for what it does, because Deproxy wraps the proxy so you can orchestrate requests, watch them come in, see how they're transformed, and see what comes out the bottom. So you can have end-to-end integration tests that wrap around the proxy.

So at this point, we have a question: why is your middleware platform-specific? Why do you deeply integrate it into your systems when you could be using the same middleware everywhere and simplify? One of the core tenets of code is write it once, use it everywhere, right? And at the same time, you shouldn't have to worry about coffee grounds messing up your day, because this gives you the freedom to write the code just once and deploy it.

The next question is: what will your developers do with all that free time? Because they're not rewriting these services or writing their own rate limiter, they're able to develop more services, write more integrations, and build deeper integrations into more complicated systems. I can't give you an example of exactly how much time it saved us at Rackspace overall, but I do know that when I was the product manager on the Autoscale team, we saved an incredible amount of time by deploying this in a week rather than spending three months writing our own distributed rate limiter, which was absolutely necessary for Autoscale. And that's what Repose is designed to do: enable true service-oriented architectures. But we can't do it without you. We need you to contribute. We need you to let us know what problems you're having with your middleware.
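The Deproxy idea described above, wrapping the proxy with a controllable client on one side and a recording origin on the other, can be mocked up in-process. The real Deproxy speaks actual HTTP over sockets; this sketch is an invented stand-in (the `MockOrigin` class, `proxy_under_test` function, and the `Via` header behavior are all assumptions) that just shows why seeing a request on both sides of the middleware makes end-to-end assertions easy.

```python
# Invented in-process mock-up of the Deproxy testing pattern.

class MockOrigin:
    """Plays the downstream service and records every request it receives,
    so a test can inspect exactly what the middleware forwarded."""
    def __init__(self):
        self.received = []

    def __call__(self, request):
        self.received.append(request)
        return {"status": 200}

def proxy_under_test(request, origin):
    # The middleware behavior we want to verify: it stamps a header
    # before forwarding the request downstream.
    request["headers"]["Via"] = "repose"
    return origin(request)

origin = MockOrigin()
sent = {"headers": {"Accept": "application/json"}}
response = proxy_under_test(sent, origin)

# End-to-end assertions: the client's view and the origin's view.
assert response["status"] == 200
assert origin.received[0]["headers"]["Via"] == "repose"
print("transformation verified")
```

Because both ends are under the test's control, you can assert on the transformation itself rather than just on status codes.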
We need you to tell us what other services you want to integrate, or exactly how complicated your infrastructure is when you're trying to take OpenStack and pull it into your existing infrastructure. OpenStack is the key to opening clouds, and Repose wants to be the glue that holds it together. It may not actually make you coffee in the morning, but if you do make an API that makes you coffee in the morning, we'd really love to manage it. All right, that's Repose. My name's Felix Sargent. Thank you. Anyone have any questions? There's a mic right there in the middle.

Q: So you said that there's a distributed data store. What's in the data store? What kind of data does your middleware need?

A: Or, hey, do you want to answer that one? OK, so it's data used by the filters themselves in the middleware, such as cached tokens and rate limits and such. Sorry, I thought you were asking about the data model, and I was thinking I didn't know the answer to that one.

Q: Hi. My question: does it handle SSL?

A: Does it handle SSL? Yes, it handles SSL, because it uses Jetty. Right now we do most of the SSL termination at the load balancer, but it can handle SSL just fine.

Q: So it will terminate the client SSL connection, and then a new one will be started on the other side?

A: Yes, you can do that.

Q: Can I ask two questions? One about the transformations you say you can do: what does it use? How do you load up something to map one API version to another?

A: I believe you can use XSLTs and regexes to perform those translations. But you can also write a filter in code to do whatever other transform.

Q: And for the data store, if you have something where you have different keys between what you're getting in and what you need to give to the other side, is there any way to do a lookup and do a replacement in the body too?
A: The data store is pluggable and can be backed by various things; there's a built-in one that comes with it, and you can back it with other stores, although we really haven't experimented much with that. And because it's a hash-based key/value store, you give it a key to get at the data you want: you can retrieve data, you can put data, and you can delete it.

Q: And what kind of authentication engines do you support now?

A: We just use Keystone. Just for everyone's information, that was Jorge Williams, who's the lead architect for Repose. Are there any more questions? OK. Thank you very much.
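The pluggable key/value interface described in that last datastore answer, put, get, and delete against whatever backend the store is wired to, can be sketched as follows. Repose's actual API is Java and its configuration is XML, so the class and method names here are invented for illustration only.

```python
# Hedged sketch of a pluggable key/value datastore interface; not Repose's
# real Java API. Swapping the backend object changes where data lives.

class InMemoryBackend:
    """Simplest possible backend: a local dict."""
    def __init__(self):
        self.data = {}

class Datastore:
    """Thin hash-based key/value wrapper over a swappable backend."""
    def __init__(self, backend):
        self.backend = backend

    def put(self, key, value):
        self.backend.data[key] = value

    def get(self, key):
        return self.backend.data.get(key)

    def delete(self, key):
        self.backend.data.pop(key, None)

ds = Datastore(InMemoryBackend())
# Filters store their working data here, e.g. a cached auth token:
ds.put("token:abc", {"user": "felix", "roles": ["admin"]})
print(ds.get("token:abc")["user"])   # felix
ds.delete("token:abc")
print(ds.get("token:abc"))           # None
```

The pluggability is the point: a filter written against this interface works the same whether the backing store is in-process, distributed across the cluster, or an external system.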