Well, we have a lot of content to get through in 30 minutes, so we're going to get started. Thank you all for joining us. We're going to talk about some fairly recent enhancements in the Cloud Foundry routing tier that, as core platform features, provide new points of extension, enhance what Cloud Foundry services can be, and expand what kinds of workloads can be run on Cloud Foundry. My name is Shannon Cohen. I'm a product manager at Pivotal. I've been working on the Cloud Foundry project for about four years; I came from VMware. And this is the Cloud Foundry routing engineering team I have the honor of working with. We're an open source team; you can see we have contributors from Pivotal, IBM, and GE, and we've also had members of the team from EMC and other member companies.

What does the routing tier do? As an introduction before we get into the features, I'll give a brief summary. Basically, we're responsible for the components at the network edge of Cloud Foundry, making sure that requests for system components and applications running on the platform get to where they're meant to go. This is particularly tricky for applications running on Cloud Foundry since, among other things, as a container scheduler the platform may relocate application instances at any time. Historically, we've been responsible for a layer 7 HTTP router, which is dynamically updated. It features round-robin load balancing, though we recently received a PR from IBM for an additional load balancing algorithm, and it supports SSL termination, WebSockets, sticky sessions, and transparent retries.
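The round-robin load balancing just mentioned can be sketched in a few lines. This is a toy Python illustration of the policy only, not the Gorouter's actual (Go) implementation, and the backend addresses are made up:

```python
class RoundRobinPool:
    """Toy sketch of round-robin backend selection: each new request
    goes to the next backend in the list, wrapping around at the end."""

    def __init__(self, backends):
        self.backends = list(backends)
        self._next = 0

    def next_backend(self):
        backend = self.backends[self._next % len(self.backends)]
        self._next += 1
        return backend

# Two hypothetical app instances behind one route:
pool = RoundRobinPool(["10.0.0.1:8080", "10.0.0.2:8080"])
picks = [pool.next_backend() for _ in range(4)]
```

With two backends, four consecutive picks simply alternate between them, which is the behavior the router relies on to spread load evenly.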
But we identified a category of services which wasn't available to developers: things which intermediate application requests. Common use cases include rate limiting, authentication, and API management. And we found that, for operators, delivering these services in a one-off way was a burden. So our solution was to enable service providers to offer these kinds of services through the Cloud Foundry Marketplace. Route services are a new kind of marketplace service that enables developers to insert these intermediating services into their application request path. By not having to reinvent the wheel each time, this increases developer velocity and minimizes time to market.

This is a look at the developer user experience, the CLI. It leverages many familiar, existing workflows and introduces one new one. To discover route services in the Cloud Foundry Marketplace, you use the same command, cf marketplace. To create a service instance, you use the same command, cf create-service. The new command that's been introduced to support this class of services is cf bind-route-service. Rather than having an application interact with these services directly, we're dynamically updating the Cloud Foundry routing tier to proxy requests to them. The API object in Cloud Foundry that was most appropriate to associate with these services was the route, which is the address for one or more applications. So the workflow is to associate the service instance with the route. We also support user-provided service instances, so that developers can leverage these kinds of services even if they aren't in the marketplace. The management plane hasn't changed: requests to Cloud Foundry are still sent to the service broker, which takes care of translating a request into the provisioning of a service instance.
The one change here is that the service broker can optionally return the URL of the service instance. If the broker does return that URL, the Cloud Foundry routing tier is dynamically updated, as I mentioned. When the router does a lookup of the backends for a particular route, it identifies that there is a service-instance URL associated with the route and proxies those requests to the service instance. After some transformation occurs, the service instance sends the request back to the router, and we forward the request on to the application. Responses from the application travel back through the route service as well, so services have the opportunity to transform both the request and the response. I will say that requests between the route service and the Cloud Foundry routing tier are encrypted.

If the broker does not return a route service URL, there's a use case for services which may be preexisting, or forklifted in front of Cloud Foundry: services through which all requests for the platform's applications may travel. In that case the router is a transparent pass-through, but through the broker integration and exposing the service in the marketplace, there's still value in enabling developers to configure that service for their specific needs with familiar Cloud Foundry workflows.

With that, I'd like to invite Prashanta to give a demonstration of route services. Prashanta is from Apigee, and Apigee has been an early adopter of this integration.

Thank you, Shannon. Okay, hi, everyone. As Shannon said, I work with Apigee. I'm a principal architect with Apigee, and I help customers accelerate their digital transformation journey. So let me take a step back and talk about developers. As developers, all that we would like to do is write a piece of source code and have it run on the cloud. We do not worry too much about how it actually runs on the cloud.
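The proxying flow just described hinges on the router telling the route service where the request was originally headed; in the real integration this is carried in the X-CF-Forwarded-Url header. Below is a minimal, hypothetical Python sketch of the decision a route service makes; the `handle` helper and the uppercase "transformation" are invented purely for illustration:

```python
def forward_target(headers):
    """Return the URL the route service must forward the request back to.

    The router includes the original application URL in the
    X-CF-Forwarded-Url header; after doing its transformation, the
    route service sends the request back toward that URL (via the
    router), which then delivers it to the application.
    """
    url = headers.get("X-CF-Forwarded-Url")
    if url is None:
        raise ValueError("missing X-CF-Forwarded-Url: not a route-service request")
    return url

def handle(headers, body):
    """Toy request path: transform the body, report where to forward it."""
    target = forward_target(headers)
    # A real route service might rate-limit, authenticate, or rewrite here;
    # uppercasing is just a stand-in transformation.
    transformed = body.upper()
    return target, transformed

target, body = handle(
    {"X-CF-Forwarded-Url": "https://app.example.com/orders"}, "hello")
```

A request that arrives without the header is rejected, since the service has no way to know where to send it afterwards.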
But there are certain aspects that we do worry about: we want what we are running on the cloud to be easily consumable by other people. And one of the best ways to make it easily consumable is to expose it as APIs. The very fact that these are exposed outside means that you do need to take a little bit of extra care and precaution with these APIs. Apigee has an API management platform called Apigee Edge, and we provide out-of-the-box features like analytics, traffic management, security, and mediation, and you can even write your own custom extensions. What we focus on is making the developer experience great, not just for the developers of the APIs, but also for the consumers of those APIs.

So, in partnership with Cloud Foundry, we have built the Apigee Edge Service Broker. The Apigee Edge Service Broker is available as a route service on the Cloud Foundry platform. It can be used by developers and operators to easily plug API management features on top of existing applications that are running on Cloud Foundry. Shannon spoke about the developer experience, and I'm going to show this to you in a live demo. Shannon also spoke about the request flow; quickly, in the context of Apigee, what happens is that the request comes in to Cloud Foundry (you still make the request to Cloud Foundry), and the Gorouter determines that there is a service instance associated with the route. The request is proxied to the Apigee layer, where you can perform your API management tasks. Once those tasks are done, Apigee sends the request back to the Gorouter. The second time around, the Gorouter knows that it has to forward the request to the application running on Cloud Foundry; the application returns a response, and the response follows a similar path back out. Okay, so let's get into the demo. All right, I will use the command line interface of Cloud Foundry; oops, sorry, that helps.
Okay, I already have an application running on Cloud Foundry. Let me show you what it does: it returns a simple JSON response. What I can do now is look at the marketplace for available services. So I am looking at the marketplace, and I see that there is a service available called Apigee Edge. I can look specifically at that service; I see that there is a service plan associated with it, and it is currently available for free. What I can do now is create an instance of the service in my environment. Before that, let me quickly jump to the Apigee platform. This is the Apigee API management platform; right now there aren't any API proxies here, and this is where you will do all your API management tasks. Jumping back to the CLI, I am going to create an instance of the service. I specify the name of the service, Apigee Edge, and the service plan; I give the name of the service instance that I want to create; and I'm also passing in some parameters via a configuration file. This validates that the right credentials are present so that we can start using Apigee API management. The next thing I will do, now that I have a service instance created, is bind it to my route. I will do this with the bind-route-service command: I specify the domain and the route to which I want to bind, as well as the service instance that I just created. What this will do is create a proxy on Apigee. So when I refresh the screen, you will see that a new proxy has been created. Apigee has an easy-to-use trace tool with which I can see the request flowing into the system and the response going back out. Let me make a curl to this URL again. You see that I'm still hitting Cloud Foundry; I don't need to connect to Apigee directly, because Apigee sits as a service on the route, and the response has gone through Apigee this time. Right? Currently, the API proxy is empty; there isn't much happening in here.
So to show you what you can do with API management, I'm going to add a policy. Apigee supports a lot of policies for traffic management, security, and mediation and, as I said, even your own custom extensions. I am going to add a Spike Arrest policy to the route, and I'm going to keep the threshold low, about 10 requests per minute. I save this and start a trace again. Let's make a few more calls to this application. We get a response, we get a response, and then we start hitting the Spike Arrest policy. So you see here that API management is in effect, and when I go back into the trace, you see in the first case that the request has come into Apigee from the Gorouter. As Shannon explained, the API management activities are done in the Apigee layer, and the request is sent back to the Gorouter, which this time sends it on to the application running on Cloud Foundry; the response comes back out. In the case of a Spike Arrest violation (you must excuse the UI here; there are some things that are not being loaded), what you see is that the request is not forwarded to the application this time, because the violation happens at the API management layer. The response is immediately sent back to the client, of course through the Gorouter. So this is how easy it is to add API management features to your existing applications using route services. Jumping back into the presentation: in summary, what you saw is that you create a service instance using the CLI, and you bind the service instance to your existing route. Hopefully I also got you a little interested in how easy it is to do API management using Apigee.

It's okay, yeah. [Audience question:] So how has the developer experience been, developing the service broker itself? Cloud Foundry exposes a set of APIs that you need to implement in order to create your own services and service broker. APIs are meant to be easy to use; however, it's very easy to make them hard as well.
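The Spike Arrest behavior in the demo, refusing traffic above a per-minute threshold, can be sketched with a toy rate limiter. This is a hypothetical Python illustration of the idea, not Apigee's implementation; the injected clock function exists only to make the sketch deterministic:

```python
from collections import deque

class SpikeArrest:
    """Toy spike-arrest sketch: allow at most `per_minute` requests
    in any rolling 60-second window; reject the rest (a real gateway
    would answer with HTTP 429 without touching the backend app)."""

    def __init__(self, per_minute, clock):
        self.per_minute = per_minute
        self.clock = clock          # injected time source, in seconds
        self.hits = deque()         # timestamps of allowed requests

    def allow(self):
        now = self.clock()
        # Drop timestamps that have aged out of the 60-second window.
        while self.hits and now - self.hits[0] >= 60:
            self.hits.popleft()
        if len(self.hits) < self.per_minute:
            self.hits.append(now)
            return True
        return False

# Deterministic walk-through: 3 requests allowed, the 4th rejected,
# then allowed again once the window has passed.
t = [0.0]
limiter = SpikeArrest(per_minute=3, clock=lambda: t[0])
results = [limiter.allow() for _ in range(4)]   # all at t = 0
t[0] = 61.0                                     # a minute later
late = limiter.allow()
```

The key property mirrored from the demo is that the rejection happens entirely at the management layer; the application never sees the excess request.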
It gets hard if you do not have the right documentation and the right security, and do not use the right HTTP methods. And in this regard, Cloud Foundry has done a great job at exposing really easy-to-use APIs. It also helps that there is a good amount of testing and mock frameworks available. Also, there is a local development environment available, so you can do your development and testing locally without having to push to the cloud each time. Yeah, we can share the slides. There's a nice blog here on how to make your APIs hard to use; it's an anti-pattern, so don't do that. Of course, we're going to talk a lot more about API management, and a lot more features you can use with Apigee, in our session tomorrow at 2:30 p.m. All of this uses the route services provided by Cloud Foundry. For more information, to build your own services or to read about this, you can refer to the documentation at docs.cloudfoundry.org/services. Okay, that brings me to the end of my part. I'll hand it back to you. Thank you.

Thank you, Prashanta. Okay, we're going to switch gears and talk about TCP routing. So first, what was the opportunity? Traditionally, Cloud Foundry has been a great place to develop and operate HTTP applications, but there's a world of applications out there that depend on non-HTTP protocols. Wouldn't it be great if developers of those applications could run them on Cloud Foundry too, getting the same high developer velocity and minimized time to market? So our solution is support for TCP routing: support for applications running on Cloud Foundry that require non-HTTP TCP protocols. It supports many use cases, including the Internet of Things category of applications we're hearing so much about. We also believe it satisfies a use case wherein connections to applications need to be terminated as close to the application as possible. Here's a look at the user experience.
Management of TCP routing is much the same as for HTTP routing, with the exception that there's now a different kind of domain. When you look at the list of domains you can create a route for, you'll see that there's a domain of type tcp, and when you push your application or create a route, you use this domain. TCP routes are associated with ports, whereas HTTP routes are defined by hosts and paths, and for each TCP route a port is reserved. You can either request a specific port or ask the platform to allocate one for you. In the bottom command, you can see that I'm pushing an application, have specified a TCP domain, and have asked the platform to give me a random route; the platform generated a port for me and created a route from it.

A quick look at the architecture: we've introduced some new components that we expect will eventually provide a point of extension. There is already an emitter which supports HTTP routing; the function of the emitter is to watch for events on Diego. This is how we update the routing table when instances of applications are moved. The emitter sends those events to the routing API. The routing API is intended to eventually replace NATS as the source of persistence for the routing table, and the routers watch for changes in the routing table from the routing API, much as the Gorouter currently does over NATS.

[Audience question about whether the emitter is a separate component that could be swapped out.] The emitter is purpose-built for Diego, but you could use a client of your own and send your route registrations directly to the routing API to register TCP routes. That's right, and it works with the same heartbeating mechanism.

So, I mentioned how TCP routes are based on ports. There are a couple of layers of port readdressing that go on. You can see in this example that the client is sending a request to a domain and port.
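The heartbeating mechanism just mentioned can be sketched as a routing table whose entries expire unless re-registered. This is a hypothetical Python illustration of the idea; the TTL value, names, and addresses are invented, and this is not the routing API's actual code:

```python
class RoutingTable:
    """Sketch of TTL-based route registration: clients (like the
    emitter, or your own client) must periodically re-register their
    routes, or those routes are pruned as stale."""

    def __init__(self, ttl_seconds, clock):
        self.ttl = ttl_seconds
        self.clock = clock                  # injected time source
        self.entries = {}                   # route -> (backend, expiry)

    def register(self, route, backend):
        # Registering (or re-registering) refreshes the expiry: a heartbeat.
        self.entries[route] = (backend, self.clock() + self.ttl)

    def lookup(self, route):
        entry = self.entries.get(route)
        if entry is None:
            return None
        backend, expiry = entry
        if self.clock() >= expiry:          # heartbeat stopped: prune
            del self.entries[route]
            return None
        return backend

t = [0.0]
table = RoutingTable(ttl_seconds=120, clock=lambda: t[0])
table.register("tcp.example.com:60028", "10.0.16.5:61001")
fresh = table.lookup("tcp.example.com:60028")
t[0] = 200.0                                # no heartbeat for > TTL
stale = table.lookup("tcp.example.com:60028")
```

This expiry-on-silence design is what lets the routing tier converge automatically when Diego moves or stops an application instance: the old registration simply stops being refreshed.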
The domain is resolved to the load balancer, and the load balancer forwards the request to the routers. In the routing table, the router maps the route port to a backend port; that backend port here is an internal port assigned by Diego. And the application is listening on yet another port, currently 8080. Okay, with that, I'd like to invite... go ahead, please, Nick.

[Audience question:] So, constraints: obviously there's a limited number of ports on a machine, and I heard that on the ELBs, the Amazon ones, there's a very small range of ports. Yes, unfortunately, Amazon does have a limited range of ports you can open per ELB. So how do we manage? We do expect that, if there's great demand for ports, port capacity could be an issue, and we're looking for that feedback. We have some ideas about how to scale port capacity, including support for what we call router groups, which would be clusters of identically configured routers; you could potentially deploy multiple router groups and multiple load balancers. Currently, the routing API supports one, but based on feedback, we have good ideas about how this could scale. Another way would be to support shared ports using SNI. You could, but we have a little bit of work to do in the routing API to enable support for different port ranges on those load balancers for the same router group. I'd be happy to. Yeah, I would expect that you'd map the same port from the load balancer through the router. For the particulars of router groups and scaling ports, we should follow up afterwards, just so we have a chance to get to the TCP routing demo.

Okay, Chris is our lead engineer on the routing team; he's going to give a demo of TCP routing.

All right, thank you, Shannon. As Shannon said, I've been on the routing team for about a year now. I helped develop and implement TCP routing, and I'm happy to show it off to you all.
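The layers of port readdressing described above can be sketched as a chain of lookups: the route port the client dials maps to a host port on a Diego cell, which in turn maps into the container where the app listens on 8080. The table values below are invented for illustration:

```python
# Route port (what the client dials) -> Diego cell address and host port.
ROUTE_TABLE = {60028: ("10.0.16.5", 61001)}

# Per-cell host port -> the port the app listens on inside its container.
CONTAINER_MAPPINGS = {("10.0.16.5", 61001): 8080}

def resolve(route_port):
    """Follow the readdressing chain for one TCP route."""
    cell_ip, host_port = ROUTE_TABLE[route_port]
    container_port = CONTAINER_MAPPINGS[(cell_ip, host_port)]
    return cell_ip, host_port, container_port

hop = resolve(60028)
```

Each layer only knows about its neighbor, which is what lets Diego move an instance (changing the cell and host port) without the client-facing route port ever changing.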
So, a quick overview of the demo before we actually get into it. We will be using the MQTT protocol, which is a pretty common non-HTTP protocol for IoT use cases. In this particular demo, we'll use my handy-dandy smartphone to publish on a topic to an MQTT broker running on Cloud Foundry. Then we have another web app that is subscribed to that topic and will visualize the data I am sending. So if we go over here... make it big. How's that? Looks good. All right, the first thing we want to do is log in. The workflow I'm showing is the basic developer use case of creating an app and creating a TCP route. First thing we need to do: find our domains. We see we have two shared domains; one is of type tcp, so we can create TCP routes from it. An important thing to note with TCP routes is that, because we have limited ports, we have to divide them up, and you cannot create TCP routes if you don't have a certain quota set, so be aware of that. So let's push our MQTT broker. We are using a Docker image, which is pretty cool. Forgive my mistyping. Here we specify our TCP domain, and since we don't care what port we get, we just request a random route and the Cloud Controller will give us one that's free. This goes through the normal staging, pushing, starting lifecycle. We see here that we're binding port 60028 to our app, and once it's started, we get a URL with the port in it. If we take a look at cf routes, we also have that in the port field: 60028. If we look at apps, I have pre-pushed our web app, MQTT Web. So we go over there and load it. What we're going to do in here is put in our TCP domain and our port; I always forget what port it is. There, my web app is ready; connect. Now I do the same thing on my phone; I wish I could show it to you.
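Publishing on a topic and subscribing to it, as in this demo, relies on MQTT's topic filters, where `+` matches exactly one level and `#` matches all remaining levels. Here is a simplified Python sketch of that matching rule (it ignores some edge cases in the MQTT spec, such as `#` also matching its parent level, and the topic names are made up):

```python
def topic_matches(topic_filter, topic):
    """Simplified MQTT topic-filter matching: '+' matches exactly one
    level, '#' (last level only) matches everything that remains."""
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True                 # matches all remaining levels
        if i >= len(t_parts):
            return False                # topic ran out of levels
        if part != "+" and part != t_parts[i]:
            return False                # literal level mismatch
    return len(f_parts) == len(t_parts)

# A web app like the one in the demo might subscribe with wildcards
# to everything a phone publishes:
hits = [
    topic_matches("sensors/+/accel", "sensors/phone1/accel"),
    topic_matches("sensors/#", "sensors/phone1/accel/y"),
    topic_matches("sensors/+", "sensors/phone1/accel"),
]
```

The first two filters match; the third does not, because `+` covers only a single level.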
I need a volunteer from the audience, if you want to download the app real quick; I believe it's on Google Play. So I'm connected... ooh, interesting. Now that is unfortunate; live demos are going wrong. I'm actually not on the Wi-Fi, because I tested it through the Wi-Fi and it didn't work. All right, we'll try one more time. I don't know if it's the Wi-Fi that's the problem here.

[In answer to an audience question:] No, to be clear, this is not a service; this is just a route. It works the same way as HTTP routing, except instead of picking a host name, you have a port. That's how we distinguish between the two.

I do have a video; I don't know if we have time to pull it up, but... all right, let's see. Oh, great, sorry, I thought he was going to put his laptop on. Okay, how do I get this going here? That's my screen... oh, there we go. So let's take a look at this. Ooh, we'll skip back; that was the payoff right there. We're doing the same thing here, and now I'm connecting on my phone, and suddenly... gotta look at the port, I never remember the port. So there we go. Now this is me frantically waving my arm around to generate graphs. This particular chart is looking at the Y acceleration of the smartphone, so those big spikes are me waving really hard. That was all very quick and easy, but this is now using a TCP protocol on Cloud Foundry, specifically MQTT. And it doesn't have to be MQTT: you can run pretty much anything that sits on top of TCP, XMPP, databases, all that good stuff. How do I replay this thing? [In answer to an audience question:] You could indeed; I've actually talked to someone who had that idea and wanted to try it out. If you want more information about TCP routing, or new routing features in general, we have our routing release on GitHub in the incubator organization. Yeah, there's a demo.
So we're out of time, but I have one more slide just to give you an idea of what the routing team is working on. We've recently added support for Zipkin tracing: the router will now optionally initiate Zipkin traces, and it supports the B3 headers standard, so if your applications use a Zipkin library, they can propagate those tracing IDs. We also added a little feature enabling you to send a request to a specific app instance, using a particular HTTP header that takes as its value data available from the cf CLI. We're also currently working on performance benchmarking and improving performance; we've had some great talks from folks in the community who are doing some of the same analysis, and we really appreciate the help. The next big rock we're going to be working on is support for multiple app ports. This would enable an application to listen on more than one port, and it fulfills use cases like serving web traffic on one port and app-instance-specific data on another. We're also looking into support for certificate management; we hear that's a major pain point, and we're looking for your feedback. We're also interested in feedback about weighted routing, which would potentially enable a developer to specify that they want 10% of a route's traffic going to app A and 90% going to app B. And we believe that the routing API could eventually support a bring-your-own-router workflow: if you would like, even in the way-out future, you could choose from a selection of routers in the marketplace, say an F5 virtual appliance or Zuul, and have that used for your application only. We see a path towards that, and we're looking for your feedback on it. If you have feedback for us, come see us after the talk, or get in touch with us on Slack. Here's my email address; feel free to get in touch with me, and here are our links for documentation on the presentations you saw today. Thank you.
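The weighted-routing idea mentioned above, splitting a route's traffic 10/90 between two apps, could be sketched as weighted random selection over backends. This is a hypothetical Python illustration of one way such a feature might behave; the injected rng parameter exists only so the example is deterministic:

```python
def pick_backend(weighted_backends, rng):
    """Pick a backend according to its traffic weight.

    weighted_backends: list of (backend, weight) pairs whose weights
    sum to 1.0. rng: a zero-argument function returning a float in
    [0, 1); in real use this would be random.random.
    """
    r = rng()
    cumulative = 0.0
    for backend, weight in weighted_backends:
        cumulative += weight
        if r < cumulative:
            return backend
    return weighted_backends[-1][0]     # guard against float rounding

# The 10/90 split from the talk, with deterministic draws:
split = [("app-a", 0.10), ("app-b", 0.90)]
low = pick_backend(split, rng=lambda: 0.05)    # lands in app-a's 10% slice
high = pick_backend(split, rng=lambda: 0.50)   # lands in app-b's 90% slice
```

Over many requests with a real random source, roughly one in ten would reach app A, which is the canary-style rollout the weighted-routing proposal is after.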