Hi, I'm Ed Anuff from Apigee, and this time I remembered to introduce myself. I apparently forgot to do that on Friday's, excuse me, on Monday's lightning talk. So we've got a bunch of folks up here, and there's a reason for that: what we're going to be talking about is the new route services capability in Cloud Foundry, and that's been a collaborative effort. It's involved a lot of the work that Pivotal has done over the last year, and Apigee has been privileged to help collaborate on that, bring a lot of API management use cases to the table, and do the first implementation of API management integrated into Cloud Foundry by way of route services. So when we planned this session, we wanted to give you an overview of what APIs and API management within Cloud Foundry are all about, show you how it works, and then cover some real-world use cases based on what the team from GE Digital has done with it. As a consequence, we've got a content-packed session that we're going to try to race through. Most of you know what an API is. It's a contract; it's how software talks to other software. We all know, aspirationally, why we use APIs: they should be simple and straightforward to use. Specifically in this context, we're talking about web APIs. An interesting statistic I saw was that over half of developers today are actually producing web APIs that other developers are integrating, so web APIs are probably something most of you in this room are actively coding software for. In terms of what goes into modern APIs, you can debate this day in and day out; over at Apigee, we do that quite a bit. Most of our customers are planning APIs, API initiatives and programs in some form, and designing their APIs. Beyond the basics, most people today tend to agree that they should be JSON-based.
If you're still building XML APIs in 2016, well, you probably have a good reason for it, but I don't know what that is; it's probably because somebody made you do it. Generally, people are converging around OAuth and so forth. So what's the challenge? A lot of what we talk about at this conference is about scaling: scaling the execution of your software, of your workloads, and so on. But there's another challenge in scaling, which is scaling the adoption of your software, being able to connect it to your users in the various locations they might be operating from. Perhaps that's mobile applications running on their phones, or it might be the businesses your company partners with, where you have to interchange data and so forth. As you look to solve that, APIs are the mechanism by which you do it. So this is all a setup for the fact that API development is probably a big part of what you're building within Cloud Foundry. When we started talking to Pivotal about this, really about 18 months ago, we said there needs to be some way of deeply embedding the API management capabilities that make this all possible, in an elegant way, into Cloud Foundry, as part of your application life cycle. And route services emerged out of those conversations as a way to do that seamless integration. So with that in mind, let me turn things over to Richard. Thanks, Ed. I'm Richard Seroter. I work at Pivotal; I'm a recovering vice president from somewhere else. I want to talk to you today a little bit about route services and the importance of those, and what we've built around them specifically to help you actually inject something into the route path. Now, I'll make a quick confession. I occasionally drive to work in Seattle, and you zone out for a few minutes, right?
You're listening to music, you're listening to a podcast. I don't know, I could have run over a duck, missed a tornado. You're completely in the zone, and you get to work and realize something just happened, because the route kind of gets boring over time. What I found interesting with route services is that there's a lot that goes on in a route, and you take that stuff for granted. What we do with route services is actually make that commute a little more interesting. With route services, we're making it possible to inject something into that request path. So it's not just traffic coming into the system; you're able to do some things with it along the way. We'll talk about some examples and obviously see a really nice demonstration of that. But this also gives a pretty cool opportunity for marketplace services. Not just things you may buy from Apigee or others, but even user-defined things. Make up your own service, introduce it within your organization, and add it as a broker so that anybody else can consume your particular service; we'll talk about some examples. The whole point of all of this is that developers need to be going faster. I don't want to open up a ticket and beg the poor API management administrator to add a policy for me and deploy my service. That is not making me go faster; that's making me frustrated. So instead, the whole idea should be: how am I helping developers self-service their way into more realistic, high-functioning applications? If I'm not meeting that goal, then we're not doing the right things. So we take a step back and talk about a route. Again, a route can be very unsexy; it just kind of magically happens. But a lot of stuff happens on the route. This is how things come in and get to my application, and in these very dynamic, container-driven solutions, that is not trivial. Containers are coming and going.
I'm scaling instantaneously, and traffic is magically finding its way to the right place. That's awesome stuff; there's a lot of smarts happening underneath the covers. And that's why you need these sorts of dynamic capabilities: routes are changing constantly, because what's underneath the covers changes. So how do I point to the right things? That matters a lot. And developers have some control. You can use your CF commands to create routes, bind routes to a service, and do wildcard routes. You can do a number of pretty cool things that, frankly, I didn't even realize until a few weeks ago. So there's a lot you can do with a particular route. The key, though, is: how do I do something even more interesting with that? When you look at route services, there are three core use cases we've thought about. First, performance and reliability. How am I doing things around caching? How can I inject caching into a route and not have things go all the way down to the service? Pull some things from my cache, increase the performance of my app, even help reliability: if downstream I'm getting pummeled, my cache can get pummeled instead. Same with processing data. Imagine adding a route service that throws everything into Amazon Kinesis because I'm doing streaming analytics of the requests that come in. Or add something for more dynamic tracing: add a route service for a few minutes because you're trying to instrument a service or trace it, and then take it back off. I didn't even have to touch my service. So you're getting these sorts of cross-cutting capabilities across your services without actually touching the service itself, which is pretty great. And same with things like security.
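The caching idea above hinges on one mechanic of route services: Gorouter forwards each request to the bound route service with the original destination in the `X-CF-Forwarded-Url` header, and the service sends the request on to that URL when it's done. Here is a minimal sketch of that decision logic; the header name is the real Cloud Foundry one, but the in-memory cache and the `fetch` callable are illustrative stand-ins, not a production implementation.

```python
# Sketch of the core logic of a caching route service.
# Cloud Foundry's Gorouter adds X-CF-Forwarded-Url to every request it
# sends through a bound route service; the service forwards there.

def forward_url(headers):
    """Return the original destination URL set by Gorouter."""
    url = headers.get("X-CF-Forwarded-Url")
    if url is None:
        raise ValueError("not a route-service request")
    return url

class CachingRouteService:
    def __init__(self, fetch):
        self.fetch = fetch   # callable that performs the real upstream request
        self.cache = {}      # illustrative in-memory cache

    def handle(self, method, headers):
        url = forward_url(headers)
        if method != "GET":
            return self.fetch(method, url)   # only cache idempotent GETs
        if url not in self.cache:
            self.cache[url] = self.fetch(method, url)
        return self.cache[url]
```

A second GET to the same forwarded URL is served from the cache, so the downstream app never sees it; that is the "my cache gets pummeled instead" effect Richard describes.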
When I'm using something like API management, how am I putting authorization in front of some of these services without asking my developers to keep doing their own authorization scheme? I have not met many developers, including myself, who are good at doing authorization or authentication. I should be outsourcing that to some sort of cross-cutting, aspect-oriented thing in my system. That's where API management is really helpful. If we look at the operator experience, it's really pretty straightforward: add something as a service broker and create the service plans. As we think about things like API management again, I could have a plan that says, here's a plan for authorization, here's a plan for caching, here's a plan for something else. And as a developer, I'm just binding my service to that particular route, and things are magically going to happen for everything in that request path. The developer experience, obviously, is what really gets exciting: once the operator sets this up, the developer's good to go. CF marketplace, there are all my services, whether it's MySQL or Rabbit or Apigee; these are all route services I can consume. And then I'm just doing create-user-provided-service, passing in the service I want to use and the route I'm going to refer to, and then binding it. With that, I'm done and everything's instantaneously hooked up. There's no additional work I have to do. And I can do this in a couple of different models. I can do things very straightforwardly in a simple model where I put an appliance, even a virtual appliance, in front of Cloud Foundry, and in that case, everything goes through. And you may want that, where every bit of traffic goes through that particular route service; traffic the service has nothing to do with just flows through.
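The developer flow just described maps to a handful of cf CLI commands. The commands themselves (`create-user-provided-service` with `-r`, `bind-route-service`) are the real CLI; the service names, domain, and hostname below are made up for illustration:

```shell
# Register your own route service by URL (user-provided):
cf create-user-provided-service my-filter -r https://filter.example.com

# Bind it to an app's route; traffic to that route now flows
# through the route service before reaching the app:
cf bind-route-service example.com my-filter --hostname my-app

# Or use a brokered route service from the marketplace instead:
cf marketplace
cf create-service some-route-service standard my-instance
cf bind-route-service example.com my-instance --hostname my-app
```

Unbinding (`cf unbind-route-service`) takes the service back out of the path without touching the app, which is what makes the "add tracing for a few minutes, then take it off" use case practical.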
Users, customers, developers can still bind their routes to part of this, but it gives a very straightforward solution. Or you can be more dynamic, where traffic is inspected once it comes in: if it matches the route, it goes out through the route service, which does whatever it does, and then comes back through. That can be great if you don't want all traffic to go through. You have a number of different options, and what's really great too is that you can do user-defined versions of this. You do not have to buy commercial software, however awesome it is. I can do whatever I'd like here and hook this up to my in-house system, my in-house instrumentation, my preferred caching tool, or whatever. It's really great that I don't have to wait to see what someone ships; I can build it myself. (Make sure you're talking into the right mic.) Exactly. Yeah, I don't think this one will pick up audio. So we've talked about API management a little bit in terms of what we think one of the great uses for route services is. So what is API management? Well, when you look at the challenge of making APIs available, there's a whole set of what we call non-functional requirements that are part of APIs. A lot of people say, I know how to build an API, I understand RESTful API design and so on; what exactly is API management, and why do I need it? And it really comes down to three basic things. The first is being able to have consistent security across all your APIs. We see this quite a bit at Apigee: basically, inconsistent authentication strategies and so on across APIs. It doesn't matter that 90% of your APIs are secure.
It's that one other API that somebody spun up, even for something completely benign, maybe just autocomplete, that people suddenly find as an attack vector. The second is having analytics visibility. This is both operational analytics, in terms of how your APIs are performing, and, oftentimes, as Richard was talking about, extracting a whole bunch of data out of the API stream for doing business analytics. And then finally, it's enabling your ecosystem of developers: having every developer able to publish their APIs into a standard location that other developers can go to for information and documentation, and being able to learn how to use these APIs with interactive documentation that makes it very easy to try out API calls and so on. So this is a lot of what goes into having a successful API strategy. A quick aside: at Apigee, we've done this for a lot of companies, and many of the folks in the audience here are from organizations that are using Apigee for doing this stuff. So the integration with Cloud Foundry really is the textbook example of how route services can work. Basically, we expose Apigee's API management platform as a tile within the Cloud Foundry marketplace. If you're not using Pivotal Cloud Foundry, you can do this via a manual install, and there are instructions on GitHub for how to do it. But once you do that, it's going to bind a route that causes your inbound API traffic to go into Apigee, where we're able to do things like apply policies for security purposes or data transformation purposes. I said earlier it would be great if everybody built their APIs using JSON and so on, but there are a lot of legacy APIs, and a lot of legacy application clients that were designed to communicate in older protocols and so on.
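As a toy illustration of that kind of mediation (this is not Apigee's actual policy engine, just the shape of the idea): a proxy-level policy that takes a legacy XML payload and presents it to a modern consumer as JSON. The field names here are invented for the example.

```python
import json
import xml.etree.ElementTree as ET

def xml_to_json(xml_payload: str) -> str:
    """Flatten a simple one-level XML document into a JSON object.
    Real API management platforms do this with configurable
    transformation policies; this is the idea in miniature."""
    root = ET.fromstring(xml_payload)
    return json.dumps({child.tag: child.text for child in root})

# A hypothetical legacy response body:
legacy = "<user><name>ada</name><email>ada@example.com</email></user>"
print(xml_to_json(legacy))  # {"name": "ada", "email": "ada@example.com"}
```

Because the transformation happens in the request path, neither the legacy backend nor the new client has to change; each keeps speaking the format it was built for.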
Part of what API management also lets you do is transform those requests from the way they were originally designed to the way you want to support them now. And the nice thing about this is it's all really designed to be natively integrated into your workflow. So what you're going to see in a minute is Carlos stepping up to show you a demonstration where you can see that this is really seamless. The goal is to make sure that every app you build, every API you expose, basically has all of these configuration options set up ahead of time, so it's not one of those things where some developer says, oh, I forgot to set up the authentication mechanism. Oops. You don't want that to be happening. So with that in mind, here you go, Carlos. All right, thanks. Can we get the... There we go. All right, awesome. I only have two hands, so I can't hold the mic and do this at the same time. This is the part where we really only need a shell script, but I'm up here to add a little bit of comic relief as we do it. Let's look at exactly what we talked about in terms of the developer experience. We'll hit the marketplace and see that there's an Apigee Edge service there. So what I'll do is simply create an instance of that, and I'm passing in a little bit of configuration so it knows who I am and what part of Apigee I'm going to talk to. Now I've got a service available that I can use, demo-edge right there. I have an app out here running, which is a simple little API; it's kind of similar to the old finger protocol, if you like, where you can hit a URL and receive a little bit of information. So now, as a developer, I've got this up and running, but I haven't really thought about authentication or anything like that. The next step is to bind the route service, and I simply pass it in; if you saw the demo yesterday, this is the same stuff.
The result's going to be slightly different, but we'll pass this in. It's going to warn me that it might alter what happens with the traffic, which is kind of what I expect. So that's it. Now I can still hit this endpoint; the same thing happens. It doesn't appear to be any different. But if I come over to the Apigee side of the world and take a look at what I have, I have a new proxy out here that's been created automatically. And if I look at what this has in it, I can see that there are actually some policies that have already been defined. This is a little different from what we did yesterday: we've exposed an OpenAPI specification from that app, and the service broker is smart enough to look for that and for any definitions about what I have in terms of routes or paths in there. So I can see that I've got a cache on the GET side, and I've actually got some security on the POST side, so I'm requiring some sort of identification on that now. If I turn on trace, I can see what's happening with this traffic. And if I hit this fast enough, I should see... there. Now I've got a little bit of traffic protection on it. From a developer experience, it's super simple to do. From the consumer side, I haven't changed the way the consumer interacts with it at all; I still share the same URL with them and all that. And I get the benefit of understanding how people are using things, and seeing that we're hitting a cache or populating a cache. All of that great stuff comes along for free. And that's it. See, it could have been a shell script. So, good morning. Lothar and I will spend probably the next 10 minutes or so sharing with you the API management use cases with Predix at GE Digital. Just a little background about myself: my name is Kevin Yan. I'm a software architect working on the Predix platform team at GE Digital. Please show your hands: have you heard about Predix? Oh, great.
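The broker behavior Carlos describes, reading the app's OpenAPI document and choosing policies per path and verb, could be sketched roughly like this. The policy names and the spec snippet are invented for illustration; the real rules live inside the Apigee service broker.

```python
# Hypothetical sketch: derive proxy policies from an OpenAPI document.
# Policy names are illustrative, not Apigee's actual policy types.

def policies_from_spec(spec):
    policies = []
    for path, verbs in spec.get("paths", {}).items():
        for verb in verbs:
            if verb == "get":
                # safe, idempotent reads get a response cache
                policies.append((path, verb, "response-cache"))
            else:
                # mutating verbs get an API-key check instead
                policies.append((path, verb, "verify-api-key"))
    return policies

# Minimal spec like the one the broker might discover on the app:
spec = {"paths": {"/users": {"get": {}, "post": {}}}}
```

Run against that spec, the GET on `/users` picks up caching and the POST picks up a credential check, which matches what the demo proxy shows.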
Thanks. So there was a talk yesterday by Mark Thomas Smith; he did a lightning talk about Predix. I just want to give you a very quick overview. Predix is a platform as a service built on Cloud Foundry. It provides a set of services and practices to help people develop industrial IoT applications in the industrial space. If you look at the industrial space, end-to-end security is very important for us. We want a very secure way to connect to an industrial machine in the field, and we need to provide a very secure way for consumption across different types of systems of engagement; it doesn't matter whether it's a mobile app or an enterprise app. From an end-to-end perspective, they have to be very secure. You may have heard about the smart machines, the intelligent machines, deployed in the field in the industrial space. What I mean by an intelligent machine is, for example, the wind turbine case there. It has a collection of sensors, and the sensors can measure vibration or torque or speed, even noise, so you constantly collect high-speed data. What Predix offers is a secure connection directly into the machine so we can collect that high-frequency data; we call this time series data. Close to the machine, on the edge side, we provide edge analytics, so we can respond very quickly if we see any anomaly, to inform an operator or even sometimes do real-time control before a big disaster happens. We also collect a massive amount of data on the cloud side, not just for a single asset like a single wind turbine; we collect information for a fleet of assets. This is a massive amount, so we apply very advanced analytics, in terms of machine learning techniques, to be able to predict machine failure.
So imagine if we can predict that a certain machine is going to fail, say, in several days: that would drastically reduce unplanned downtime, and that's great value for our customers. Also on the consumption side, we have a set of online services to help our customers build visualization applications, different sets of applications, so they can consume the insights from the analytics as well. As you can see, Predix is evolving very rapidly. It's growing fast, and we constantly have new services coming to Predix. So API management has become very important for us, in terms of how we consume the APIs and also how we create a friendly environment for our developer community, so they can either develop new services or consume new services for their apps. I'm going to pause here, and I'd like to let Lothar continue. Hi, Kevin, thank you. Just for introduction, my name is Lothar Schubert. I'm also with GE Digital, on Predix, specifically focused on the developer community and developer relations team. Just to build on what Kevin said: what you see here is the Predix website, and as you indicated, most of you are familiar with Predix. Predix actually just went into general availability this year; it was really only launched in February for external GA, but now you can go there, create your account, get a free trial, all the things that go with a regular platform. As it says there, it's built on Cloud Foundry. It was really, really big for us that we built it on Cloud Foundry. It gives us lots of things. For example, we deploy Predix in multiple data centers around the world, and certainly the multi-cloud approach that Cloud Foundry provides is a huge benefit for us from an operations and deployment perspective.
Also, the companies that we deal with in the industrial space are, as you can imagine, often a pretty conservative bunch: power generation, aviation, transportation, healthcare, for good reasons, right? Security, compliance mandates, all those kinds of things. But with Predix now based on Cloud Foundry, we are actually able to bring the whole notion of CI/CD and DevOps into those kinds of industries, and Cloud Foundry enables this really well. So it also became a real change agent for, if you want, cultural transformation in the way software is being built in those industries. So that's what Predix is. Before the GA, we had been using it internally for a few years, and there are roughly three reasons why we built it, compared to pure Cloud Foundry. One, it's really cloud and edge. As Kevin pointed out, there's lots of edge processing happening on industrial types of equipment. There's a need for real-time processing and for storage, because where our machines operate we might not necessarily have connectivity; it could be on an oil rig, or in a mining operation, or things such as this. So we need processing on the edge as well; there's a whole notion of hybrid cloud in a continuous way, plus secure connectivity, that we needed to build in there. Second is meeting the specific demands of our customers from a security perspective, as Kevin pointed out, but also from compliance, in terms of data storage and how they manage all those things, and, importantly, managing data at scale as well. The third, which brings us really to the topic, is the services that we provide with Predix. There's a whole bunch of services built on Predix which are delivered via our services catalog. So I'm pretty excited. You'll find the catalog of services on there, and it's pretty straightforward, right? You go into the service.
Once you log in, you provision, essentially subscribe to the service, and then you can bind to it: you push your app in your org and space, bind to the service, and off you go. The majority of those services right now are built by GE, by the Predix team or by GE businesses, and you'll find some pretty basic stuff there, like a SQL database, Postgres, but you'll also find services built specifically for our use cases. Master data management, for example, for industrial assets, which is an asset data management service, or time series management, which is really important. Also things such as traffic management for intelligent cities and intelligent environments. Each of these services, of course, is exposed through APIs, and right now there are, I don't know, maybe 30 or 40 services there, but if you look at the APIs, it's easily hundreds of APIs, and more by the end of the year. And importantly, those services will increasingly be delivered by partners as well, so there's a whole ecosystem evolving around this. Right now you'll find, for example, services by Pitney Bowes; they were a customer, and now they've decided they want to deploy their own services into the catalog as well. Now, all of this, of course, is really good because of, excuse me, the kind of growth that we have seen. So in February we went into GA, and we'd been using Predix internally for quite some time in different areas: aviation, for example, in remote monitoring and diagnostics of equipment and machines, or managing and running wind farms to get more output. It's actually pretty cool what you can do just with software updates: you get a few percent more efficiency out of your wind farm, and if you think about all the power generated there, just one percent more efficiency in energy creation can power countries such as Canada for free. So it's pretty huge what you can do there.
And we saw tremendous growth here, which is good, of course: 500 percent growth in the Predix developer community over just the last few months since the launch. And we expect this to grow pretty steadily with the community features that we've put in there as well: forums, discussion groups, the ability to share your own code. We're also going to run our own developer conference, I had to show the picture, at the end of July: 2,500 people, just for Predix. Now, what comes with that growth? Kevin will tell us about that. Thank you, Lothar. So I think everybody has experience with managing a massive volume of APIs, and also a massive volume of API invocations. For our objectives, we want to maintain a very consistent user experience for our customers. It doesn't matter whether it's a million calls a day or even a billion calls a day: we have to maintain a very consistent SLA so that customers don't see performance degradation because of a high volume of requests. Secondly, it's fast onboarding. As Lothar mentioned, we have a lot of services coming into Predix, and we have to manage not just new APIs but also new services: API versioning, because services change their APIs, and new API introduction. So we need a very nice framework, like Swagger, so our customers find it very easy to browse the APIs and consume them. And entitlement is a big piece as well: how we manage entitlements in terms of tracking them, being able to grant them, and also to revoke them. Entitlement is basically about subscription. Somebody could subscribe and say, I want to subscribe to Predix to use maybe a thousand assets per month, something like that. That's the entitlement. People have different needs during different time periods; they might want to grow their assets, right? So we have to constantly manage the entitlements here. Multi-tenancy, of course; I think everybody deals with the multi-tenancy problem.
We want a very efficient way to use the same infrastructure to support as many tenants as possible, and the isolation of API resources is an important part of multi-tenancy. And lastly, API analytics are important, because we're handling a large volume of APIs. How do we know who's using what? What's trending? What's the hot API? Somebody might be using certain APIs at three o'clock in the morning; we want to know about that too. And underlying everything, again, we have to maintain end-to-end security. That's very important for us. So this slide shows you the high-level view. I've overlaid the earlier end-to-end architecture to show you where we deploy our API management. On the left-hand side is basically our primary machine connection. There's a high volume of data coming into Predix, either in a streaming manner or in batch uploads, so we have an API gateway there to check that the data is coming from trusted sources. When data comes in, we establish identity right away. Then, when data reaches the cloud environment, we have a very nice data governance policy: we make sure data access rights are enforced, so a given person has the right to use the data. And on the consumption side, on your right-hand side, we also have a very secure way for people to use our services, as shown in the catalog earlier. Every service is based on token-based authentication and security, as well as role-based access control. There are several key things here: for example, service virtualization. We want to separate the API from the actual service provider, so our consumers can always deal with a very consistent API without worrying about the complexity in the back end.
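The entitlement model described here, subscriptions with usage limits that are tracked and can be granted or revoked, and that warn the subscriber before cutting anything off, might be sketched like this. The 80% warning threshold matches the "yellow zone" idea, but the class and its return values are purely illustrative.

```python
# Illustrative sketch of soft quota enforcement for a subscription.

class Entitlement:
    def __init__(self, limit, warn_at=0.8):
        self.limit = limit        # e.g. API calls allowed per month
        self.warn_at = warn_at    # warn when 80% of the limit is used
        self.used = 0

    def record_call(self):
        """Count one call; return 'ok', 'warn', or 'over'."""
        self.used += 1
        if self.used > self.limit:
            # don't necessarily cut off; flag for billing or notification
            return "over"
        if self.used >= self.warn_at * self.limit:
            return "warn"
        return "ok"
```

The point of the soft threshold is exactly what Kevin describes next: a friendly reminder near the limit, rather than a hard failure the consumer discovers in production.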
Then there's authentication, as I mentioned earlier, and plan enforcement is an interesting topic. We're constantly metering API usage, but plan enforcement doesn't mean we have to cut off the invocation. We can provide a way to tell our customers: when you reach a certain threshold, maybe a yellow zone at about 80% of usage, we start sending a friendly reminder saying, hey, you're close to your subscription limit, maybe you should do something about it. And since we're handling a large volume, content attack prevention is important, to be able to arrest high spikes in API requests so we can do something about them. Then there's SLA management: we want to make sure we have consistent response times. And lastly, as always: high availability and scale, because we want to keep growing the API service to a large volume. So that pretty much concludes our use cases. Thank you. Thank you. That was a great summary of how GE is creating an API-run business. So to summarize, we showed you today why API management is relevant to what you're doing with Cloud Foundry; how route services are a really key way of bringing these systems together, and probably something that, if you haven't looked deeply into them, you really should, or at least, hopefully after seeing this presentation, they're high on your list of things to investigate when you get back to your offices; and finally, how GE, as an API-run business, is able to build on top of this to have a unified platform for delivering these services, and managing, monitoring, and securing them. I think we have time for one or two questions, and afterwards all of us are available; you can find us in the hall if you've got additional questions. Yes, there's a microphone right there, I think, if you want. Yeah, the question was about what kinds of traffic route services support. Today it's through HTTP routing.
I actually just checked with the product team this morning to ask about container-to-container networking and how that might play with this in the future, and we are thinking of ways to also let that get injected into the path if you wanted to. So, today it's HTTP traffic. In the future, though, as we do more container-to-container things, we don't want to lose this or create too many opportunities to bypass it. All right, thanks everybody.