Hey, everyone. I'm Kenny Bastani. This is Prithpal Bhogill. We're going to be talking today about managing the complexity of microservice deployments using Cloud Foundry and Apigee. So I'll go ahead and introduce myself first. Again, I'm Kenny Bastani. I'm a Spring developer advocate at Pivotal. I also wrote a book called Cloud Native Java, so I thought I'd plug that now. I'll be signing these, I guess, reduced free-chapter versions after this session. So if you're interested, come by the Pivotal booth. And yeah, go ahead. Hey, guys. Prithpal Bhogill. I'm part of the Apigee product team, part of Google now. So super excited to be here. Today, we're going to be talking about microservices, and then we'll get into how API management plays into that space. And hopefully, it should be an exciting session. So we're looking forward to it. Awesome. Thanks, Prithpal. So I'm going to start out talking about the history of why, really, we're doing microservices today. And hopefully, that leads into this discussion of why an API gateway is so important. All right. So here's the agenda. We're going to talk about monolith to microservices first. And I think it's important to note that not all architectures that aren't microservices are monoliths. We also have things like SOA; we have services architectures. So, not just monoliths. And there are some companies who are trying to move from a services architecture that's not microservices to microservices. So I'll be talking about that a little bit. And then, why API management? Prithpal is going to dive into the solution with the Apigee API gateway and Cloud Foundry. And then we'll wrap it up with some key takeaways. We have 30 minutes and a lot of content, so I'm going to go a little bit fast. All right. So we started with this. We started with the monolithic application. And you're all familiar with this today. Now, if you're working with a monolith today, you'll notice a lot of the pains that come with this.
And it has to do with shared ownership: sharing ownership over infrastructure, sharing ownership over the source code. And that really causes an issue. It causes us to go slow. So here in this example, I have an Apache Tomcat server. In the center of that, I have a WAR deployment. Now, inside that WAR deployment, I have separate components, the modules of the application. And let's say this is a single source code base, and maybe it's a million lines of code. Now, the solution is very large in terms of code, and so it's going to be very difficult for developers to work on together. But more than that, we're sharing a release schedule. So I call this taking public transportation to production, but the bus comes four times a year. And that's the primary issue with a monolithic application. So first of all, it's going to slow our velocity getting into production. If we have to share that release schedule, share that source code, share that infrastructure, it's really going to slow us down because we have to coordinate more. If something goes wrong, if, let's say, one developer changes a line of code and that brings the entire application down in production, that's a big deal, and it's going to slow us down because of it. It also takes way too long to ramp up new engineers. With microservices, it's ideal to be able to add an engineer to a project, have them reason about that source code within a day, and be able to work with it, as opposed to a million lines of code, where it's going to take much longer for a developer to really understand what that code does before they feel safe making changes. But I want to hit on this: all teams share the same infrastructure. We have one production environment where our monolithic application is being run by a single application server, or multiple. But the idea is we have one way to get to production. So if I change one single line of code, I have to deploy everything or nothing at all.
So that's the next point, and this is the main issue with monoliths: you deploy everything at once or nothing at all. And so on the way to microservices, we moved to SOA. And we got a little bit better in terms of infrastructure. So I have three applications here: an accounting service, an inventory service, and a shipping service. We've split up that infrastructure, that single monolithic application, into three separate applications. But the problem over time with SOA is that, there at the bottom, we're still sharing libraries. We're sharing the objects in our domain. And these different teams working on these three separate applications may need to change one of those domain objects to support their functionality. So if I change the customer record or the accounts record, then I just deploy the accounting service, and I get that benefit of being able to deploy independently. But what happens if I make a change to the address record? Now I have to deploy all three of these applications at the same time in a coordinated release. And so over time, SOA makes it harder than the monolith. And so now we've arrived at microservices. The idea is that small teams are organized around business capabilities. Most of you know what microservices are today, but the idea really is velocity, and not sharing anything. So we're going to move to something called a share-nothing architecture. That is, we're not going to share our libraries either. What we're going to do is create an economy of applications that produce APIs and consume APIs. So we'll have many small services organized around business capabilities, and they'll expose their functionality to the rest of the applications via a REST API. And the key thing here is that these teams need to be able to build and run their applications. There are all these things that you need to really support a practice of building microservices.
And this is the most important one, with implications for Cloud Foundry and cloud platforms: you have self-service, on-demand infrastructure. You give your developers everything that they need to be able to build and run that application. So here's an example of a microservice architecture that's cloud native. I put this together for the Cloud Native Java book. This is an online store. It has 10 microservices. In that middle layer, I have my platform services. So when I talk about giving developers everything that they need to build their application, that middle layer is that marketplace. These are the platform services that developers can provision on demand and plug into their application without having to implement them over and over and over again in each application. And the one that we're going to be talking about today is the API gateway in the center. What this is going to do is hide all the complexity down below with my domain services so that the front-end developers don't have to worry about it. If you have 500 microservices and they all have APIs, we don't want the front-end developers to have to see that complexity. We want them to see a single contract, an API contract, of that domain. And so we can use this API gateway to reverse proxy into these back-end services. From the front-end application, if I want to use the catalog service, I go to forward slash catalog at the API gateway, or the account service. And so I have a way to combine all of these APIs from the separate services into one API contract. Now there are two popular ways of going from a monolith to microservices. One of the ways is splitting the monolith. And that becomes very painful over time because you have more than just the application. Most monoliths are on a large shared database. Actually, almost all of them are.
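That path-based routing through the gateway, forward slash catalog goes to the catalog service, can be sketched as a tiny routing table. This is a minimal illustration, not any particular gateway's implementation; the service names and upstream URLs are made up:

```python
# Minimal sketch of an API gateway's path-based routing table.
# Service names and upstream URLs are illustrative placeholders.
ROUTES = {
    "/catalog": "http://catalog-service.internal:8080",
    "/account": "http://account-service.internal:8080",
    "/cart":    "http://cart-service.internal:8080",
}

def resolve_upstream(request_path):
    """Map an inbound gateway path to the backing microservice URL."""
    for prefix, upstream in ROUTES.items():
        if request_path == prefix or request_path.startswith(prefix + "/"):
            # Strip the gateway prefix before forwarding downstream.
            return upstream + request_path[len(prefix):]
    return None  # no contract exposed for this path

print(resolve_upstream("/catalog/items/42"))
# -> http://catalog-service.internal:8080/items/42
```

The front end only ever sees the single set of gateway paths; which service answers each prefix can change without breaking the contract.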
And the idea is that not only do you need to split off functionality from the application, but you have to extract these tables from that database and migrate them to new databases. And so over time, it becomes very difficult to split up a complex domain, because you have all these foreign key relationships running across tables. And not only that: on the back end, you also have a data warehouse. So you have ETLs running from third-party systems into this large shared database. And so over time, it becomes very, very painful to go to microservices by splitting the monolith. So there is another strategy, called strangling the monolith. And this was first proposed by Martin Fowler. Back in around 2002, he went to Australia on vacation, and he saw this plant here. This is called a strangler vine. What the strangler vine does is seed itself in the upper branches of a fig tree. And it works its way down the trunk of the tree, all the way to the root system, extracting the resources that the tree is producing. And the benefit it gets is that it doesn't have to grow up from the forest floor. It works its way down from the top of the tree. So what Martin said is that you can take this philosophy: gradually create a new system around the edges of the old, letting it grow slowly over several years until the old system is strangled. And what you really need to do this is called an indirection layer. Now, not all architectures are monoliths. So here's an example of an SOA that's migrating to microservices. And here we have an ESB, which is usually evil. But in this case, it's going to help us, because it gives us that indirection layer, that place where we can seed that strangler vine and start to gain control of extracting data from this large shared database. And so with this pattern, what you can do is create an edge between your legacy system and your new microservices.
And that legacy edge can do useful things, like adapt formats. So where you have legacy formats, with your microservices you'll be using modern formats, and this legacy edge adapter can be an API gateway that translates between them. But the goal here is to migrate data away from the large shared database using something called a cache pattern, or gateway cache pattern. And what you do is, just like a cache, you reach into that back end if you don't have a record in your microservices database. So let's say I'm trying to get a customer record, and it's not in my microservice database. What I'll do is look at that legacy system. I'll take control over a single service; maybe it's the customer service. I'll request that object from there, and I'll save it to my microservices database. And then on the subsequent request, I'll go there instead of going to the legacy back end. And so you can use an API gateway to do this. What this allows you to do is move all your new feature development to your microservices. You don't have to worry about splitting that monolith over time, because that's a lot of undifferentiated heavy lifting. What you can do instead is start to build your new microservices and use this legacy edge to reach into that legacy back end and begin to migrate your data away into your microservice layer. And eventually, you'll be able to reason about the cost of shutting down that legacy system. And so you can use something like Pivotal Cloud Foundry to do this. So here's an example of PCF. Now, everything in Cloud Foundry really exists to reduce the level of undifferentiated heavy lifting that you're doing in your application development. And so you can use tools from the Cloud Foundry marketplace to do that, such as Apigee Edge, which Prithpal is going to talk about now. Awesome. So Kenny spent some time taking a look at different strategies for moving from your monolith to microservices.
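The gateway cache pattern Kenny described can be sketched in miniature like this, assuming a plain dict stands in for the microservice's own database and a stub function stands in for the legacy lookup (both names are hypothetical, not part of any real system):

```python
# Sketch of the gateway cache pattern: serve from the microservice store,
# fall back to the legacy backend on a miss, and persist the migrated record.
microservice_db = {}  # stands in for the new microservice's own database

def fetch_from_legacy(customer_id):
    # Placeholder for a call into the legacy system behind the edge.
    return {"id": customer_id, "source": "legacy"}

def get_customer(customer_id):
    record = microservice_db.get(customer_id)
    if record is None:
        # Miss: reach into the legacy backend, then migrate the record
        # so subsequent requests never touch the legacy system again.
        record = fetch_from_legacy(customer_id)
        microservice_db[customer_id] = record
    return record

first = get_customer("c-100")   # pulled from legacy, then stored
second = get_customer("c-100")  # now served from the microservice database
```

Over time the miss rate falls toward zero, which is exactly the signal that the legacy system is being strangled and can eventually be shut down.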
So we feel API management has a pretty important role. Let's examine how that actually works. So one of the first things you do as part of app modernization is start to build out microservices. And that's great. But a microservice is an architectural style; it's an approach. You start composing and building out business logic within the microservice, and it's well contained. So where does an API really come in? Well, the API becomes what Kenny alluded to as a contract. That becomes the way your consumers access it, whether those consumers happen to be across different teams within your enterprise, or the microservice now starts to enable a capability outside the enterprise boundary. That becomes the contract. That becomes the mechanism that app developers, who are building apps against those microservices, use to communicate with the microservice. In some ways, we feel APIs and microservices are very complementary. What do we really mean by that? Well, APIs existed even before app modernization became a commonplace strategy, which means, yes, you may have services which are legacy-oriented, or you may use something like SOA or a similar layer; you are still accessing them through different APIs. As you now start to move into the microservices world with app modernization, especially leveraging platforms like Cloud Foundry, APIs and microservices become complementary in that you still access them through standards-based HTTP REST endpoints. So from that perspective, that becomes your outside-in view into how to access them. One of the other benefits of leveraging APIs is that they shield the consumers from any kind of microservices complexity. What do I really mean by that? Well, the reality is, for some period of time you're going to have the existing enterprise legacy systems, maybe some middleware, but you're slowly moving over to microservices.
During that transition, it becomes important to continue business as usual, which means, if you're exposing those capabilities using APIs, then as you transition to microservices you are still able to offer the same exact level of service, yet you're refactoring into a more modern architecture. Over time, you may change stacks, or different teams may come in. So APIs become this layer of insulation where, on your app modernization journey, you're able to pace yourself. As you move into microservices, APIs shield that kind of complexity. The other part is, as an API contract, you specify a specific version that customers or consumers of the APIs are accessing to hit some capability. At the same time, you're able to move forward with newer versions of your microservices, and when the time is right, make that switch for new consumers to start using the newer versions. So what that means is you're able to move at your own pace and still hide the complexity. Just like microservices, you go through different phases in the lifecycle. The moment you expose that as an API, every API has a lifecycle. This is important. Why? Because you want to ensure that you have a core set of capabilities to tackle all the phases of the API lifecycle. It obviously starts with design, where Swagger, or OpenAPI as it's called now, is a very common standard which many API developers are using to describe their APIs. But as soon as you start to compose these microservices as APIs, you need to ensure they have the right level of security, which could mean making sure that everything is protected using OAuth, as an example, or ensuring that we have the right set of traffic management policies. As an example, you know your cloud native platform can scale infinitely at some level, but you still want to be able to provide some common-sense quotas on how much an API can be used.
Maybe that's tied to the consumer and the kind of contract we have with that consumer. Maybe it's a gold-tier partner versus a platinum-tier partner. So you have some traffic management needs which go beyond just technical traffic management. Once you start securing your APIs, then you want to ensure that the APIs are published. That's usually done through a developer portal. So as you look through the API lifecycle, there are very important aspects of each and every element of the lifecycle which need to be solved for. And the Apigee API platform is a full API lifecycle platform, which gives you the ability to securely expose the APIs and offer API packages which can be consumed by the app developers on the developer portal. Analytics gives you end-to-end visibility. What that really means is, as soon as you have the Apigee API platform, with the API gateway, the core component of the platform, in front of your microservices, you have instant visibility into microservice and API usage. So instead of going to a third-party system to figure out how many APIs are being used by which partners, and what kind of apps are being powered by these APIs, you get instant visibility into not only the operational aspects of those APIs, but even the business metrics. Customers can use some of the self-service tools which are built into the analytics capability to build reports for the business. Let's say you are working with an order API, as an example, and you want to very quickly surface a report: give me a breakdown of all orders in this region over the last month and a half with an order amount greater than $1,000. That's something someone can do by going into the tool and using a custom report, building a report from the order API very, very quickly. That's the power of the analytics platform.
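A toy version of that kind of custom report query might look like the following. The record fields and values are invented for illustration and are not Apigee's analytics schema:

```python
# Toy stand-in for an analytics custom report: filter order records by
# region, time window, and minimum amount. Field names are invented.
from datetime import date

orders = [
    {"region": "west", "amount": 1500, "date": date(2017, 5, 20)},
    {"region": "west", "amount": 800,  "date": date(2017, 5, 22)},
    {"region": "east", "amount": 2500, "date": date(2017, 5, 25)},
]

def order_report(records, region, min_amount, since):
    """Breakdown of orders in one region since a date, above a threshold."""
    return [r for r in records
            if r["region"] == region
            and r["amount"] > min_amount
            and r["date"] >= since]

report = order_report(orders, "west", 1000, date(2017, 5, 1))
# only the first record matches: west region, amount over $1,000
```

The point is that the gateway already sees every API call, so this kind of slicing needs no extra instrumentation in the microservices themselves.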
As you get into higher levels of maturity with your API program, you run into situations where you may want to monetize your APIs. We have many different examples of customers who are leveraging the API platform to monetize their APIs: Walgreens, First Data, MapQuest, and many others who are leveraging those capabilities. So the Apigee API platform is delivered in three different flavors. You can use it as software as a service, completely in the cloud; or you can use it completely on premises; or use some pieces in the cloud and some on premises. So Apigee provides the API layer for microservices. And we essentially have two different form factors. One is the Enterprise API Gateway, which is traditionally used in a DMZ layer in front of Cloud Foundry or your legacy stack. And then you also have a smaller runtime, what we call the Apigee Edge Microgateway. This is something which can be deployed close to your target machine, which is where your microservice is running. And it brings all the "-ilities", security, traffic management, and analytics, to those microservices. So from a relationship perspective, Apigee and Pivotal have been working together for the last couple of years. We've been partners in a few different areas, with prioritized integrations that we have rolled out over the last 18 months. One such offering is the Apigee Edge Service Broker for Pivotal Cloud Foundry. It's available from Pivotal; I think we are at version 2.1.1 today. This offers two different styles of integration patterns. The first one is through route services. So using route services, you configure the tile and you create a service instance. We have two different plans supported, one for the Enterprise gateway and one for the Microgateway. As a Cloud Foundry app developer, you build your app and you do a cf push. Once the app is up and running, you can use this route services integration.
If you have route services turned on, you can just issue a bind-route-service command, passing in some key information about the Apigee org. And when you do that, we automatically generate an API proxy within Apigee Edge, and the routing table within Cloud Foundry gets updated. What that really means is, the moment you push the app and you run bind-route-service, you actually have an API proxy deployed for you in Apigee Edge. Any time you now hit that API endpoint, which is nothing but your microservice endpoint, the call gets intercepted by route services. It pushes that to Apigee Edge, where you can use all the different out-of-the-box policies we have. So we have over 30 different out-of-the-box policies to do security, transformation, traffic management, mediation, et cetera. It hits that layer, you handle all the API management concerns out there, and then the request gets routed back to the app. So by leveraging the route services integration, you're weaving in API management automatically for all your Cloud Foundry apps. Then the next one is the Cloud Foundry decorator buildpack. This is a deployment style. We are planning on releasing a generally available edition very, very soon. This leverages the meta-buildpack capability within Cloud Foundry. With this pattern, what you can do is take the Apigee Edge Microgateway, which has a very, very small footprint, and it can reside, co-resident, inside the CF app container. This is a great way for you to distribute the API runtime, and as you scale your Cloud Foundry apps, it scales with them. So this is a great separation of concerns. You're building a microservice. You push it out there. You configure certain policies. And the Apigee Edge Microgateway can protect, secure, and do everything that you need for that microservice, at the same time pushing analytics into a common plane which can be accessed by everyone in the organization.
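The interception described above relies on the Cloud Foundry router forwarding each request to the bound route service with the original destination in an X-CF-Forwarded-Url header; the route service applies its policies and then proxies onward to that URL. Here is a bare sketch of that hand-off, where the policy check is a stand-in and not a real Apigee policy:

```python
# Sketch of how a Cloud Foundry route service sits in the request path.
# The router adds X-CF-Forwarded-Url; the route service applies policies
# and then forwards to that URL. The policy below is a toy stand-in.
def apply_policies(headers):
    # Stand-in for gateway concerns: security, quota, transformation...
    return "Authorization" in headers

def handle(headers):
    forward_to = headers.get("X-CF-Forwarded-Url")
    if forward_to is None:
        return (400, None)  # request did not come through the router
    if not apply_policies(headers):
        return (401, None)  # rejected before ever reaching the app
    # A real route service would proxy the request on to forward_to here.
    return (200, forward_to)

status, url = handle({"X-CF-Forwarded-Url": "https://app.example.com/orders",
                      "Authorization": "Bearer t0ken"})
```

Because the enforcement happens in the routing path, the app itself needs no code changes to pick up these policies.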
The second offering that we have is a little bit more related to BOSH. So we have an Apigee Edge installer for Pivotal Cloud Foundry. You can leverage this and install Apigee Edge right next to your Pivotal Cloud Foundry foundation; in the same way that BOSH manages Pivotal Cloud Foundry, you can use BOSH to manage Apigee Edge. So all the benefits of management, maintenance, self-healing, et cetera, now get passed on to Apigee Edge. This is a pattern which is used by a lot of our on-premises customers, and it continues to get more and more traction. So: use API management as a transition to microservices. I think this is a very, very important take-home slide. As you saw, Kenny walked you through the different strategies and patterns that you can use to deal with microservices and enable certain key capabilities. The reality is, business as usual needs to continue on. And for many enterprises, this seems to be a very, very common pattern: they leverage Apigee Edge in front for any horizontal API management concerns, which means if I want to do general threat protection, general security, and traffic management, whether the request is going to Pivotal Cloud Foundry or to my legacy app, I can use Apigee Edge in front. This becomes your layer of enforcement. This also becomes the layer which caters to all the contracts. One of the other things which Kenny pointed out was the gateway cache pattern. This is another classic use case, where you can just configure caching out here. So data from a legacy back end which is needed by microservices can very quickly be cached at the gateway layer. Another big advantage of this approach is that many customers are using it to justify decisions on how much they want to move over to Cloud Foundry. So let me give you an example of that.
By using the analytics provided by the platform, they're actually able to see in real time what kind of API traffic is served from the legacy platform versus what kind of API traffic is being served from the new Pivotal Cloud Foundry deployments. So they're using some of those data points to make decisions on how much traffic needs to move over to the new platform, and thereby leverage some of the capabilities provided by the platform. So these kinds of integrations that we have jointly built, and continue to iterate on, have benefits for both sides of the house. If you look at the developers, they are able to use pre-built traffic management, security policies, and analytics, and apply them to their microservices, irrespective of how many of them they deploy. Once you get to a level of maturity with your API program, you're able to leverage some of the monetization capabilities of the platform to start charging for APIs and monitoring them. And even if you don't have a public API program, in many cases we have enterprises with a shared services team which offers the platform to the individual lines of business; you can use the monetization capabilities to provide metering, chargebacks, and all those capabilities. It becomes super easy. And then, last but not least, provide a catalog of APIs for developers to test services and make sure the APIs are working the way they should. For the operators, especially those leveraging the BOSH installer pieces, they're able to apply a common set of security practices as a horizontal concern. They're able to scale these kinds of features out. And more importantly, they get the same administration look and feel for Apigee Edge as they have for managing Cloud Foundry. In closing, Kenny, if you want to come up, let's just wrap up the key takeaways. So just to wrap up: the first point I made about architecture, going from monolith to microservices, is that not all architectures are monoliths.
If you do have a monolith with a simplistic domain, it makes a lot of sense to split it up. But if you're a large company and you have services all over the place, maybe 300-plus services, some of which you don't even know what they do anymore, the strangler pattern is a very valuable pattern to use. Over time, move that data away from the legacy system; using the API gateway cache pattern, you can move it out of that system and put it into your microservices. Yeah, so the last couple of points real quick, right? APIs and microservices are super complementary. And as you start to modernize your apps, remember, the way to access your microservices is through a well-established contract, which is nothing but an API. And as you have seen in some of the examples we have covered, many customers are doing that today. So use API management as a strategy. As you modernize your apps, as you transition to microservices, it's going to make it easier to realize the benefits of app modernization. So thanks for attending the session. And we'll be around for a few questions. I think we have about four or five minutes. If you do want to ask a question now in front of everyone, there's the microphone up here. I was told that they're not running it anymore, so we'll be kind and maybe run it. Are there any questions? OK, wonderful. So we'll be taking questions up here if you are interested in any of the content we just talked about. So thank you very much. Have a great rest of your conference.