Hi everybody. We're going to get started here in just a minute; I'm going to give some time for everybody to join. If you have any comments, make sure to leave them in the chat, and we'll try to save some time at the end of our presentation to address them. I hope everybody's enjoying all the sessions today. Hang tight, give it another 30 seconds or so and we will get started. All right everybody, thanks for joining today. We have a great presentation coming up. The title is Locationless API Management with Red Hat 3scale and Red Hat Service Interconnect. I've got Bomzy here, who I'm going to turn it over to. Go ahead, Bomzy. Thanks, Greg. Hi everyone. My name is Vamshi Ravla, Principal Technical Marketing Manager here at Red Hat. I focus on technologies related to API management, app connectivity, app foundations, application services, everything related to app development at Red Hat. I have been with Red Hat for the past seven, almost eight years now, and it's been a fantastic ride, and I'm excited to present the concept of locationless API management to all of you today and hopefully have some good discussions towards the end of our session. Just a couple of checks: if at any point something is not visible or not audible, feel free to call it out in the chat, or Greg will come off mute and let me know. So let us know if there is some technical glitch and we'll sort it out in the middle of the session. Okay, with that said, let's jump into the actual content of the day. What we'll discuss today is, first, an introduction to API management.
Some of you, or the majority of you, have heard about API management, but for the benefit of others who haven't heard about it, or who are a little confused about the topic, we'll do a quick intro. Then we'll cover some challenges with existing solutions, or restrictions that current vendors pose with respect to API management; an introduction to the concept of locationless API management, the need for it, and how Red Hat addresses it; the demo use cases; and so on. Once we get into the middle of the session, you will actually see a live demo (pray to the demo gods that everything goes well) of how we achieve what we call locationless API management. So with that said, let's jump into the session. We know that APIs are like a nervous system for all businesses today. Every company that uses technology today uses APIs in some form or other, and APIs have become super critical to every business, or soon will be: businesses that don't do APIs today will at some point start building technology and adopting or building APIs, wanting to share them, and so on. So APIs are not just a technical construct but are fundamental to every business out there, and very important from a business point of view. Customers and partners depend on them: customers use the APIs of your technology or product for different functionality, and partners use your APIs to deliver value-added features or functions. APIs essentially help your product become a platform through the value-added services that partners provide, and they also give flexibility to your customers to add their own things on top of what you actually offer when you expose your APIs to them.
And like I mentioned, APIs are really fundamental for a company to become agile and competitive. They are the foundation for any product or technology to become an ecosystem or a platform, and a lot of web and mobile apps depend on APIs; almost all the products we interact with today are interacting with some kind of API. I just wanted to talk through this slide to set the stage for how important APIs are. But when you think about APIs, it's not just about "I've created an API." The slide says HTTP REST endpoints, but it could be HTTP, gRPC, anything. For any of your APIs, it's not just "I've created the API, I've exposed it, my job is done." Creating and offering the API is just the start. You need to think about a lot of things, such as security and authentication: how do I secure my API, what type of authentication do I use, is it key-based authentication, is it OpenID Connect-based authentication? Versioning: how do I version my API, and what is the SLA behind different APIs? Documentation: if somebody wants to use my API, how will they learn about it without reaching out to the engineering team directly? Documentation is very important. A developer portal: if you are exposing your APIs externally, or even internally to other teams, there should be a self-service portal where people can sign up for your APIs, look at the documentation, look at the usage of the APIs, and so on. Then there are more things such as reliability policies, header modification, authorization using JWT tokens, rerouting, and so on; all such policies are also something you need to think about. Testing; metering and billing, if you want to monetize your APIs; alerts based on usage or downtimes of APIs; the whole lifecycle management; access control. All these things are something you need to think about when you say "I need to create an API and expose it to my customers and partners"; it's just not that simple. And that is where API management really shines: API management covers the majority of these concerns for you in a single offering, and that's why it's so important to have an API management tool today, especially if you are planning to build out a very strong API program, because it covers a lot of the aspects this slide shows under the iceberg. Exposing the API is just the tip of the iceberg, and a lot of the aspects under it are covered by API management. Again, just to give a heads-up, this session is not going to be a deep dive into API management as such, because I've assumed people who've come to this session know or have heard about API management, or are using it; rather, we'll drill down into a specific aspect of API management. So, API management at a 10,000-foot view: you typically have a data plane, which is your API gateway and API backend together, and then you have a control plane, which is your API manager, which takes care of all the documentation, portal, monetization, analytics, and so on. Typically most API management platforms are divided into these two aspects, the control plane and the data plane, and today's session will focus a little bit more on the data plane, and specifically the connectivity between the API backend and your API
gateway and your API management platform: how do we achieve this connectivity, and what are the challenges or restrictions that current vendors impose? (Sorry, I just have to close my chat because it keeps pinging; sorry about that.) So: the challenges that we have with current solutions, and how locationless API management tries to solve them. Long story short, this is not going to be a deep dive into API management, but more about the connectivity aspect and how restrictive some of the solutions are. If you look at the current approach, some of the vendors today will ask you to deploy the gateway in the same environment as the APIs, so that the APIs are easily discovered by the API gateway. The reasoning is that the only public exposure your APIs need is through the API gateway: you do not want to expose your API backend publicly first, then add it to the gateway, and then expose it through the gateway. So a common pattern is people deploying the gateway in the same environment as the APIs, so the gateway can access them locally within that environment, they don't need to create any public URLs for the APIs, and the only external access point is through the gateway. That's very restricting in a lot of ways, because if you want to deploy your API backend somewhere other than where the gateway is, or if your API gateway is not compatible with certain environments where you want to build your API, it restricts you. The other thing is that some of the vendors today force you to build APIs using their proprietary vendor runtime. They say, "You have to use our platform to build your APIs so that you can manage them using our API management platform," and that is again super restricting today, because you don't get the choice to build APIs in the language of your choice and the environment of your choice; you're seriously restricted by the vendor who says, "Use our runtime or our specific scripting languages, otherwise our API management platform will not be able to manage your APIs." And the third challenge some vendors pose is: in order for your API to be discovered or managed by their API gateway and platform, you need to publish your APIs to an exchange. That exchange could be a public exchange or a private exchange managed by the vendor, but it's a mandatory thing that some vendors push on you: publish the API to a certain vendor exchange for the APIs to be managed by the gateways. Again, that is super restrictive. You don't want to maintain another piece of architecture or publish your APIs to another exchange, and you are restricted by the conditions that the vendor imposes on you. You want a lightweight environment, not an extra piece of infrastructure such as a vendor exchange; and if it's a public exchange, you don't want to publicly expose your APIs either. So that's another challenge. And finally, if you really want the flexibility, say you're deploying your API backends in different environments and you want your API gateway to discover those backends as if they're on the local network, you have to set up complex VPNs, firewalls, and VPCs between the APIs and the API gateways. That is a workable solution, but very time-consuming and very complex. The development teams are restricted by the velocity of the networking teams, and it really limits the speed at which the API teams can progress, because they have to reach out to the networking teams: "Can you set up a VPN between my API gateway and my API backend?" Then you have to give a business justification, and so on, and VPNs, VPCs, and firewalls are not that easy to configure. So these are the challenges, the restrictions, that current existing solutions pose in terms of giving you the flexibility to deploy your APIs anywhere and have them managed by the API gateway. And that's where the concept of locationless API management comes in. So what is locationless API management? Let's go to the definition: "Locationless API management epitomizes a paradigmatic evolution, endowing organizations with the liberty to orchestrate API deployments devoid of geographical constraints. This transformative approach seamlessly accommodates..." and so on; I'll give you ten seconds to go through it. After reading this, "the imperatives of operational flexibility within the sophisticated tapestry of modern API governance," you'll be thinking, what in the world is this guy talking about, and why is this definition so complex? This is actually a made-up definition; I wanted to make it as complex as possible just for the heck of it. It's not the actual definition. So what in the world is locationless API management, and why should it be very simple and not as complicated as the definition we see here? Locationless API management means you should be able to manage your APIs distributed across multiple clouds and clusters, without any restrictions on where they are. For example, you have APIs deployed on AWS, APIs deployed on Red Hat OpenShift on GCP, on Red Hat Enterprise Linux VMs, in demilitarized zones, on mainframes, on legacy systems: wherever your APIs might be residing, the API management platform, the API managers and the gateways, should be able to
discover and access your APIs without any trouble, and you should have the flexibility to deploy your APIs without compromising on the functionality of API management. That is a key aspect of locationless API management. The other aspect is to reduce the complexity and increase the visibility of your APIs. As I mentioned, one of the complexities of public-facing API management comes from the challenges we've seen with vendors: using the proprietary runtimes, using vendor exchanges, building complicated VPNs and firewalls, or sometimes exposing the API using a public URL externally, just praying that nobody discovers it, and adding that URL to your API management platform. There's a lot of complexity involved in public-facing API management, and locationless API management seeks to reduce that, as you'll see. The other aspect is to make discovery of your APIs easy and create a single pane of visibility for your APIs scattered across different environments, through your API management platform. So those are the key aspects of locationless API management: the flexibility to deploy APIs anywhere, and at the same time increased discoverability and reduced complexity for your public-facing APIs. Now that we have a high-level understanding of what locationless API management seeks to achieve, what is the real need for it? There are a lot of things you could talk about, but I think these are the key ones. One is compliance. For example, say you are in a country where the government regulates that you deploy your API only close to where the data sits, and the API should not be deployed anywhere except that region, but your API management platform is in some other region. That creates a challenge, and in that case the need for locationless API management is super important: you have your API management platform wherever it is, you have your APIs and databases close to each other in the regions approved by the regulatory body, and then you create that connectivity using what we call locationless API management. Data residency regulations such as GDPR are another example, and one of the primary needs for it. Another thing is security. Security is common to API management in general, but it's even more important for locationless API management, because your APIs are dispersed across diverse environments; in large organizations, different teams like to deploy APIs in different environments, and the communication between your APIs and the API management platform, and the transfer of data between them, should protect all the sensitive data. So when you're talking about dispersed APIs, the security aspect is even more important. And finally, cost. For example, there may be a situation where you cannot move your data to a cloud or platform where your API management resides, simply because of the costs involved; or you may want to move your APIs to a cheaper cloud vendor where you have certain discounts, but you're using an API gateway from some other vendor that isn't supported on that cloud. What would you do? All these things call for a concept like locationless API management, which gives you the flexibility to deploy and helps you achieve compliance and security. So how do we go about locationless API management, or rather, how does Red Hat try
to achieve this? This is where, hopefully, the fun part and the interesting demo come into the picture. But before I actually go into the demo, I'll talk about the two products from Red Hat that we use in it, so that you understand what we are using and why, and then we can dive into the details. The first component is Red Hat 3scale API Management, a complete API management platform which, as I showed earlier, has two key aspects: the data plane and the control plane. If you can see my mouse pointer here (let me get the laser), the API gateway is the data plane and the API manager is the control plane. The API gateway is mostly concerned with your API consumer apps making requests. It authenticates them, say using an API key, or OpenID Connect single sign-on using tools such as Red Hat Single Sign-On (Keycloak), and once the request is authorized, it goes to the API backend and returns data to your API consumers or developer apps. Because the request and response travel through the gateway, we call it the data plane. The control plane is the API manager itself, which covers a lot of aspects. The admin portal is where, as an API provider, you can look at the analytics and billing of your API and manage the keys of the developers who are using it; say a developer has left an organization, is no longer using your API, or their plan or subscription has expired, you can delete the keys there. There's also CMS management for the developer portal, and a dashboard giving a view of all the APIs you're managing. The developer portal, again part of the API manager, is where you can provide documentation, a self-service sign-up mechanism for your API consumers, and ActiveDocs, where they can look at the documentation and try samples of your API. The developer portal helps you expose your APIs outside your team or outside your organization, and creates an option for self sign-up for API consumers, who can sign up for the APIs, get the keys, look at the analytics of their API requests, chat with the API providers if they have concerns, and so on. Because this is all related to controlling the aspects of API management, we call it the control plane. So: the API gateway is the data plane, and the API manager, which contains the admin portal and developer portal, is the control plane. That's one component we'll be using in the demo today. The other component is Red Hat Service Interconnect. Service Interconnect is again a very important piece, and actually what enables locationless API management. It is a layer 7 interconnectivity tool that Red Hat provides. Basically, if you want to connect services that are deployed across two different environments without creating VPNs and without using IP addresses, Service Interconnect is your answer, because it uses layer 7 addressing: instead of routing IP packets between network endpoints, it uses service names, application addresses, to route messages between the routers it installs in those environments. The interaction between these routers is mutual-TLS encrypted, so it is super secure, and access is granular down to individual services and endpoints, which is where it really shines, because you want to connect two services, not, say, two entire Kubernetes clusters or a VM and a cluster; you only want to connect the services within those environments, and that is what Red Hat Service Interconnect helps you achieve with layer 7 addressing, going granular down to individual service endpoints. And there is no implicit trust granted based on physical or network location. What I mean by that is: even if two things are in the same network, there is no implicit trust. With Red Hat Service Interconnect you have to explicitly give permission for one service to access another, by exchanging certificates and establishing the connection. It is agnostic of environments and IP versions, and that's why it enables the portability of your applications. So Red Hat Service Interconnect and 3scale are what we'll be seeing in our demo today. Now let's jump into the demo part. I'll not be looking at the camera for some time because I'll be looking at my screen where I'll be doing the demo, so again, if there is something you can't see or can't hear, just ping me and I'll fix it. All right, so here's the current situation. We have three different environments, where we'll deploy a Quarkus API and a Node.js API. The Quarkus API will be on OpenShift on Azure in this demo, and the Node.js API will be on a RHEL virtual machine, and I've deployed 3scale API Management on OpenShift on AWS. All these environments are not connected. Now, what we'll do is this: the APIs that you see here, the Quarkus API and the Node.js API deployed in different environments, 3scale will be able to discover them and manage them without any public exposure for these APIs, using local OpenShift service addresses. Let's see how to achieve that. Before that, I'd like to show something. If you recollect what
we've spoken about in the challenges section, one pattern is the simple solution of your API gateway and your APIs being deployed in the same environment. With 3scale, if the API is deployed in the same cluster as 3scale, 3scale automatically discovers the API. For example, I have a payment processor API here that is deployed on the same cluster as 3scale; as you can see, I've deployed 3scale in this cluster, and I've deployed the payment processor in the same cluster. And as you see, 3scale, with certain permissions, is able to discover that API automatically, because it's deployed on the same OpenShift platform. Obviously, we've given explicit permissions for 3scale to discover it. But that's not what we want, right? We want our APIs in different environments, not in the same cluster as 3scale, and in spite of that, with locationless API management, 3scale should be able to discover your APIs irrespective of where they are. Let's see how we would achieve that. What I'll try to do is copy and paste some of the commands, because with live demos a single spelling error can derail everything. First, the terminals: the green one, as you can see here, is my AWS cluster; blue is Azure, where I've installed OpenShift; and the orange terminal is for the RHEL VM, where we'll deploy our Node.js app. So first, let's deploy the Quarkus app on our Azure cluster. The Quarkus app will create an API which will show you a list of fruits; it's a little bit blurred, but it'll show a list of fruits. I've created simple APIs so that you can see the difference between the APIs deployed in different environments: the API deployed in the Azure environment will display a list of fruits and their descriptions. Let's also install the Node.js API on our RHEL machine; as you can see, the orange terminal is our RHEL machine. I'm deploying the Node.js app as a container using Podman here, but you can also deploy it natively on RHEL; it doesn't matter to Red Hat Service Interconnect whether it's a container or not when creating the connectivity. Let's give it a few seconds for the APIs to get deployed. There you go. Let me just open up another tab for our RHEL machine, because the server is running in this one and I want to run some other commands, so I'll SSH into the RHEL machine again; give me a second. There you go. I'm going to blow up the screen a little and add the color to it. So our APIs are running: we have our API running in the RHEL environment, and in OpenShift as well. Let me minimize this a bit so that it's visible; sorry about the minor glitch, but I hope everything's sorted out. All right, so we have our API running on RHEL and our API running in the Azure environment. Let me clear this up, and let's check that our APIs are running. Let me go to the Azure cluster; this is our Azure cluster, which is running our Quarkus API. It currently has a public route, which I'm going to delete. As you can see, for the Quarkus API here we have a list of routes.
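The Podman deployment step described above can be sketched roughly as follows. This is a hypothetical sketch, not the exact commands from the demo: the image name, tag, container name, and port are all placeholders.

```shell
# On the RHEL VM: run the Node.js books API as a container with Podman.
# Image reference and port mapping are hypothetical placeholders.
podman run -d --name books-api -p 8080:8080 quay.io/example/books-api:latest

# Sanity-check that the API answers locally before wiring up any connectivity.
curl http://localhost:8080/books
```

Running it natively (e.g. `node server.js` under systemd) would work just as well, since, as noted above, Service Interconnect does not care whether the workload is containerized.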
What I'm going to do now is go to the public route and delete it, so that our API is not exposed to the internet at all, because that was the premise. And there you see, we've removed the access: our API is currently only visible to services within this cluster and nothing outside. Let's also double-check that our Node.js API deployed on RHEL is working. Let's go ahead and do that, and as you can see, there's a list of books and authors; I've just given sample names here, book one, book two, book three, with IDs for the books. So as you can see, both our APIs are working. Now we'll create a layer 7 network between these environments and try to discover these APIs in 3scale. First, I'll initialize my Skupper router in the AWS environment and wait for it to run. Now we've initialized the Red Hat Service Interconnect router there; Skupper is the open source project name, and the command line uses it. Let's create a token that we'll use to establish the connection; give it a second while it creates the token. The token is written, and now let's initialize the router in the Azure environment. The skupper init command initializes routers in both environments, which will create the layer 7 connectivity between them. So, initializing the router in the Azure environment. And if you see here, the token that I created has the details about the connection; I just wanted you to have a look at it, though I can't go into detail about it. So let's create the link now using that token. Because I've logged in to both clusters from the same machine, the token is in the same terminal, but typically you'll want to transfer the token from one environment to the other.
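The cluster-to-cluster steps narrated here map to Skupper CLI commands roughly like the following sketch. The file path, deployment name, service name, and port are placeholders, and the discovery label and annotations follow the 3scale service discovery convention used later in the demo:

```shell
# On the AWS cluster (where 3scale runs): initialize a router and mint a token.
skupper init
skupper token create ~/aws-site.token

# On the Azure cluster: initialize a router and link back using that token.
skupper init
skupper link create ~/aws-site.token

# Still on Azure: explicitly expose ONLY the Quarkus service over the
# Skupper network; nothing else in the namespace is exposed by default.
skupper expose deployment/quarkus-fruits --port 8080

# Back on AWS: label and annotate the resulting virtual service so that
# 3scale's service discovery can pick it up.
oc label service quarkus-fruits discovery.3scale.net=true
oc annotate service quarkus-fruits \
  discovery.3scale.net/scheme=http \
  discovery.3scale.net/port=8080
```

In a real setup the token file is the secret that grants linking, so it should be transferred between sites over a secure channel and treated like a credential.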
The token is the most important part of your connectivity. So now I'm creating the link from the Azure cluster using that token, and it reports that the site is configured and linked. Then I'm going to expose the services. Like we discussed earlier, with Red Hat Service Interconnect you have to explicitly tell it to expose a service; otherwise, even if the connection is created, the service won't be exposed over the network. So now we've exposed the services. Let's also create the connection between the AWS cluster and our RHEL machine, because that's where our other API resides. Let's go and create that token first; we already initialized the router on AWS, so we don't need to do that again. Let's create the token, wait for it, the token is written, and let's check that the token was created. Good. Now let's initialize the router on our RHEL VM. Okay, like I mentioned, the demo gods are not with us, I guess. Let me just do a simple configuration here: export SKUPPER_PLATFORM=podman. Now it should work; we basically have to tell Skupper whether it is connecting to a Kubernetes cluster or running a Podman-based router. I've done that, and I'm just waiting for it to initialize. Cool, now that we have the router initialized, let's create a file for the token. Again, I'm transferring the token from the AWS cluster to the RHEL VM here: I'm copying the token, pasting it into the file, and saving it. Now let's create the link, and once the link is created, remember we have to explicitly tell Red Hat Service Interconnect to expose this over the network so that 3scale can discover it. I've run the skupper expose command, and then we have a corresponding virtual service created in AWS for 3scale to discover. So let me show you what we have done here through the pictures: we had three different environments, and we created layer 7 network connectivity across them, connecting the Quarkus API and the Node.js API to the OpenShift cluster running 3scale API Management. When you expose the services, it creates virtual services on the OpenShift cluster where 3scale is deployed. These are not the actual services but virtual services that route back to the actual APIs deployed on our RHEL VM and in our Azure cloud. Once you do that, if you label a service with the discovery label set to true, 3scale should be able to discover it, which we will do right now. So let's go ahead and label our services; I have a bunch of commands that will do that. First I'm going to label the Quarkus API. Labels and annotations are super useful, especially if you want to do this as part of your GitOps process, because if you want to expose an API, simply labeling it should do the job. With that, I think we are set. Let's go and check in 3scale whether it is able to discover our APIs, and as you see in the APIs project, you now have the payment processor API, which was already there on the same cluster, but you also have the Node.js API and the Quarkus API that are deployed elsewhere. Let's go ahead and create the Node.js API and see if it works, and let's create the Quarkus API and see if that works too. For the Node.js API we have here, if you look at the backend, the upstream URL is a local OpenShift address and not a public address. Let's go ahead and create an application plan, which is the plan under which your API consumers access your APIs, and let's create an API
The consumer will be allocated a key, and then we'll try to make a call to the API and see if it works. Create the application, go to our Node.js API's integration configuration, and let's see if we're able to call it through the gateway now. I think the endpoint is /books... there you go. So now we are able to access an API that is deployed outside the OpenShift cluster, through the gateway, without exposing it publicly and without creating complex VPNs, and our 3scale API Management platform was able to auto-discover it. The same applies to the Quarkus API; I won't repeat the whole thing, but it works the same way.

I was planning to show a load balancing use case, but I'm not sure about the time. Do we have enough time, Greg, to show one more use case? Yes? Okay, two minutes. So I'll go ahead and show one more use case and then close off with a couple of slides.

Another aspect of locationless API management with 3scale is that it's not just about discoverability of your APIs, like I showed here with Import from OpenShift, where you can auto-discover your APIs irrespective of where they are deployed. You can also get high availability for your APIs using the Service Interconnect network, reusing the same service address. We'll do this by creating a direct connection between our AWS cluster and the Azure cluster. Let's create a token here; I'm just going to run these commands quickly, and I'm going to install the Books API in my Azure cluster now. All right, there's some error happening here, but I think that's because the token is already created, so I can go ahead and expose the service. Now, if I kill my container on the RHEL VM, my API should still be working. That is the whole point of locationless API management.
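The token-and-link sequence for this high availability scenario might look like the following Skupper CLI commands (v1 syntax). The cluster roles and the `books-api` deployment name and port are assumptions for illustration, not the exact commands run in the demo.

```shell
# On the cluster where 3scale runs: issue a one-time connection token.
skupper token create aws-token.yaml

# On the Azure cluster: consume that token to link the two sites.
skupper link create aws-token.yaml

# Expose the redeployed Books API under the SAME service address it had
# on the RHEL VM, so the gateway keeps routing without any config change.
skupper expose deployment books-api --address books-api --port 8080
```

Reusing the same `--address` is what makes the failover transparent: the gateway's upstream URL never changes, and while both instances are up, traffic is balanced across them over the Service Interconnect network.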
And here it is: my API is still working. See, I've killed the container on the RHEL VM, deployed the API on Azure, and made no other changes; I just exposed it under the same service address, and everything kept working. The whole high availability and load balancing scenario worked as expected.

That's what I wanted to cover. From a security standpoint, I covered these aspects already: all the connections are secured by mutual TLS, and each network is compartmentalized. You expose only the services you want to; services in a namespace are not exposed over the network by default, so you avoid certain security pitfalls and the complexities arising from Layer 3 networking. And if you no longer need the network, all you have to do is run a skupper delete command and the whole network is torn down. That makes it so ephemeral that if you're deprecating an API, you can easily tear down its network as well.

As for use cases and applications of locationless API management: one is mergers and acquisitions. When Company A acquires Company B, they want to use a single API management platform, but the two companies have APIs deployed across different environments; that's where locationless API management is super useful. Another is cost reduction: if you want to optimize cost by strategically deploying APIs in regions with lower infrastructure costs, without the restrictions your API management vendor imposes on you, locationless API management is again very helpful. And if you have legacy APIs that are immovable, and you can't deploy your API management platform in the same environment as those legacy APIs, you can really benefit from locationless API management.

So that brings me to the end of the session. I think we have a few minutes for questions.
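Tearing the network down, as mentioned above, is a single command per site; `skupper link status` is shown here only as an optional sanity check before removal. Skupper v1 syntax is assumed.

```shell
# Optional: inspect the active links before tearing anything down.
skupper link status

# Remove this site's router, its links, and all exposed virtual services.
skupper delete
```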
In the meantime, here are a couple of learning resources. If you're interested in Red Hat Service Interconnect, you can scan the QR code and it will take you to some learning paths you can try out.

Thank you so much, Vamshi. I don't see any questions. If you want to leave that screen up for another 30 seconds or so, the audience can pull out their phones and scan the QR code; there are some great resources there. I would encourage you to take a minute to open a couple of browser tabs and refer back to them later, and if you have any questions, you can reach out to us. We will be sharing the recordings of these sessions in the next couple of weeks; there are a lot of sessions, so we've got to organize them, and we'll get a link out. As I said in the beginning, these will also be posted on our Developers YouTube channel in the next couple of weeks. So have a great rest of your day, and look forward to the next session. We're not done with the day, so hang tight, and we will be back shortly. Thanks, Greg. Thank you, everyone. Thank you, Vamshi.