We're very excited to have you here today. This workshop is about Red Hat OpenShift API Management, which is our API management service. I'm your presenter — my name is Jennifer Vargas, and I'm a Principal for Marketing at Red Hat. With me today is Jaya Paskaran, our Technical Marketing Manager, and she's going to walk us through an amazing guided workshop, all hands-on. So let's get started. To start, I want to do a quick product introduction so you know what the product is, talk a little bit about the workshop details, and share the links on the screen. I already shared the links with you on the comments tab, so go there, check it out, and copy the links — there are two of them. Red Hat OpenShift API Management is part of a larger initiative within Red Hat, which is Red Hat Cloud Services. Red Hat Cloud Services is a set of new cloud services that we are bringing to market, and it consists of three types of services: platform services, application services, and data services. All of them are natively integrated with each other, and the idea is to provide you with a unified platform to build cloud-native applications. OpenShift API Management is a key part of this, as it helps you manage your APIs, so that when you create API-driven applications you know where you're saving your configuration and where you're managing your policies in terms of who gets access to those APIs. As I just mentioned, Red Hat OpenShift API Management is a fully managed service for API management — sorry, that sounds like a tongue twister. It's delivered as an add-on to managed OpenShift. So if you have an OpenShift Dedicated cluster, or you happen to have Red Hat OpenShift Service on AWS — those are our two offerings on AWS — you can get API Management running on it today.
Actually, we do have a free tier. So if you're an existing OpenShift Dedicated customer, you could get this running in a few hours. The components are here on the left side of the screen. The first one is the API Manager, which allows you to define your policies and check on all your APIs. The second one is the API Gateway, which is the one that enforces those policies and enforces API governance. And finally, you have the single sign-on capability, which helps you secure your APIs. All of this is delivered on top of OpenShift, and we provide you with 24x7 premium support and a 99.95% SLA. Because the product is natively integrated with OpenShift, it provides a streamlined developer experience that's very easy to interact with. So it's a great platform for an open, modern approach to building API-first applications. I mentioned this already: we do have a free tier available, so keep an eye out. If you're an existing customer and you want to try this for longer periods of time, the free tier gives you 100,000 API calls per day, which allows you to run a small deployment, or even just a development environment, to keep playing with the solution. So what do we have for today? We want to show you how to use OpenShift API Management, and the way we're going to do that is using something called the OpenShift API Management sandbox. The guidelines for how to get to those pages are in the documents I just shared.
The goals for the session today: first, you're going to provision your dedicated sandbox environment, which gives you access to an OpenShift Dedicated cluster. Within that cluster, you'll be able to access the OpenShift API Management UI. With the OpenShift cluster, you're going to deploy a containerized application using the OpenShift console capabilities, and once that application is there, you're going to use API Management to secure and manage your API. Jaya is going to make sure you find all those quick, nice ways to understand how to use the product. So what do you need today? It's very simple: create a Red Hat account and have your credentials handy — this doesn't take very long — and you'll get access to the sandbox. There are a couple of sandboxes on our developers.redhat.com page; today we're putting all our attention on the OpenShift API Management sandbox. Go ahead, request that sandbox, and we will walk you through the step-by-step instructions. Links — very important. Here's the link; if you didn't get it in the chat, you can write this one down. It's very easy: red.ht/api-workshop. That gets you to the guide, and in the guide you have a little more information about API Management and a couple of useful links, and it takes you to the workshop instructions page. That page has all the instructions — very easy to follow, numbered — so you can actually do this by yourself. You could pay attention to us today and then come back tomorrow and do it from scratch. The guides are going to stay live, and the developer sandbox is available to you all the time. You get access to the sandbox for 30 days, so you can come back whenever you want to try it again.
And once that sandbox expires, you can just request another one, so you can have a lot of fun with API Management if that's your choice. So let's get started — I'm going to pass it to Jaya. At this point, I think we are good on the presentation side; let me stop sharing here. Hey, Jaya — good luck on your presentation. If you want me back, I can come back at any time, and we can talk more about the sandbox or any questions that our wonderful participants have. Thank you, Jennifer. Hello, everybody — thank you so much for joining us today. I'm just going to leave the link once again for you to access. What you should see is this page — it's a Google document — with all of those instructions. The first step is to create a sandbox. Now, what is a sandbox? It's like the sandbox children typically play in: an environment for you to play. This developer sandbox, specifically for OpenShift API Management, lets you manage and secure your APIs. Like Jennifer said, you can use this environment to try things out for 30 days, and then you can register for another one if you want to do this again. All right, I hope some of you have already created a user ID. If you have not, we will go ahead and do this. Please go down to the last step — it's "workshop, step by step" — and there's a link right over there. Please click on that, and it should take you to the Red Hat OpenShift API Management workshop. I hope you're all there; if somebody could say a plus one in the chat, then I'd know you're able to access this page. I'll pause for about 10 seconds. Okay, I'm hoping you're able to see this. This is just an introduction to what you will typically do in this workshop, and then click on "Setup the Developer Sandbox".
And in this case, please bear with me — let's do this together in case you haven't done it. Click on the Red Hat OpenShift API Management user activities page, which should open in a separate tab, and click on "Launch your Developer Sandbox for Red Hat OpenShift API Management". Like Jennifer said, this is definitely a tongue twister. If you have a user ID, please go ahead and log in. If you don't, click on "Register for a Red Hat account" and create a user. As you can see, I have done this multiple times, so I'll just log in — you can do the exact same thing if you don't have a username already. These are the regular terms and conditions; you're welcome to have a look at them, or just accept them, because they're the same terms and conditions you would have seen again and again across Red Hat. Click on "Create my account". You'll be taken to another page where we're asked to launch it, and we come back to the same page while API Management is being provisioned in the background. It may take a few seconds, and then your account should be ready. "Start using your developer sandbox" — go ahead and click on that. You'll be asked to provide a few more additional details. I hope you have your email address handy, because you will have to verify it. This step might ask you to provide your phone number, just to validate with an OTP. Rest assured, we are not going to save your phone number anywhere; it is purely to protect our environments from abuse such as mining. Because I'm using a Red Hat email address, I don't see that step, but most of you will be using your own email address, so you may have to go through it. With this, I've finished my setup. Once you've reached this step, you will receive an email asking you to verify your email address.
And once we do that, we'll be able to see the Red Hat OpenShift Dedicated page, which is basically the sandbox. Now, Red Hat OpenShift API Management is deployed as a tenant over here, and for OpenShift API Management to be able to access the services deployed on OpenShift Dedicated itself, we have to set up a network policy. Typically this would be automated, but at this instance you will have to create one network policy if it hasn't already been created. So go ahead and pick the dev project from the dropdown at the top, and you should see a page like this: the network policies page. Please ensure you have picked the dev project — this would be your username-dev. Like every other sandbox, you get two projects, two namespaces, automatically created for you: a dev and a stage. Please use the dev namespace. Now go ahead and click on "Create NetworkPolicy", and we will directly edit the YAML. Once you're on this page, click on "Edit YAML", and you'll see the section with the default content. Just very quickly showing the guide again: under Setup, under section one, scroll down a little and you'll see the YAML section that you have to copy. This is a temporary arrangement — it's being automated by the engineering team and should roll out very soon, but just to ensure we're able to progress with the workshop, this is a workaround at this point in time; the enhancement is getting coded as I speak. So copy this, go over to the YAML editor, and replace everything from `spec:` onward. The indentation should be fine; it should automatically be pre-aligned.
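For orientation, a NetworkPolicy of this general shape looks roughly like the following. This is a sketch only — the policy name, and especially the namespace selector label, are assumptions here; the exact YAML to paste comes from the workshop guide.

```yaml
# Illustrative sketch only -- copy the exact YAML from the workshop guide.
# It allows ingress into your dev namespace from the RHOAM (API Management)
# namespaces, selected here by a hypothetical namespace label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-rhoam        # hypothetical name
spec:
  podSelector: {}               # apply to every pod in this namespace
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              # hypothetical label -- the guide supplies the real selector
              integreatly.org/rhoam: "true"
  policyTypes:
    - Ingress
```

The empty `podSelector` is what makes the policy apply to all pods in the namespace, while the `namespaceSelector` restricts who may connect in.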
And click on Create — you have created the network policy. Basically, what this says is that we are allowing ingress access from the RHOAM namespaces. RHOAM is Red Hat OpenShift API Management — it's an acronym. So this network policy allows OpenShift API Management to talk to OpenShift Dedicated, which is the sandbox you see over here. Now, once this is set up, we can go ahead and create the tenant. Switch over to the developer perspective. If you're not familiar with OpenShift, there are two perspectives: an administrator perspective and a developer perspective. Switch to the developer perspective, click Add, and we'll add an existing container image — click on "Container images". What we're trying to do is deploy a pre-existing Quarkus image, which exposes the API that we will look at managing and securing. So choose the external registry option. All the instructions are here: the guide shows the image name, so copy the image name and paste it over here, and just for fun let's pick Quarkus as the runtime icon, because it's a Quarkus image. You don't have to change anything else here — click on Create. What this does is download the container image and deploy the application for you. It should take just a few seconds for the application to get deployed. When you see the dark blue ring over here, it means this particular image has been deployed, and by clicking on the external link icon you can open up and see the application — that's the Quarkus welcome page. Now, this Quarkus application has already been annotated to expose its OpenAPI spec.
Let us go ahead and see how that looks — we'll have to ensure we choose HTTP instead of HTTPS here — and you'll be able to see the OpenAPI spec (OpenAPI version 3). Okay, so we know the application has been deployed and the OpenAPI spec is working as it should. Let us go ahead and do the next step, which we call service discovery. Service discovery is a feature that helps you import services from an OpenShift cluster into Red Hat OpenShift API Management, and this is automatically possible when OpenShift API Management is in the same cluster as the service. In this exercise, we will look into that option. There are other resources and learning materials for deploying and managing APIs that are not in the same cluster, but in this case the OpenShift API Management tenant is going to be in the same cluster. Now, I have deployed the application, but I skipped a step here, my apologies: actually creating the tenant itself. So click on Search, and if you look through the resources, you'll see the API Management tenant. When I click on APIManagementTenant, it asks you to create it — go ahead and click the Create button right at the bottom, and we just wait a few seconds for the tenant to be completely deployed. While the tenant gets deployed and all the magic happens in the background, let us go ahead and annotate the service that we have. What do we mean by annotation? Like I said, service discovery is the feature that helps you import APIs easily from an OpenShift cluster into OpenShift API Management, and what we have to do is tell this service that it should be set up for auto-discovery. So click on Services — we are going to annotate the service.
Click on the rhoam-quarkus-openapi service, and here is where the annotation happens. Let us have a look at how this works. First, we have to say that the service should be discoverable — that's a label you add to the service. In addition, we also add a few other annotations: the API path, the description path, the port, and the scheme. So let us go ahead and annotate the service. First, copy over this label and add it to the service. All right, click on Save — the discovery label is set. We also have to add the additional annotations. The first annotation is the OpenAPI description path. The next one is the port — the port is 8080. The last one is the scheme. So you should have four annotations here, plus the label that we created. With this, we have set up the service discovery aspects of the Quarkus API, and you can now discover it from the Red Hat OpenShift API Management side. Let us go back and check. The easy way to check is to click on the project on the left-hand side, where you can see the API Management launcher link. We click on the OpenShift API Management launcher, and that opens up the tenant that we had installed. OpenShift API Management is based on 3scale API Management, and that is why you see that icon over here. Now, you will have to authenticate yourself using Red Hat single sign-on — this is important. Please click on the link right at the bottom: "Authenticate through Red Hat single sign-on". All of these instructions are over here with plenty of pictures, so you can either follow along in your own sandbox, or just listen at this point and try it out later.
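To recap the discovery setup, the label and four annotations on the Service look roughly like this. The keys follow the 3scale service-discovery convention; the path values shown are assumptions for this Quarkus app, so copy the exact values from the workshop guide.

```yaml
# Sketch of the discovery metadata on the Service object.
metadata:
  labels:
    discovery.3scale.net: "true"            # marks the service as discoverable
  annotations:
    discovery.3scale.net/scheme: "http"     # the scheme
    discovery.3scale.net/port: "8080"       # the port
    discovery.3scale.net/path: "/"          # API base path (assumed value)
    discovery.3scale.net/description-path: "/openapi"  # OpenAPI spec path (assumed value)
```

With these in place, the service shows up under "Import from OpenShift" when you create a product on the API Management side.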
Then you log in with DevSandbox, and this takes you to the dashboard of OpenShift API Management. Now, what we have to do is create a product. A set of APIs, a set of backends, can be brought together to form one single product. Let us go ahead and create the product — click on "Create Product". I'm hoping all the magic has already happened with service discovery, and we should be able to import it from OpenShift. Again, we have to authenticate to enable this option, so please click on that authenticate link and log in with DevSandbox again to authenticate to Red Hat OpenShift API Management. Go ahead and click on "Import from OpenShift", and you can see there's a single entry: the rhoam-quarkus-openapi product, within the dev namespace. It automatically discovered the APIs, and because we had just one service deployed, only one product appears over here. We click on "Create Product". Once you click on it, a bunch of things happen in the background. The product itself gets created — like I said, an API in OpenShift API Management is referred to as a product, and each product can have one or more backends. And what is a backend? A backend is typically a web service that OpenShift API Management will proxy requests to, based on certain rules that you define against each product. So the product has been created; let us have a look at it. It's got the name, the system name, and there is a backend which has already been created. Let us very quickly have a look at that backend: it's got a private base URL. Now, if I go back to where I had deployed the application on OpenShift and click on the icon, I can see the service.
Under the service's routing section, you can see the host name, and it's the same host name that you see on this page — the same service is being used as the private base URL. This works because you are in the same cluster; if you were in a different cluster, you would be using a different URL, something internet-facing — the route, perhaps — not the service URL. But in this case, like I said, it was auto-discovered, and so the backend got created. Now, let us go back to the products. From the products menu you can see a listing of all the available products. I click on the product we just created, and you should be able to see the ActiveDocs. And... I'm not able to see the ActiveDocs. Give me a second — I'm just going to double-check that my network policy is working as it should. This is basically just to show you that it's a real demo, not a recorded one, when we hit a few challenges like this. I'm going to create the network policy one more time and make sure the policy is correct, and then check that my API Management tenant is functional as it should be. Okay, and I'm just going to try to import this once more — I'm redoing the whole thing, hoping the new network policy kicks in. And oops, I have to go ahead and set up the service again. Think of this as a revision for all of us, to make sure we understand and remember everything we have to do. So I'm going to annotate the service once more — the service is labeled — and add the rest of the annotations.
Now, I should be able to discover it once again. Like I said, we have a new service with a different name this time, so you'll see that. Let us go over here and check if the ActiveDocs are appearing. Yes — I think I had made a mistake in the way I created the network policy earlier, my apologies. All right, so the ActiveDocs have been created. If you click on the ActiveDocs and open them up, this is the same spec that you had already seen. Let's give it a few seconds... and you can see the ActiveDocs, with an endpoint called fruits. And let me go back to the service: if I click on the URL of the application I had deployed, it shows you a JSON response with a winter fruit and a tropical fruit — an apple and a pineapple. So these are the various methods which are available in this particular API. Now, the next step — you may ask me, fine, you have imported it, what next? What we have to do now is secure this API. Click on Integration, then Configuration, and what you see here is the staging APIcast. So what is an APIcast? OpenShift API Management routes all API requests through proxies known as APIcast instances. An APIcast instance is an NGINX-based API gateway which helps you integrate your internal and external API services with OpenShift API Management, and it also helps you enforce your security policies and rules; we will now see how to set this up. If you look over here, we have a curl command as an example call for testing, but the user key in it is just a temporary placeholder. And if you look at the instructions right at the bottom of the page, it says you have to start by creating an application plan.
Now, you'll be hearing a lot of such terms: we spoke about product, we spoke about backend, then we spoke about APIcast. We also have another term, the application plan. In OpenShift API Management, an application plan is used to define how an API should be used — what usage rules and what limits should be applied to it. When a developer signs up for your API, they select a given plan. You may have different plans — a free tier, paid plans, gold or platinum or enterprise tiers, multiple tiers — and that is where you create different application plans; developers then sign up for a particular plan depending on their need. Click on "start by creating an application plan" and let us create a new one. We are under the same service, and I'm just going to give it a name over here and click on "Create application plan". The plan is created now, and once it's created, you have to publish it — until then, it is in a hidden state. So I have published the plan. Now that it's published, I have to allow a developer to access it, and that is called an application. You need to create an application which is associated with a developer. All the developers, or accounts, are under the Audience section. If you click on Audience, you'll see a list of accounts which are already there. You can create your own account, or, for demo purposes, we can use the developer account which is automatically pre-created for us. Click on the Developer account. If you remember, right on the front page there was one application which is already live, but we would like the developer to access the Quarkus API that we created.
So we'll create the new application, and as part of creating it, we have to pick the right product — the API we imported — and the plan that we created, and we give it a name and a description. I'm sure in real life we wouldn't be calling it this; we'd pick a better name, but let's just go with what we have here. Create the application. When you create an application, you can see the user key — and of course you can go ahead and regenerate the user key and so on. Now, this particular user will use this user key to access the API that we deployed. Let us go back to the product, and under Integration, go back to Configuration. If you remember, earlier the example call showed the user key as a USER_KEY placeholder, in caps. That's an example call: even if you have multiple users using the same API, you'd still see just one user key over here, because this is a sample, not the final thing. This is how a developer would typically access it — in this case, the key is passed as a query parameter. I'll show you later how you could change this so the key is passed as part of the header instead. Now, let me go ahead and try to access this without the user key. You can see that it says the authentication parameters are missing. Now let me use the user key — and you can see that the application is accessible. Let us call the fruits method, and you can see that this particular URL is accessible at this point. I'm going to hit refresh a few times; please go ahead and do that a few times too, so that we can have a look at the traffic pattern as well.
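The two calls just demonstrated look roughly like this from the command line. This is a sketch with a made-up APIcast host and key — the real values come from your own Configuration page and application.

```shell
# Hypothetical staging APIcast URL and user key -- substitute your own.
APICAST="https://api-1-stage.example.apicast.io"
KEY="0123456789abcdef"

# Without credentials: the gateway rejects the call
# (APIcast responds that authentication parameters are missing).
curl -i "$APICAST/fruits"

# With the user key passed as a query parameter: the call is proxied
# through to the Quarkus service and returns the fruits JSON.
curl -i "$APICAST/fruits?user_key=$KEY"
```

Note that the key identifies the application (and so the plan and its limits), not an individual end user.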
In the meantime, what I'd also like you to do: go back to the OpenShift sandbox where we had deployed the service, and you can see the route. This route that you see over here is also directly accessible — if I click on fruits and use HTTP instead of HTTPS, I'm still able to access it. So if somebody gets hold of this URL, they can still reach the service directly. So let's go ahead and delete this route: click on the route, and under Actions you'll see "Delete Route". I'm going ahead and deleting that route. When I go back to my topology, you'll see the first deployment I had created has got an open URL, but this one does not — it only has the service. So the only way to reach this API inside the cluster is through the service, the host name we saw already, which ends in svc.cluster.local. But as a developer, you can still reach it: I'm just going to refresh this page again to show you that you are still able to access it through the API. What I'm saying is that you take the APIcast URL here, copy it and use it, and you can access the API only through OpenShift API Management, with the user key. Okay, now that we've accessed that page a number of times, let's see what the analytics look like. Click on Analytics, then Traffic. This is the quick view of it; you can explore it further. It tells me that I have tried to access this multiple times — 17 hits. The "hits" metric in the traffic analytics is preset for you, so you'll be able to see the access counts. And when I had tried to access it without the user key, for example, it tells me that there were no integration errors.
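The route-deletion step above can also be done from the CLI — a sketch with hypothetical resource and project names:

```shell
# Hypothetical names -- use your own route name and username.
oc delete route rhoam-quarkus-openapi -n <username>-dev

# The pod is now reachable only through its internal service DNS name:
#   rhoam-quarkus-openapi.<username>-dev.svc.cluster.local
# i.e. only from inside the cluster -- which is exactly what makes the
# APIcast gateway the single public entry point to the API.
```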
But if I provide a wrong user key, for example, authentication fails, and it tells me that the user key was invalid. It also shows you the response codes that have been returned — 200 being the positive response code — as well as the daily average, and so on. Out of the box you have the hits metric available; you can set up other methods and metrics later. So that kind of brings us to the end of deploying your first API on OpenShift — with a little bit of excitement where it wasn't working. We were able to deploy the Quarkus service on OpenShift, import that Quarkus API into OpenShift API Management using service discovery, and then set up the application plan and the application so that the user key is generated and we can access the API in a secure fashion. So we were able to deploy it on the sandbox and manage and secure the API. Now, as part of OpenShift API Management, we also have single sign-on. It's not preset in the sandbox itself, but you can set it up in the sandbox and use SSO there. Before I show that to you quickly, I'd also like to show you certain settings that are available over here. At this point, this APIcast is managed for you, and this is what I was talking about: in the authentication settings, the application is identified and authenticated by a single string — the API key, or user key — passed as a parameter, as we saw. You could also pass it as an HTTP header. If you choose HTTP header — and, for example, you want to call the header "api-key" — you can just change a few of those settings. Let me go ahead and update the product.
Any settings change we make flags up a notice saying there has been a configuration change, and you have to promote the new version to staging. So in this case, I promote version 2, and now if you look at the example, the curl command is different: the key is no longer passed as user_key in the query string, it is now an api-key header. You can go ahead and run the curl command, and you'd see the same thing as earlier. All right, I'm just going to pause and see if there are any questions. Thanks, Jonathan — I think I saw your comment a little too late. Now let us go back to OpenShift Dedicated. You have the two projects I spoke about: the dev project, where we set up the tenant and deployed the Quarkus API, and the stage environment, where there are no services at this point. But like I said, you can use OpenShift to deploy single sign-on as a container image over here. To try it out, click on Developer Catalog and search for SSO. You can choose any one of the SSO templates over here if you want to try integrating SSO with OpenShift API Management. The instructions for that will be available to you very soon — we will link the new instructions back to the same instructions that you have here, and add additional links to further instructions as well. So if you're looking to use single sign-on as part of your API management, you can try this out too: just click on a template, instantiate it, and that will create an SSO instance for you. It takes a few minutes, so I won't wait for it to complete — that would be a workshop by itself. So that's what we have today. Are there any questions? Okay — yes, JWT is used when integrating with SSO, and OpenID Connect auth is also available.
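After promoting the change, the same call with the key moved into a header would look something like this — again a sketch with made-up values, and the header name depends on what you typed into the settings:

```shell
APICAST="https://api-1-stage.example.apicast.io"   # hypothetical
KEY="0123456789abcdef"                             # hypothetical

# Key sent as an HTTP header instead of a query parameter:
curl -i -H "api-key: $KEY" "$APICAST/fruits"
```

Putting the key in a header keeps it out of URLs, which tend to end up in access logs and browser history.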
Like I said, this sandbox is available to you for 30 days, but you're welcome to spin up another sandbox environment when the 30 days lapse. Look out for additional learning paths that we are publishing — things like OpenID Connect, how to set up developer environments, how to work with the developer portal, and so on. We are definitely looking at launching all of those instructions as well, and you should see them soon. Here's where the developer portal is — this is the default developer portal which comes up out of the box — and you will have instructions on how to set up new developer environments and how to edit the portal for your own use. If you look, there's one dummy user that has been set up for you. So this developer portal is available out of the box, and you can go ahead and customize it; instructions for that will be made available to you later. And that's it for me, Jennifer — that's everything, I can stop sharing. Awesome, Jaya, thank you. Thank you also to Cover, Bernard, and Evan, the other SMEs from our team — we're excited that you were here. Remember to launch the guide and follow the instructions. This concludes our session for today; thank you very much for joining us. Go back and keep exploring developers.redhat.com — that menu will take you to many of our other services, like Red Hat OpenShift Data Science and Red Hat OpenShift Streams for Apache Kafka, which is our Kafka service, and you can try them today and stand up environments just as you did here. Jaya, you did a wonderful job. Thank you, everyone.