Two quick words about myself: my name is Hans-Peter Grahsl, I'm based in Graz, Austria. I joined Red Hat about a year ago as a developer advocate. Next to my day job, I try to be an active open source community member, and in recent years I have done a bunch of contributions, mostly in the ecosystems of Apache Kafka and MongoDB. But enough about me, let's get this going. Let's start this session and do something that developers do: let's build an application. At least, let's do a quick thought experiment of what that would or could look like. When we are building apps, we grab our code editors, decide on a language and framework of our choice, and then we write the code. Period. Nice and easy. Well, it's a bit hard to see you, but I can still imagine some question marks above some heads in the audience, wondering: seriously, dude, that can't be all, right? It's not just about writing the code. And of course, the younger ones here might have a hard time imagining it, but yes, there was a time back in the day when we essentially did just that. As developers, we focused primarily on writing our code, and that was it. But fast forward to 2023, and there is of course a plethora of additional things to consider when we have cloud-native development in mind. We need to do and care about a lot more. We need to find a way to create proper container images. We need to decide on a container registry to share those images. Eventually, hopefully, we want to run containers based on those images somewhere — maybe on OpenShift or some other flavor of Kubernetes. That means we need to think about deployment manifests: YAML, lots of YAML, which has nothing directly to do with the application code we write. And that's not all; we need to add CI/CD capabilities on top. And then we need a place to store all of that, in one repository or maybe distributed across multiple repositories.
And so it won't be long until doing all of those activities — and I was barely scratching the surface here — feels a little overwhelming, something where you might even get frustrated at times. I think it's enough to take a quick glimpse at the CNCF landscape and how it evolved, or in fact how it exploded. What you see in the images here, the left versus the right, covers a period of just three years, namely between 2017 and 2020. If you looked at the CNCF landscape today, three years later, it would be even more packed. But the actual problem, I think, is slightly different. It's not just the sheer amount of technologies out there. It's the tech proliferation that happens in companies — uncontrolled tech growth, without finding some way to streamline all of it. This will definitely lead to lengthy and inconsistent onboarding, especially for new developers joining your company. That is quite costly for the company from a business perspective, and at the same time it frustrates developers a lot. So basically, this is a lose-lose situation nobody wants to have. The question here is: is this shift-everything-to-the-left attitude still a good thing going forward? I mean, the original idea — you build it, you run it — was okay when you think about it. But the problem is that over the years, more and more critical activities and tasks have been creeping into the daily lives of developers. Again, this is a burden, and it's overwhelming for many of them. So let's face it: we are simply asking way too much from developers these days. If you take a quick look at this example here — yes, it's probably a little exaggerated, but you can still probably relate to it somehow.
And the question is: not even in our wildest dreams should we wish for all of that knowledge, all of those skills, to come together in one single role, let alone one person, right? It's insane. So I think it's time to rotate our perspective and stop pushing ever more things to the left, towards the developers themselves. Instead, we should think about how we can shift things down. What I mean by that is we should establish groups of people within our companies, usually called the platform team. The members of this team — the platform engineers, if you want to call them that — have as their primary task to pave the way for the other dev teams in the company, so that those teams can be more productive and efficient in using the actual technology, or maybe even the multiple different tech stacks, that matter within their company right now. This notion of shifting things down brings me directly to internal developer platforms, or IDPs. The primary objective of an IDP is to enable so-called developer self-service workflows. The IDP itself — the internal developer platform — is an artifact. You can consider it an internal product that is built by the platform team and consumed by the other dev teams; they are the customers of this IDP. Again, the idea is that the platform team comes up with a guided and supported path for all the other teams to successfully deliver software. One core ingredient here is the concept of a so-called golden path, or golden path templates; there is similar terminology around the same idea — sometimes it's called a paved road. The bottom line is that a golden path is, at a very high level, a pre-architected and supported way for dev teams to build and deploy particular pieces of software. That's very generic, I know, and so some of you might wonder what is actually contained in such a golden path.
And I think there is no single or simple answer, because it depends a lot on what it is you're actually going to build — which type of software, what you want to create. Nevertheless, there are some commonalities in such golden path templates, and I tried to highlight four of those that you find very, very often when we talk about golden path templates for cloud-native development today. The fundamental underpinning is a templated repository. This templated repository will, in the end, result in an automatically scaffolded initial code base. From an application perspective, that initial code base can be as simple as a Hello World app in a specific technology stack. It can be an empty starter kit, yet it contains everything, from a configuration perspective, that any developer needs to get quickly started with a particular project using a specific tech stack. The second key element in a golden path is a set of deployment manifests, so that we can provision the application, and also the necessary resources we might need in a cluster setup, in order to run the application. Again, there is no single way of doing that. Maybe you want to work with Helm charts; that's perfectly fine. Maybe you say no, we want to use Kustomize with overlay-specific configurations; that's also perfectly fine. The third key ingredient for a golden path is an automatically configured CI/CD pipeline. Yes, ideally it should be customizable if some dev teams have specific needs, but you should still have some common ground for how you approach CI/CD in your company. And finally, a very important topic: observability — again, a foundational capability for cloud-native development these days. We should bake that into such a golden path template as well, with at least some reasonable default settings regarding observability that form something like the baseline for a new project.
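To make these four ingredients a bit more tangible, here is a minimal sketch of what such a golden path could look like when encoded as a Backstage software template. All names, parameters, and repository coordinates below are hypothetical; a real template would typically add further steps for the GitOps scaffolding, pipeline setup, and observability defaults.

```yaml
# Illustrative golden path template in the Backstage scaffolder format.
# All names, owners, and URLs are made-up examples.
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: quarkus-backend-template
  title: Quarkus Backend Service
  description: Scaffold a Quarkus service with CI/CD and GitOps manifests
spec:
  owner: platform-team
  type: service
  parameters:
    - title: Application settings
      required: [component_id, namespace]
      properties:
        component_id:
          title: Component name
          type: string
        namespace:
          title: Target namespace
          type: string
  steps:
    - id: fetch-skeleton
      name: Fetch application skeleton
      action: fetch:template
      input:
        url: ./skeleton
        values:
          component_id: ${{ parameters.component_id }}
          namespace: ${{ parameters.namespace }}
    - id: publish
      name: Publish source repository
      action: publish:github
      input:
        repoUrl: github.com?owner=my-org&repo=${{ parameters.component_id }}
    - id: register
      name: Register in the software catalog
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml
```

The parameters section drives the form a developer fills in; the steps section is the automated part of the golden path that runs afterwards.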
The most important thing to keep in mind here is that everything is technology-agnostic. You decide — your platform team, together with the dev teams, decides — how they want to do that and what such golden path templates look like. Again, if you prefer to do your GitOps with Argo CD, that's perfect. If, however, you say we want to use Flux, well, let it be Flux. I guess you get the idea: you decide, basically like with an à la carte menu, how you want to build things and which types of technologies you want to use. Very quickly, the main benefits we get from that are that different complex tech stacks become much more approachable, and also easier to maintain in a consistent and systematic way. There are basically two different perspectives on this: it helps new developers become productive in much shorter periods of time when they can use the IDP together with these golden paths, but it also helps seasoned, existing engineers who know quite well what they are doing and are very productive. They will have considerably less cognitive load when they need to switch teams internally and the new team works with a different tech stack than what they are used to. Also, our industry is moving at a faster pace every year, so the day will certainly come when you need to evolve the tech stack you build upon. Today it might look different than it will in one or two years, and that's okay. Again, IDPs and golden paths pay dividends because they help you evolve your tech stack in a streamlined and systematic way. It's helpful and removes friction not only for developers, but basically also for other essential project-related roles. Let me switch directly and introduce you to some concrete technology now that we can use to build such things, and this is Red Hat Developer Hub.
Again, we briefly saw it in the keynote this morning here on this stage, and I want to explain it to you with the example scenario that we are going to deploy. So first, what is Red Hat Developer Hub? It's an open developer platform built on the upstream open-source project called Backstage, and it is designed to be used in an enterprise context to build your own internal developer portals. I just want to highlight three of its major capabilities, because we don't have the time in such a short session and I also want to show you a demo. The key ingredient, at the center — at the heart — lies a software catalog. That software catalog is supposed to store everything, literally everything, that matters for your software projects. It stores links to repositories, it registers components, it knows how these components relate to each other, it knows ownership information; it can be your single pane of glass for everything, including technical documentation and whatnot. The second important thing are those golden paths, which are encoded as templates. We are going to look at one of these very soon. The templates themselves are registered in the catalog, which means that if a new dev team wants to get started on a project, they go to the central software catalog, check whether there is a template available for the application and the tech stack they want to use, and take it from there. And the third important aspect is a set of plugins that allow us to integrate with various different technologies and tools, basically across the whole application and software lifecycle. Some examples include a Tekton plugin for doing CI, an Argo CD plugin for doing GitOps, a Keycloak plugin for authorization and authentication, and things like that, some of which we are going to see in the demo.
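For illustration, this is roughly what a component entry in the software catalog looks like. The entity below is a hypothetical example with made-up names; it shows how ownership, the owning system, and API relations are declared in a `catalog-info.yaml` descriptor.

```yaml
# Illustrative catalog entity (all names are hypothetical).
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: poi-gateway
  description: Gateway between the Angular frontend and the backend service
  annotations:
    github.com/project-slug: my-org/poi-gateway
spec:
  type: service
  lifecycle: production
  owner: team-a
  system: workshop-system
  providesApis: [gateway-api]
  consumesApis: [backend-api]
```

It's exactly these `owner`, `system`, `providesApis`, and `consumesApis` fields that power the ownership and relationship views we'll see in the demo.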
Talking about the demo, I want to show you Red Hat Developer Hub in action now, and we are going to use such golden paths to deploy a simple demo scenario. It's an end-to-end scenario consisting of three applications. The first application, the frontend, will be an Angular single-page application. It will show different points of interest on a world map. The Angular app talks to the API of a gateway app, which acts as an intermediary towards the actual backend application that serves the data points the Angular app wants to show on that map. Those two backend apps are written with Java and Quarkus in this case, but again, these are just technology examples. So let me switch directly to Red Hat Developer Hub right here. I'm in the software catalog — I hope that's big enough to see from the back as well — and we are now filtering for templates. You see that right here? We have these three templates, one for each of the three components I just briefly mentioned. All templates are, in this case, very similar in structure. They handle the whole CI/CD part in a similar fashion, but they are, of course, for different technologies: one for the Angular frontend, and the others for the backend parts. So let me start by applying the first template. We're going to start with the frontend here, just providing some parameters. In your template, you can freely define what your users — in this case, the developers — should be able to customize. There is very little to do in this case; it's a simple template that makes a lot of assumptions in the background, but everything you can imagine can be configured using these form fields. The cluster is pre-filled; I'm just going to choose a namespace, let's call it that, which will be used on my cluster.
I will use my user as the owner, and we provide some image coordinates for the container image. Then we get an overview, and I create that. What happens now is that a number of activities are processed sequentially. First, the templating engine takes the app skeleton for this Angular app — usually it's just a skeleton; here it's a full application, so that we have something to look at afterwards — and replaces parameters where needed. It applies some templating to it. Then it publishes this source code repository to the GitHub org that I could choose in the wizard. It could be GitLab, or any other version control system that has a plugin for Backstage right now. Then we do the same thing, basically, but instead of the app source, it scaffolds the deployment manifests. In this case, we have Helm charts in the background for both things: to deploy everything that our CI pipeline, based on Tekton, needs, and to deploy the actual application itself. You also see some Argo CD-related output here in the logs of the templating engine, which means we are scaffolding Argo CD-related manifests as well. Then we publish that second, GitOps-related repository. Next, we register the component, basically representing the application itself — in this case, the Angular app — in the software catalog of Backstage. Finally, we instruct Argo CD to take action based on the GitOps repository that was just created. We're going to look at what this component looks like in the catalog, but before taking a closer look, let me just apply the other two templates, so that we don't have to wait later for the CI/CD of the other two components of the whole application to finish. So I'll quickly do the same thing for the gateway and the backend app. It's the same procedure. What did I choose? Good question.
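To give an idea of what those scaffolded Argo CD manifests might contain, here is a hedged sketch of an Argo CD `Application` pointing at such a generated GitOps repository. All names, URLs, paths, and namespaces are invented for this example.

```yaml
# Illustrative Argo CD Application for the scaffolded GitOps repo
# (repo URL, path, and namespaces are hypothetical).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: poi-frontend
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/poi-frontend-gitops.git
    targetRevision: main
    path: helm/app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

With `automated` sync enabled, Argo CD continuously reconciles the cluster against the GitOps repo — which is why, a bit later in the demo, a merged code change gets redeployed without any manual deployment step.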
Let me verify, because it should of course be the same namespace. I forgot what I called it. What did I call this namespace? Does anyone remember? What did I use here? Ah, there it is — this is the namespace I just used, and of course I should use the same namespace here again. Then I go here, do the same thing, and I will do one more round of the scaffolding and templating, everything similar to what we saw before, now for a backend microservice, if you want. And finally the third one as well, so that in the end we have the full application working. Quarkus backend, some description, that's fine. Again, the same namespace, and that's it. Now let me walk you through it — let's go back to the frontend component that we deployed at the beginning, to understand how beneficial it is to have the software catalog with all the registered components there. So let me go to the catalog here, and we see these three components that we just deployed. The app itself I will open right away. You see a links section here; you can provide any pre-generated links when you register your component in the catalog, and this opens directly. And if the CI/CD — let me see — is finished... yes, the frontend container image was built. And if we look down at Argo CD, it's synced and healthy. So yes, it should be up and running. Opening that link indeed shows me a points-of-interest map. The map right now is still empty, and that's okay, because we don't have a backend at this point. Once the gateway comes up, we'll see at least two data points, hard-coded into the gateway itself, that will show up here as an additional map layer. We are not seeing them right now, so let's quickly check the other two components. They are still doing the CI part: creating the container image and pushing it to the internal container registry on the OpenShift cluster behind the scenes.
And once that is done, we will see some data points appearing there as well. But let me go through some of the interesting things here. Like I said, we have a couple of links that help us work with this particular component. We have links to the respective source code repositories — in this case, the GitHub repo that was scaffolded just two minutes ago. We can get insights into the topology, so we can see that I have a successfully deployed Deployment on Kubernetes, containing just one pod, which is up and running. Basically, if you have seen OpenShift before, this should look familiar: you get these graphical insights into your deployments, whether they are healthy, and so on. Then we also have the integration with version control, meaning another two tabs here. In this case, we are using the GitHub integration, showing us issues and the status of pull and merge requests. There is of course nothing there yet, because it's a freshly scaffolded code base. We also have a non-graphical way to explore what the Kubernetes-related resources for this particular component look like — everything right from within Red Hat Developer Hub, which means that as a developer, you don't even need to go into the actual Kubernetes cluster to check those things. You can, but you don't have to; you get the most important insights right here in the UI. Then another cool thing: we can inspect which APIs are provided by which components registered in the catalog. Remember, we have seen that the gateway app is the intermediary between the Angular frontend and the actual backend. This means the gateway provides an API for the frontend, but the gateway also consumes an API from the backend, which is what we see right here.
The whole system is also something we can easily understand: we can go to the system view, and we see the diagram here showing how all the parts — the components, the applications — work together in this single logical system, which is called workshop-system, with my namespace as a suffix. We have the three components here, and we see which component consumes and provides which of the APIs. So it's all there, right? Of course, talking about APIs, we can do one more thing here and go to the API spec as well. This brings up the OpenAPI spec for this gateway API, directly linked from the actual source code repository behind the scenes. We can also directly interact with the Swagger UI and play around with that API right here. Everything, again, we can find and explore directly without leaving Red Hat Developer Hub. For instance, if we try out this API request against the gateway, we see that we currently have two map layers available for the frontend: the demo backend served by the gateway itself, and the actual map layer — the national parks layer — served by the backend app. If we go back to the app — I haven't even refreshed it — we now see those two layers here, for the two backends that were registered in the frontend, and some data points have come up on this frontend application. Another cool thing is that you can of course do interesting things related to code changes; I will come to that in a second. But let me briefly give you another view first. I opened the Swagger UI now directly from the links section, and you could say, well, that takes you out of Red Hat Developer Hub, which is not ideal — but you can do it directly here as well.
You can just switch to the definition tab, and it loads the Swagger UI within Red Hat Developer Hub, which is also quite nice because it means you can stay inside this central internal developer portal without really leaving it, right? You can also see — I mentioned ownership — that it is very often hard to understand who owns which component, the larger your systems and your landscape grow. You have that here too, and you see that I am owning six services, four APIs, two systems — and in fact it's three, two, and one; it's just double for each, because I pre-provisioned the whole thing as a fallback scenario in case something had failed. So you see six services because I have the whole scenario running twice under my user; that's why you see more. But again, you can nicely navigate, filter, and find things, which is sometimes very, very hard — in particular when you are not just talking about one dev team of five people who probably meet every day and where everyone knows who is doing what. In a larger organization, it very quickly becomes a mess, and you have a hard time understanding who owns what, who maintains which of the APIs, and so on and so forth. So that, too, is quite handy to have right here in the catalog. Let me do one more thing and show you how we can make code changes. What if you want to provide a new feature? Again, referencing the keynote earlier, we briefly saw that as well, but those of you who weren't there will probably find it quite interesting too. I'm going to show that for one of the components I have right here. Let me go to the catalog, go to my components, and choose the gateway app. Let me do one thing.
We just saw that the gateway app provides only two hard-coded data points in the US, and it would be nice if we could add new data points to that specific map layer on the backend side of things. So we can go to the gateway component and open OpenShift Dev Spaces. Let's give it a few moments to load. This, again, is based on Eclipse Che — ah, I had one running; I will close that quickly. It will restart the workspace for the particular source code repository backing this component, the gateway component. It will spin up, in this case, a Visual Studio Code instance, hosted for me, so that I don't have to set anything up on my local machine when I want to contribute to that code base. It should be up and running pretty soon, I hope. We are there already, yes. I trust myself, in particular when I'm on a stage, so I will accept that. Here we have this code base in Java using Quarkus, and what we want to do is add a data point real quick. In order to do that — we see these are the two hard-coded data points, just an in-memory list provided by this gateway component — the code is already there, because in the interest of time that's faster. I just want to add this endpoint here real quick, so that we can send a POST request to the API and add a new data point. Let me now do one thing that you should never do, of course — maybe if you are at home and nobody's looking — I'm going to commit and push that directly into the main branch, right? We should run and build the application, we should have tests in place; all of that can be done right here from the hosted Visual Studio Code instance as well. I'm good, I'm skipping that part, and I'm just committing directly: 'add POST endpoint'. But you get the idea: you would commit to another branch, and you would have your pull request and code review and everything as usual, right?
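To make the change a bit more concrete, here is a plain-Java sketch — not the actual demo code — of what the gateway's in-memory list of data points and the new add operation could look like. The class, field, and data-point names are all invented; in the real Quarkus gateway, this logic would sit behind a JAX-RS resource whose @POST method delegates to the add operation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the gateway's in-memory point-of-interest store.
// In the actual Quarkus gateway, a JAX-RS resource would expose this via
// HTTP, e.g. a @POST method on the API path delegating to addPoint(...).
public class PoiStore {

    // Simple value type for a point of interest (names/fields invented).
    public record Poi(String name, double lat, double lon) {}

    private final List<Poi> points = new ArrayList<>();

    public PoiStore() {
        // Stand-ins for the two hard-coded US data points from the demo;
        // the names and coordinates here are placeholders.
        points.add(new Poi("Demo Point West", 36.1, -112.1));
        points.add(new Poi("Demo Point East", 44.4, -110.6));
    }

    // The operation the new POST endpoint would call to add a data point.
    public Poi addPoint(String name, double lat, double lon) {
        Poi p = new Poi(name, lat, lon);
        points.add(p);
        return p;
    }

    // What the existing GET endpoint would return to the frontend.
    public List<Poi> listPoints() {
        return List.copyOf(points);
    }
}
```

After the change is pushed, sending a POST request with a new point would grow this list, and the frontend would pick up the extra data point on the map layer.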
That's the change, and I'm going to sync it directly into the repo. Once that has succeeded, we should immediately see here that a new CI/CD cycle is triggered, right? So, using a webhook, we were able to trigger the pipeline again. Now the pipeline is building the changed code of the gateway app, and it will be redeployed once that is done. So that we don't have to wait for this right now, I will just do that part in the already provisioned instance, or we can come back to it at the end. Let me just go to the slides one more time, and then we could add a data point here. But I'm about to wrap up anyway. This illustration basically shows what we have just experienced together. It's a recap slide, if you want, and it shows how all the components work around Red Hat Developer Hub to give you the experience we have just seen. Again, it's very important to understand that this is one opinionated way of doing things. You may or may not like it exactly that way, and if you don't, that's totally fine. You can swap out more or less anything for any other component, right? You would keep Red Hat Developer Hub and could say: no, we don't do GitHub, we do GitLab. No, we use Flux and we don't do Argo CD. Maybe you don't use Tekton but something else — that's also fine. So you can, again in an à la carte sense, put together what you want. If you don't want to use OpenShift Dev Spaces with VS Code but with IntelliJ, perfectly fine — just configure it differently and you are good to go. You don't have to use the OpenShift-internal image registry; you can use anything else. That's important to keep in mind. You can use it with whatever works for your company's current situation and needs. To summarize: I hope the demo and the talk gave you an idea of how an IDP with golden path templates can make you more productive.
We have seen how Red Hat Developer Hub, based on Backstage, can be that single pane of glass you want to have — the central place where you find everything, where you can explore and discover everything without searching for hours until you find out: oh, is there an API for this? Who is the owner of that API? And so on. You can find everything in one place. We have also seen that we can provide — and this, again, is what the platform team is doing — these self-service workflows with guardrails. You don't want uncontrolled tech growth; you don't want custom scripts flying around everywhere that nobody wants to maintain after a while. We have also seen how we can integrate automation best practices for doing CI/CD, and we have at least seen a few of the things we have in Red Hat Developer Hub to understand health, and maybe also some security aspects — for instance, doing container image scanning and integrating with Quay.io. There's a Quay plugin; we haven't seen it, but that's also possible. If you want to try it out, well, you can. This is in Dev Preview, a private Dev Preview, which means you can go to that QR code and basically send an email saying that you are interested in trying out the Dev Preview, and we'll try to get you going. You can get some support in a Slack channel once you are registered for this Dev Preview, and then you can try it out from the convenience of your home and see how you can benefit from this thing called Red Hat Developer Hub. With that, I am basically done. Thank you very much for joining; I hope it was interesting and that you learned something. Thanks for taking the time to come, and enjoy the rest of the conference. It was great to be here. Thank you. We have one question, from Marvin. The question is: how is your experience with Tekton from a usability perspective?
It seems very low-level compared to, for instance, GitLab CI. Well, I would say that from an experience standpoint you are probably right, but the question is: does that question come from a developer, or does it come from a platform engineer? Because the developer shouldn't need to care about that. As a developer, you shouldn't necessarily even know — at least you shouldn't have to know — whether your pipeline is using Tekton or something else. This is, again, something that a very specific, dedicated group of people cares about. They provide a way to do CI with Tekton, and the dev teams are just consumers of that. You don't need to know all the details about how to get going with Tekton. That's the whole point: do it once, and then consume it across all of your dev teams. Maybe you have one or two people who are really, really familiar with Tekton and know all its gory details and peculiarities, and the others don't need to care. At the end of the day, what you want is a working CI pipeline. As a developer, I don't care: is it Tekton? Is it Jenkins? Is it GitLab CI? Is it GitHub Actions-based? Whatever — I don't care. What I want in the end is that my app is built as a container and I can deploy it somewhere. And again, I don't want to take care of the deployment either. We have seen that, with everything set up based on Argo or Flux, this happens more or less transparently; I don't need to care about it. I hope that answers the question. There's another question that says: does the platform support serverless, same as AWS? I'm trying to make sense of that question; I'm not 100% sure what is meant by it — maybe you can elaborate a bit. But if the question is: can you use serverless concepts with Red Hat Developer Hub? — I assume that was roughly the question — then yes, of course you can.
You can do that in a cloud-agnostic, Kubernetes-native way by integrating with Knative and doing serverless with that. Or you can say you want to integrate with vendor-specific cloud serverless technologies such as AWS Lambda; of course, you can do that too. You can think of having a Terraform plugin that allows you to talk to and provision things in public clouds. I hope that answers the question. Okay, and with that, there is one final question about whether there will be a recording of the talk available. Yes, I think so. I think everything here is recorded; that's what these folks are here for. I guess you will have a recording at some point, but I'm not 100% sure. Again, thank you so much, and enjoy the rest of the day.