Good morning. I think we can start now; it's 10:30. Hello, everyone. Welcome back. Before we start, I have a few things to remind you all. There will be time for you to ask questions at the end of the talk. When you ask a question, please come up to this mic, or ask me and I'll pass the mic to you — we want to record the interactivity in this room so the virtual audience can hear your questions. So now I'm going to turn it over to our speaker. Thank you so much, Marco. Yes, the mic is mostly for the virtual audience, so they can hear your questions. Thanks again, and thanks for joining my session early on a Friday morning. For the next 25 minutes I'm going to talk a little bit about integration testing for modern Java applications based on Quarkus, and I'll try my best to showcase some interesting stuff. My name is Daniel Oh. I work for Red Hat as a developer advocate, focusing specifically on cloud-native runtimes — Quarkus, Spring Boot, Node.js, and some other stacks. I've spent a lot of time bringing these cloud-native runtime technologies into serverless, service mesh, and GitOps practices — not only those technologies, but also a bunch of CNCF projects. That's what I've been doing for the last few years. Here's my contact information — Twitter, LinkedIn, GitHub, and my URL. Feel free to reach out to me. So, every developer has a common concern when developing business applications: most likely they don't want to stand up the whole technology stack on their local machine, whether that's an Apple silicon laptop or a huge desktop, because it's pretty difficult to install every single backing service locally, just like production.
For example, many enterprise applications connect to a data store — an Oracle database or PostgreSQL, et cetera. So how do you do that? Most likely you pull down a container image locally, install a lightweight database, or link to an external database. That's a known practice for developers, regardless of Java, .NET, Python, or Go. A more interesting and preferred way for developers, including me, is to use an in-memory database, because I don't want to install a huge database locally that keeps consuming my resources; I just need to verify my application's functionality and capability on my machine. That's why I prefer an in-memory database over the actual data store. The problem is that sometimes your application code only works in your local environment, and once you deploy the application to the production environment, it doesn't work. And it's sometimes too late to figure that out, because you have a fantastic CI/CD pipeline that delivers the application to production in the next three minutes. To avoid that kind of issue, a lot of enterprise companies have their own QA team to make sure everything is OK — functional tests, security tests, and a lot of other tests they run as part of integration testing. But what if you could have that kind of testing capability in your local environment at the very beginning of the application development phase — what we call the inner loop of the development process? If you had that capability locally, at the very beginning while you're writing code, it would be a very good practice for avoiding unnecessary integration-test failures later on.
The one caveat to making that happen is that developers normally need to set up continuous testing, or some testing tooling, with third-party libraries or extra tools in their IDE or local environment. So today I'm going to talk about Quarkus. I believe some people already know what Quarkus is, but I'll introduce it really quickly. It's a Java project, like Spring Boot — a project, not a product. Of course, Red Hat provides a product-grade Quarkus build, but it's a 100% open source project that gives developers the capability to build and implement cloud-native microservices focused on Kubernetes clusters. It also has really fitting features for taking your application into serverless or event-driven architectures. If you just want to develop your Java application on virtual machines rather than Kubernetes and container technology, you don't need to consider Quarkus — Quarkus is very much focused on Kubernetes. Almost every other Java framework was born before Kubernetes, which appeared back in 2014 — eight years or so ago — and those frameworks have had to put in extra effort to improve and optimize their features and underlying technology to work with Kubernetes and Linux containers after the fact, and that doesn't always work out. Quarkus, however, was born after Kubernetes and Linux containers, which means all its features and capabilities were designed and built for containers and Kubernetes from the start. So here is the main point of today's topic: zero configuration. Whenever you want to use a data store — PostgreSQL, MySQL, MariaDB, MS SQL Server, even IBM Db2 or Oracle Database — you just add the relevant Maven dependency to your project, and Quarkus takes care of it all. For example, you just add the JDBC PostgreSQL dependency to your Maven project.
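As a concrete illustration of "just add the relevant Maven dependency," this is roughly what the PostgreSQL extension looks like in a Quarkus `pom.xml` (the version is managed by the Quarkus BOM, so none is listed here):

```xml
<!-- Quarkus JDBC PostgreSQL extension; triggers the PostgreSQL Dev Service in dev/test mode -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jdbc-postgresql</artifactId>
</dependency>
```

With this dependency present and a container runtime available, dev mode can start a PostgreSQL container for you automatically, as described next.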
Quarkus then stands up the relevant container image automatically, as long as you have a container runtime — Docker or Podman, for example. And it's not only databases; it covers common enterprise services in general: a Keycloak server for security, for example when you need to secure your RESTful API; HashiCorp Vault when you need to store secrets; business automation; a distributed cache server; a Kafka broker as a messaging backend; and so on. All of this enables developers to quickly and easily stand up a technology stack that looks just like the production environment. So I'm going to stop my boring talking here and jump right into my demo environment. OK, here we go — hopefully everybody can see that. I'm going to use the Quarkus CLI, the command-line interface. You can also use Maven, Gradle, or whatever Java build tool you need. The reason I use the Quarkus CLI is that it makes it much easier to scaffold a new project, generate example code, and build and deploy applications, with auto-completion features. The quarkus create command lets me create a new application. I'm going to add a few dependencies — Quarkus calls them extensions: JDBC PostgreSQL, which allows me to connect to the database; Hibernate ORM with Panache, which handles object mapping from Java beans to the database; and RESTEasy Jackson, which allows me to parse and produce JSON-format data. OK, I've created a new one, and here is the new project. I'm going to open it in my IDE, which is VS Code. Excuse me — can everybody see that? I'm going to make it bigger. OK, sorry about that. The pom.xml shows it's just a Maven project — you could also generate a Gradle build. So here's the new Quarkus project, the new version, and the dependencies I just added.
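The scaffolding step described above can be sketched with the Quarkus CLI like this (the project and package names here are placeholders, not from the talk):

```shell
# Scaffold a new project with the three extensions mentioned in the talk
quarkus create app org.acme:demo-app \
    --extension=jdbc-postgresql,hibernate-orm-panache,resteasy-jackson

cd demo-app
quarkus dev   # start dev mode: live coding, continuous testing, Dev Services
```

The same extensions can be added later to an existing project with `quarkus extension add <name>`, or via the equivalent Maven goals.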
It automatically downloaded the dependency libraries to my local file system, and it also generated sample code — a simple hello world. The first thing, always: I'm going to start quarkus dev, which is development mode. It provides live coding capability for developers. Normally, every time you change code in the Java world, you have to compile, build, and restart the application — which is a total pain. Quarkus, however, provides live coding, just like Node.js and JavaScript: whenever you change the code, it automatically compiles, rebuilds, and restarts. You don't need to do that manually; Quarkus takes care of it. And once you run quarkus dev, Quarkus automatically stands up a PostgreSQL test container. You don't need to use the Docker CLI — no docker run, no docker pull — because we already added the JDBC PostgreSQL dependency, which triggers starting this container on your local system. When you press 'd' in the runtime terminal, Quarkus brings up the Dev UI, which is one of the cool features: a graphical UI showing what capabilities you have on your local machine right now. Here is the Dev Services UI, and as you can see, the whole configuration was set up for you automatically. Meanwhile, when you go back to the IDE and look at the application.properties (or application.yaml) file, you can see there's nothing in it, because Quarkus automatically sets up the configuration for accessing the database in the locally running container. This is one of the great ways to reduce integration testing in the end: you just create the project as a normal process, as part of the inner loop, you start the runtime, and the backing data stores like the database are already set up. That's one of the cool things today.
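To make the "empty configuration file" point concrete, here is a sketch of the datasource configuration you would normally have to write by hand, and that Dev Services makes unnecessary in dev and test mode (all values illustrative):

```properties
# WITHOUT Dev Services you would hand-write something like this in
# application.properties (values are placeholders):
#
#   quarkus.datasource.db-kind=postgresql
#   quarkus.datasource.username=myuser
#   quarkus.datasource.password=mypassword
#   quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/mydb
#
# WITH Dev Services, the file can stay empty in dev/test mode: Quarkus
# starts a PostgreSQL container and injects the connection settings for you.
# Explicit configuration is still required for the production profile.
```

In other words, the empty file isn't missing configuration; the configuration is being supplied at runtime by the Dev Service.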
Now I'm going to go to the terminal, open it here, and try to access an endpoint — /hello — and I get "Hello RESTEasy", which is cool. And here, I press 'r' to resume continuous testing, an out-of-the-box feature of Quarkus. In the meantime it restarts the container, to make sure everything is fresh, and then you can see the one test case passing. When you go back to our project, we have one test scenario — the hello test here. So if I change the return value to, say, "Hello DevConf.US", save the file, and go back to the terminal, it instantly detects the error: the expected return is "Hello RESTEasy", but I just changed the code. However, when I actually call the application's REST API, I get "Hello DevConf.US". This is the interesting part, because many developers say: "OK, I don't want to spend time running my unit tests locally, I don't have enough time, I'm crazy busy implementing the business functionality and requirements. I checked my application's functionality and it's really cool, I don't see any issue — I'll just commit and push the code to GitHub or some source control repository." And then three hours later, someone from the SRE team pings you: "Hey, we've got an issue — your code doesn't work in the production or testing environment." "No way. That's not my problem, it's your challenge, because my application works 100% on my local machine. I'm 100% sure — you should figure it out." And maybe half a minute later he replies: "Oh, it's not our problem. It's your problem." This actually happens all the time. That's why developers and operations never, ever become friends.
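The test that continuous testing is running here is, roughly, the one Quarkus generates for the hello sample. This is a non-runnable sketch (it needs the Quarkus and REST-assured test dependencies on the classpath, and the package name is a placeholder):

```java
// Sketch of the generated endpoint test that continuous testing re-runs
// on every code change. Package name is illustrative.
package org.acme;

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.is;

@QuarkusTest
class GreetingResourceTest {

    @Test
    void testHelloEndpoint() {
        given()
          .when().get("/hello")
          .then()
             .statusCode(200)
             // This assertion fails the instant the endpoint is changed
             // to return "Hello DevConf.US", which is exactly what the
             // continuous-testing output shows in the demo.
             .body(is("Hello RESTEasy"));
    }
}
```

Pressing 'r' in dev mode keeps this test running in the background, so the feedback arrives seconds after the save, not three hours later from the SRE team.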
If you can avoid or reduce that kind of challenge at the very beginning, everybody is happy in the end. So this is one of the cool things — I just changed one single line. Let me go a bit further: I'll copy and create a new unit test here — say a new test method called greeting, against a new path, /greeting, expecting something like "hi Quarkus folks". I save the file, go back here, and I get an error — because I just added a test case, exactly like test-driven development, but I haven't actually implemented the REST API yet. So, following test-driven development practice, I go back, copy the existing endpoint, add a new one for /greeting, change the method to return "hi Quarkus folks", and save the file. The application rebuilds and the tests rerun automatically; in the meantime, in the Dev UI you can view the test results and see the error console there. Let me go back — I still have an error in the greeting test, so I'll copy the string over to make sure the paths and values match. OK, that looks good — and back here, the remaining failure is from the old run. So this is how you actually iterate on the application. Let me add one more thing — a test scenario that just checks the greeting value. OK, let me clean that up; it's too long. And now one more thing: a new file for the database side, because we already have the PostgreSQL database standing. I'm going to add a new entity here based on PanacheEntity, which lets me omit a lot of the fundamental boilerplate operations. For example, I add a name field, and I also add an address field. And then one thing I need to do here: I copy this endpoint and change the return type to produce JSON-format data.
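The entity being created here can be sketched as follows. This is a non-runnable fragment that assumes the Hibernate ORM with Panache extension; field names follow the talk, and depending on your Quarkus version the persistence import may be `javax.persistence` instead of `jakarta.persistence`:

```java
// Sketch of the Panache entity from the demo. Extending PanacheEntity
// provides an auto-generated id plus built-in operations such as
// listAll(), findById(), persist(), delete(), etc.
package org.acme;

import io.quarkus.hibernate.orm.panache.PanacheEntity;
import jakarta.persistence.Entity;

@Entity
public class Person extends PanacheEntity {
    public String name;     // e.g. "Dan"
    public String address;  // e.g. "Boston"
}
```

With the active-record style of Panache, the public fields are mapped to columns and the usual getters/setters and DAO boilerplate disappear.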
Then a new path, say /person, where I return a collection of person data — just retrieve all the records. Let me go to the person resource. Previously, when you developed data-transaction business logic, you had to implement all the fundamental operations yourself — retrieve all, find by ID — on top of Hibernate ORM, which is a boring job, but you still had to do it. With the Hibernate ORM with Panache extension, it's much easier and quicker to get that kind of functionality. So I add that, fix the return type, and save the file. Let's add one more method to retrieve a record by ID: a method named findById with a @PathParam, so we pass the ID in the path, and it returns a single person — Panache actually provides that findById capability out of the box. Here we go — cool, I've just created two REST APIs. Now I'll add some database seed data: insert into person, with values using the Hibernate sequence for the ID — my name, Dan, and I'm based in Boston; actually I live in Brookline, so I'm local here. Let's add a few more records: Steven in New York City and James in New Jersey. I save the file, go back, and access the new URL — and as you can see, the data has already been inserted into the database. You can also look up a specific ID, and it finds the record automatically. So I just keep developing business logic — data transactions, RESTful APIs — while behind the scenes Quarkus stands up the PostgreSQL database, and in the meantime I never have to stop, rebuild, recompile, or rerun anything, because Quarkus gives me out-of-the-box live coding capability.
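The two REST endpoints described above can be sketched like this (a non-runnable fragment assuming the RESTEasy Jackson extension and the `Person` Panache entity; depending on your Quarkus version the imports may be `javax.ws.rs` instead of `jakarta.ws.rs`):

```java
// Sketch of the demo's resource: list all persons, and find one by id.
// Both operations come from Panache; no hand-written DAO code is needed.
package org.acme;

import java.util.List;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/person")
@Produces(MediaType.APPLICATION_JSON)
public class PersonResource {

    @GET
    public List<Person> listAll() {
        return Person.listAll();      // provided by PanacheEntity
    }

    @GET
    @Path("/{id}")
    public Person findById(@PathParam("id") Long id) {
        return Person.findById(id);   // provided by PanacheEntity
    }
}
```

The seed data goes in `src/main/resources/import.sql`, along the lines of `INSERT INTO person(id, name, address) VALUES (nextval('hibernate_sequence'), 'Dan', 'Boston');` — the sequence name here assumes the default Hibernate ID generator.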
So this is modern Java application development with continuous testing, and it removes a lot of the integration-testing burden from other teams. We're almost out of time, so I'm going to go back to my slides. OK, a quick summary. If you want to keep adding capabilities to that demo application — for example, Keycloak-based authorization to secure your RESTful API, where one method can only be accessed by a specific role such as admin, while other RESTful APIs can be accessed by the user role — you can do that with a Keycloak server. But you don't want to learn how to configure a Keycloak server, install it, create a realm, and all the rest. So when you just add the OIDC or Keycloak extension to your Quarkus project, Quarkus automatically starts up a Keycloak server as a container, installs it, creates the relevant security realm, and creates default users with specific roles like admin and user. That's it. I've also created a bunch of tutorial videos, including on this topic — you can visit my channel via the URL, scan the QR code, and watch whatever you need. Not only Quarkus, but also Kubernetes, DevOps practices, serverless, and functions. And if you add your comments, thoughts, and suggestions, that's very helpful for me when creating the next video. OK, I think I'm done for today — oh, one more thing. I actually brought some books; I have five. This is the Quarkus for Spring Developers book. My colleague Eric Deandrea wrote this book, and I participated by writing one chapter on how to develop an event-driven application that communicates with a Kafka broker. Reactive programming and event-driven architecture are super popular these days.
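The Keycloak Dev Service described above can also be tweaked through configuration. The following sketch assumes the `quarkus-oidc` extension; the exact property names should be checked against your Quarkus version's Dev Services for Keycloak documentation, and all values here are illustrative:

```properties
# Adding quarkus-oidc triggers the Keycloak Dev Service: Quarkus starts a
# Keycloak container and creates a default realm, client, and users.
# Optional customization (illustrative values):
quarkus.keycloak.devservices.realm-name=quarkus
quarkus.keycloak.devservices.users.alice=alicepassword
quarkus.keycloak.devservices.roles.alice=admin
```

On the application side, restricting an endpoint to a role is then a matter of annotating the method, e.g. `@RolesAllowed("admin")`, rather than configuring anything in Keycloak by hand.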
In the book you can figure out how to develop that kind of thing using Spring Reactive versus Quarkus. There are also multiple use cases and practices: how to develop RESTful APIs with Spring Boot and Quarkus, how to develop database transactions with JPA, how to develop reactive applications, and how to deploy and optimize Java applications on Kubernetes, Spring Boot versus Quarkus. You can actually download this book, and today I brought these five copies — so if you're interested in reading it over the weekend, just for fun, let me know and I can give you one of the five books. All right, we have about one minute. Any question from the virtual audience, Marco? OK, that's good. Any question from here — about this topic, Quarkus, or any other open source project? There we go, give me one second — we'll bring you the mic for the virtual audience. Thanks for that. So when you start the containers using Testcontainers as part of the Quarkus framework, is it expected that you access those services externally from the Quarkus application and do something else, like populate the database or configure something extra? Or are they not meant to be used outside Quarkus? Yeah, that's a really good question. Basically, when Quarkus starts a Dev Service — one of these container images, like Keycloak or a Kafka cluster — you don't need to set up any configuration by default. However, you can customize that configuration: where is my bootstrap server, what are the username and password to access that database, and so on. You can also specify a particular container image — for example, PostgreSQL 14 — and it will be pulled and downloaded; likewise, if you need to use a specific container image from your corporate registry, you can actually do that. But there are some limitations on which container images each Dev Service can run. All right, any other question?
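The image-pinning customization mentioned in this answer can be sketched with datasource Dev Services properties like the following (the registry path is a placeholder; check your Quarkus version's Dev Services documentation for the exact keys):

```properties
# Pin the database Dev Service to a specific image/version:
quarkus.datasource.devservices.image-name=postgres:14

# Or point it at an image in a corporate registry (placeholder path):
# quarkus.datasource.devservices.image-name=registry.example.corp/db/postgres:14
```

The caveat from the answer applies: a Dev Service expects an image that behaves like the service it is standing in for, so arbitrary images may not work.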
What sort of plumbing is required if I had a custom service that I wanted to use? Is my mic on? OK. Sorry — can you hear me? Yeah, sure. What sort of plumbing is required if I want to take a custom service that I've built and integrate with it from a Quarkus service that I'm writing integration tests for? If I need to discover a URI, a port, and a password, what would be required to add that to a Quarkus project I'm building? So your question is: what is the best way to connect to existing services from a Quarkus application? It depends on what kind of existing or legacy application you want to consume. Quarkus is a standard Java framework, so most likely you would consume it via a RESTful API, or through a backend integration server like a Kafka broker or a message queue server. Quarkus also provides gRPC support, so you can generate a stub class based on gRPC, and it handles multiple protocols for connecting to legacy systems. So Quarkus provides more modern options than just an MQ: the REST client, gRPC, and sometimes a backend messaging or event-driven server in between. Those are the suggested ways to communicate between a Quarkus application and other legacy systems, OK? All right, I think we're done. Thanks for joining today's session. I'm going to stay around here, so if you have any questions, just come find me. Thank you again, and have a good rest of the day. Any interesting discussion? It's not in the way. One more. Yep. Should be almost done. Let's see. This is kind of annoying — it takes some time to load the images. Yeah, it should load. We hope so. Otherwise, we can start over. No, we'll sort this out here. All right, good morning. Can you hear me well? Cool. Nice. So good to be here. I am Guilherme Cassolato, and this is Alex.
And we are both from Red Hat — engineers at Red Hat. Today we'll be talking about different ways to add authentication and authorization to applications. As developers, we want our applications to run in a way that people — or other applications — can use them, and at every use, at every request, we want to be able to identify who is using the application and make sure that people can only do with the application what they are supposed to do with it. So basically, we're talking about authentication and authorization. And there's more than one way to add authentication and authorization to our applications. Traditionally, one way to do it is by writing the rules, the code, and the structures related to auth ourselves, as part of our application's codebase. That is one way to do it. Maybe, for example, we create a users table in our database, require users to create a password as part of the sign-up process, and verify it at every login. Maybe we create a session token, something that makes it possible for the user not to have to repeat that every time, or whatever. And then we start spreading a bunch of ifs and conditionals along the code to check whether the user has been authenticated, whether the user has access to that particular function, and so on. Sometimes we resort to code libraries that abstract some of those functionalities for us, to make our lives easier — as long as we invoke the right functions at the right places in the code, we're good, and the library takes care of most of the heavy work for us, or more or less like that. Anyway, there is another way to implement auth, which is the approach where you basically shield your application inside a shell that makes sure that, at the entry point and from the outside of your application, every request targeting your application is first verified.
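The "ifs and conditionals spread along the code" pattern being criticized here can be made concrete with a minimal, self-contained sketch. All names below are hypothetical; the point is only that authentication and authorization checks end up hand-written inside the business logic:

```java
// Minimal sketch of the traditional approach: auth checks written inline,
// by hand, inside the business logic. All names are hypothetical.
import java.util.Map;
import java.util.Set;

public class InlineAuthDemo {

    // user -> roles, standing in for a users table plus session lookup
    static final Map<String, Set<String>> ROLES =
            Map.of("alice", Set.of("admin"), "bob", Set.of("user"));

    static String deleteReport(String user) {
        // the kind of conditionals that get repeated all over the codebase
        if (user == null || !ROLES.containsKey(user)) {
            return "401 Unauthorized";   // authentication check
        }
        if (!ROLES.get(user).contains("admin")) {
            return "403 Forbidden";      // authorization check
        }
        return "200 OK";                 // the actual business logic
    }

    public static void main(String[] args) {
        System.out.println(deleteReport("alice")); // 200 OK
        System.out.println(deleteReport("bob"));   // 403 Forbidden
        System.out.println(deleteReport(null));    // 401 Unauthorized
    }
}
```

Multiply these two `if` blocks across every endpoint of every application in an organization and you get the spaghetti, duplication, and coupling problems discussed next.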
And we will be focusing more on that latter approach here today, especially giving a comparative perspective in contraposition to the other, more traditional one. By the end of this talk, the idea is that you will all have learned a little bit more about how to add authentication and authorization to your applications, especially running on Kubernetes, leveraging Kubernetes and the kind of cool things that Kubernetes offers us. So, without further ado, let's jump right in. This is the agenda for the day. We will cover very quickly how we've been doing auth so far. We will oppose that to this other approach of auth extracted, or externalized, outside of the code. We'll present a new tool, which is a flexible, Kubernetes-native external authorization service, and zero trust — just another buzzword here for something that makes sure no request goes through to your application unless it's first verified. We have a little demo, and then we wrap it up, and Alex will give us a glimpse into the future and the things we have ahead of us. So let's start with how we've been doing auth so far. More or less like this, right? Baking authentication-related code into our application's code has been one of the traditional approaches, as I said before. And not unusually — quite often — that leads to some spaghetti code, because once you start writing the if conditions here and there, it's not unusual to find yourself in a position where it looks like spaghetti code. And even with the libraries I mentioned, which you make part of the code, there is a cost, of course. Maybe they do help with the complexity of your code; maybe you do it right and it's not hard to read. But once you do that, you start worrying about matching versions and compatibility matrices and that kind of stuff, because now you have added dependencies to your code.
So, quickly summing up a couple of things we see as challenges of the traditional approach, where you add auth as part of your application's code. Spaghetti code — already mentioned. The thing with dependencies: when you start adding dependencies to your code, they become part of your application's code as well. Coupling is also something that's very hard to avoid — you depend on something, you write a function that you invoke directly in your code, so it's tricky to avoid. And I mean coupling of code, but other levels of coupling as well — for example, of scalability: if it's all part of the same process and you need to scale one of those concerns, either the core business logic of your application or the auth concern, you of course need to scale the other by the same factor as well. Duplication of code: you start seeing yourself repeating the same patterns all over your code, which is also a challenge. And even when you do all of the above right, and you manage to work around those things, one situation you quite often find yourself in is scattering the auth-related logic, your auth policies, not only across the code of one application but — especially if you consider the organization level, with multiple applications — definitely scattering auth-related stuff across multiple applications. So you need to know where, in the different applications and the different parts of the code, the auth-related stuff lives. Something not very often mentioned: sometimes we make mistakes — accidents happen, right? Just because things are too close to each other, maybe you touch one thing and accidentally touch something you were not supposed to; maybe you close an if condition in the wrong place, and things go in a different direction than expected.
Atomic builds — and by that I mean: when auth-related stuff (and whenever I say "auth", by the way, it's short for authentication and authorization, or anything related to those things) is part of the same application, if you need to change or modify anything in the auth concern, that also means rebuilding the rest of the application, obviously — it's all part of the same thing. And that translates to things like longer CI/CD pipelines, more testing, et cetera. But I think a common denominator between all of the above is that when auth is implemented as part of your application's code, most, if not all, of the responsibility falls into the hands of the developers. Usually it's only the developers who touch that part of the code, and if anything is needed related to interpreting the policies and rules implemented in the code, people usually go to the developers, because they are the only ones who understand it, the only ones who know about it. And that is a lot of responsibility. So what can we do differently here? Let's oppose that with the externalized-auth approach — and by that we mean... hopefully an example will help us here. Imagine that you have a few applications — let's say three different applications implemented in three different technologies: a GoLang application, a Java application, and a Ruby one, OK? This is the traditional approach: in the GoLang application, you have some piece of code that integrates with some legacy auth system — these extra boxes here represent auth-related systems that you need to connect with. Perhaps for your Java application, you went for a different approach.
For example, you imported a library that handles some complex auth-related stuff, like the connection with some centralized auth server that, let's say, the organization needs. And the same for the Ruby application. So when I say externalizing auth, effectively I mean: what if you could take those things as they are — just imagine for a second — and move them to a separate layer, which we call the external auth layer, in a way that lets you start decoupling and detaching one thing from the other? The most immediate benefit we can see here is that it allows you to focus more on the core functionality of your application, and hopefully simplify its implementation, simply because you managed to extract the auth concern of that application into another layer — and I literally mean another process, not the same application. And by doing that, if things are truly decoupled, thoroughly separated from each other, perhaps that also gives you some flexibility to start changing a few things. For example, let's say that for the GoLang application that had a legacy system, you want to replace it and experiment with some novel auth technology — you can do that in a way that is fairly transparent to the application itself. Maybe you have a company-wide, organization-wide policy to make more use of the centralized authorization service, and you can do that too. So you can start changing things here and there — replace another legacy system with the centralized auth server — focusing on that external auth layer and no longer touching the core application code. And things that you spot being done the same way in different places — maybe you can start adding some standardization and simplification there, unifying things.
So, to sum up a couple of the benefits usually associated with the approach of auth extracted outside of the application's code. Number one, separation of concerns: now you have auth as a separate concern. Then decoupling — as I said, we're truly talking about different processes, so of course they have some agreement between them, but they are definitely not coupled to each other: there's no coupling of code and, most importantly, there's no coupling of scaling. They are truly two different processes; if you identify a bottleneck in one of them, you can scale it independently of the other. I already mentioned simplicity, and flexibility to change things. Standardization — in our example, that translates to things like improved maintainability, so it's easier to maintain not only your code but also your policies. Testability — you can test things independently of each other; this is actually a very important point to highlight. Governance as well: by not depending too much on the developers — and that's actually my next point — to respond to things scattered across the code, hidden somewhere in the code, you can start freeing yourself from that responsibility and have other people participating in the process who are more focused, for example, on the auth part of things — even if that just means auditing your auth policies and auth rules, and enforcing the things that are meant to be enforced at the level of the organization. So you definitely also improve in terms of governance. And again, as kind of a consequence of all the above, you're also able to split the work and the responsibility among more people, so it doesn't all rest in the hands of the developers.
I think that is one very good point, if you can extract that concern. So basically we want less of this and more of that, right? Well, so far it might not have sounded entirely new. External auth is not really something that was invented yesterday; that is basically the approach that has been taken by API gateways and sidecar proxies. So what's new here? Well, the context has changed. We believe that now is a way better time for this approach than ever before. Now we have these amazing tools that we can rely on and leverage more, like Kubernetes itself, obviously. Kubernetes, apart from the well-known and excellent things it brought us in terms of orchestrating our workloads and so on, also comes with a bunch of extra things that are super cool and super useful for us developers, such as the whole extensibility story: the way it makes it possible for us to extend its API and interact with things running on our Kubernetes clusters in a more seamless way, in a language that is basically the same language in which we do the rest of the setup of our applications nowadays. I'm talking about things like Kubernetes custom resource definitions, which we'll cover a little better afterwards, and also things like the Kubernetes operator pattern and how those things really empower us as developers, or features like the Kubernetes authentication and authorization system, which is now basically almost for free for us. So we can use those things. But I'd also like to mention other tools we have at our disposal nowadays, like the Envoy proxy, which has been bringing API gateways and sidecar proxies to an entirely new level, in terms of efficiency and performance, but also functionality. 
And also Istio, which is kind of the merge of those two things: Envoy and all its power running on Kubernetes, with a proper Kubernetes way of interacting with it, a proper Kubernetes API. So let's talk about something we've been working on for the past few months, thinking about all of that, all those challenges, all those benefits, and the idea that now is the time. A while ago we started working on this project called Authorino. It is something that allows us developers, quite easily and in a very flexible way, to add identity verification, or authentication, to our applications. And it's quite smart, actually, because it allows you to add authentication and authorization based on multiple different methods: from API-key-based authentication to authentication based on JSON Web Tokens. Yesterday we had a very nice talk about JSON Web Tokens here in this very room. I would also add a very interesting method that Kubernetes provides, which is relying on Kubernetes authentication itself; effectively, I'm talking about a Kubernetes API called TokenReview. And because it supports multiple different ways to add authentication, or identification, to the application, it also comes with a couple of cool features to implement things such as what is called token normalization. It allows you to trust different sources of identity, different identity providers, and combine those, since those multiple providers usually differ in the structure of how they represent the identity, the person or application behind that thing we call the identity. 
In order to simplify the actual authorization policies you want to enforce afterwards, it's sometimes desirable to do, in the middle, this thing called token normalization: basically, you normalize those different structures into a single one that you can address more easily later. And the other thing Authorino helps us with is effectively enforcing authorization policies. It also comes with different ways to express authorization policies: from simple pattern-matching authorization rules, like checking a claim inside a JSON Web Token, a JWT claim, or checking whether the user is bound to a particular role in, let's say, a role-based access control (RBAC) authorization model; up to OPA. I don't know if you are familiar with Open Policy Agent, which is a very powerful policy engine; Authorino comes with a built-in OPA module that allows you to express authorization policies in OPA as well, and it will evaluate those. Or integration with the Kubernetes authorization system: Kubernetes also offers us an API called SubjectAccessReview, which Authorino integrates with. Meaning that we can represent authorization policies, or bindings, that is, who can perform what, using the Kubernetes API. So you can create roles and role bindings in Kubernetes, and Authorino can check those with the cluster. It also comes with some cool features, like the ability to fetch ad hoc metadata for your authorization policies: send a request to an HTTP service and get some metadata that will be combined with your policies when they are evaluated at runtime. And, because of that, it also comes with lots of caching functionality: you can cache metadata pulled from external services, results from previously evaluated authorization requests, among other things. 
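To make this concrete, here is a rough sketch of an AuthConfig that combines JWT-based authentication with a simple pattern-matching authorization rule. The hostname, OIDC endpoint, and claim path are invented for illustration, and the field names follow the v1beta1 API as I recall it, so double-check them against the Authorino documentation:

```yaml
apiVersion: authorino.kuadrant.io/v1beta1
kind: AuthConfig
metadata:
  name: example-protection          # hypothetical name
spec:
  hosts:
    - my-api.example.com            # hypothetical host
  identity:
    # Authentication based on JSON Web Tokens issued by an OIDC server
    - name: oidc-users
      oidc:
        endpoint: https://sso.example.com/realms/demo   # hypothetical issuer
  authorization:
    # Simple pattern-matching rule: check a claim inside the verified JWT
    - name: only-admins
      json:
        rules:
          - selector: auth.identity.realm_access.roles
            operator: incl
            value: admin
```

Policies that outgrow pattern matching like this can instead be expressed in Rego via the built-in OPA module, or delegated to the cluster through SubjectAccessReview.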
It also allows you to inject some data into the request before the request even hits the application it's protecting. Remember when I mentioned a shield, kind of a shell that you put your application inside of? Basically, what I'm talking about is this, and we'll come back to it in just a second. Remember from the previous slide: we're talking about an agent, an authorization agent, that helps us achieve this path here, managing and implementing that part, okay? So what about what Authorino is not, just to reinforce here and put things in contrast? Authorino is not a proxy, and it's not a gateway. Envoy, which I mentioned, for example, is a gateway, or a proxy; Istio has a gateway; but Authorino is not. It's not an identity provider, so it's not something like Keycloak or Red Hat SSO, which are excellent identity providers. It's not an OpenID Connect server, if you are familiar with the OpenID Connect protocol for authentication based on OAuth2. It's not an OAuth2 client or an OAuth2 server. It's not an SSO server; again, we have excellent solutions for SSO and OAuth out there, so that's not the idea. It's not an identity broker for user authentication, something that will redirect the user to some identity provider like GitHub or a Google account; again, we have excellent solutions for those. It's not a code library that you import and build as part of your application; it's truly external authorization. It's not a data store for your authorization metadata. And, some confusion that sometimes happens here: it's not a rate-limiting service either. All right, so how does it work? As a super high-level description: in the control plane, we, the developers, build an application, which in this scheme we'll call the upstream app, or upstream API, and we deploy that application behind an Envoy proxy facade. 
So Envoy becomes the front end for your application. Then the developer applies a Kubernetes custom resource, defined by Authorino, called an AuthConfig. The AuthConfig is basically a recipe that represents the protection you want enforced for your application: before any request reaches your application, that AuthConfig will be executed, will be enforced. Then, in the data plane, clients, that is, other applications or users that will consume your application, can now obtain a token, or some form of access credential, for your application and start sending requests to it. Effectively, what happens is this: the client sends a request, for example an HTTP request (it doesn't have to be HTTP, but let's say HTTP). Envoy is the facade that actually serves the request; it receives the request and establishes a fast gRPC connection with Authorino, the external authorization service, which basically enforces that AuthConfig custom resource I mentioned. It looks up, finds, and enforces the AuthConfig: it verifies the identity, checks the authentication; occasionally it fetches external metadata from the external sources configured; it enforces all the authorization rules and policies; if it's the case, it can customize some responses and inject some data; and it gets back to Envoy with either OK or not OK, authorized or not authorized. In case of OK, Envoy forwards to the upstream application, which effectively serves the request, again through Envoy. So this is basically how it works, okay? Okay, let's jump to a demo. Cool, let's see, I think it should be on here. Just start this over, yep. Let me first describe what we're going to be doing here. On Kubernetes, we will deploy an application called the News Agency API. It's a REST API; for short, just News API. And it exposes just a few endpoints. 
You can create a news article in the News Agency API, you can retrieve the list of news articles, and so on; just a simple set of endpoints. And you always perform those operations linking the news article to a particular category. For example, I want to create a news article in the sports category, or retrieve the list of news articles in the politics category, and so on. With that application deployed in Kubernetes, in order for users and other applications to be able to consume the REST API, the News API, we need to create a Service. So this here, hopefully you can see my cursor, this is a Service, for example news-api.svc; it's actually longer than that, but I'm basically talking about the Kubernetes Service here. And in case we want to expose this API outside the cluster, we also need an Ingress, a way for requests to come in, so that people or other applications from outside the cluster can send requests through the Ingress and hit the application. This is the application without any protection at all; a simple application. Let's run our demo to this stage very quickly. Maybe I can bring this here. Yeah, so what I did upfront was: I already created a Kubernetes cluster here using Kind. I don't know if you are familiar with Kind, but it's an easy way to create your own containerized Kubernetes cluster locally, which is very good for development and testing purposes; it's kind of a more lightweight alternative to minikube, for example. Even more lightweight. So I already created a cluster, and I deployed the Authorino Operator, which allows us to request instances of the Authorino authorization service. A couple more things here, just to make our life slightly easier: I also deployed an instance of Keycloak which, if we have time, I will also cover in our demo, right? 
Namely, how we can integrate with an auth server like Keycloak. Now, those things I've just mentioned are typically done by a cluster administrator; they're not usually part of the developer's workflow. But anyway, let's run the demo. Let's create a namespace for the News API, and now let's deploy the News API here. Let me show this YAML file very quickly; if you are familiar with Deployments in Kubernetes, there's nothing much to see here. It creates a Deployment for the News API, so there's the image here, and it exposes a port, which is the port the News API will be serving requests on. We create a Service pointing to that TCP port, and also the Ingress here. And with the News API running, which we can see here at the bottom, we can send requests to the API. Basically, this is what we'll be doing here: sending a curl to the News API. This is the Ingress we created, and this is a GET request, an implicit GET request using curl, to the /sports endpoint of the API. I mentioned it before, but it's also here; and by the way, this will all be available on GitHub, of course. If you send a GET request to a /{category} endpoint, you get the list of news articles in that category. So let's do that here. We send the request, and nothing really surprising happens: the request hits the application, the application serves the request, which is just an empty list, because we haven't created any news articles so far. Okay, so the next step: let's lock that application down by putting it behind Envoy. I will create an Authorino instance here; I will deploy an instance of Authorino. In this case, I will be requesting an instance of Authorino that is mostly dedicated to the News Agency API, so it will run in the same namespace. There are other ways to do this, right? 
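For reference, the unprotected News API setup just described might be sketched like this; the image name, port, and hostname are all made up for illustration:

```yaml
# Deployment of the News API, still without any protection
apiVersion: apps/v1
kind: Deployment
metadata:
  name: news-api
spec:
  replicas: 1
  selector:
    matchLabels: { app: news-api }
  template:
    metadata:
      labels: { app: news-api }
    spec:
      containers:
        - name: news-api
          image: example.com/news-api:latest   # hypothetical image
          ports:
            - containerPort: 3000              # hypothetical serving port
---
apiVersion: v1
kind: Service
metadata:
  name: news-api
spec:
  selector: { app: news-api }
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: news-api
spec:
  rules:
    - host: news-api.example.com               # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: news-api
                port: { number: 80 }
```

At this stage nothing stands between the Ingress and the application; the next step wires Envoy and Authorino in front of it.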
For example, a shared instance of Authorino serving the entire cluster; but in this case it's a very simple instance of Authorino, which we can request by creating an Authorino custom resource here, handled by the Authorino Operator. Once we apply this, the Authorino Operator will spawn an Authorino pod for us in the News API namespace, okay? For simplicity, we also have TLS disabled here; otherwise you would see a reference to a TLS secret defined in the cluster, but it's as simple as you can see. There are some other configuration options, but this is the simplest instance we can request. And then let's modify the Deployment for the News API, effectively putting it behind Envoy. Let me show the difference very quickly. So it would be, oh yes: it's the same Deployment I showed before; we're just adding another container, based on Envoy, that serves on port 8000. And we rewire the Service, the same Service as before, to point requests at TCP port 8000, and the same for the Ingress here. This is a ConfigMap for the Envoy configuration. Nothing much to see here, except that this is something that in the Envoy configuration is called a cluster: it basically represents a connection with another service. So this is the connection Envoy has with the News API, as localhost, I think this is the important part, and the connection with the Authorino authorization service, for Envoy to send that gRPC request and have Authorino check the request, okay? Apart from that, it's just a very simple and straightforward Envoy configuration that routes all requests to the News API, but not before first checking with the external authorization service. Oops, apologies. Okay, so with Envoy deployed, if we send a request now to the same endpoint as before, instead of hitting the API, what we get is a 404. 
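Backing up a step, the dedicated Authorino instance requested here can be sketched as a custom resource like the following. The resource and field names follow the Authorino Operator API as I remember it, and the namespace is made up, so verify against the operator's documentation:

```yaml
apiVersion: operator.authorino.kuadrant.io/v1beta1
kind: Authorino
metadata:
  name: authorino
  namespace: news-api        # hypothetical namespace, dedicated to the demo app
spec:
  clusterWide: false         # watch AuthConfigs only in this namespace
  listener:
    tls:
      enabled: false         # TLS disabled for simplicity, as in the demo
  oidcServer:
    tls:
      enabled: false
```

The operator reconciles this resource into a running Authorino pod that Envoy can then reach over gRPC for the external authorization checks.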
And that 404 is exactly what we expect. As you can see, it's now Envoy serving the request; Envoy talked to Authorino, and Authorino doesn't know anything about that service just yet, so it returns "service not found". So we need to let Authorino know about the service, and we do that by creating an AuthConfig. Effectively, going back to our scheme: we added Envoy, we added Authorino, and now we'll create an AuthConfig. The AuthConfig looks something like this structure. We have the host, so we are associating this auth recipe with requests to this host here. We will be enforcing authentication based on API keys, so you need to have an API key to authenticate to the News API. And we will be using Kubernetes RBAC for authorization, okay? So, with the AuthConfig applied, if we send a request now, instead of the 404 we get a 401, and that is because we are not passing a valid API key to authenticate. The next thing we need to do is create an API key. API keys in Authorino are represented as simple Kubernetes Secrets, with the convention of having this key here, api_key. So this is the secret, and we use the annotations of the Secret to store some metadata about the identity itself. So we create this API key. If we send a request now with the API key, instead of the 401 we get a 403. And it's a 403 because in the AuthConfig we said users need to authenticate with an API key; so Authorino checks the API key, and then checks with the Kubernetes RBAC system whether that user is authorized to consume the API. Effectively, it issues a Kubernetes SubjectAccessReview request against the cluster, with the user information coming from the Kubernetes Secret and its metadata: from the annotations we get the username, plus some extra attributes here to complete the SubjectAccessReview. 
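Putting that together, the AuthConfig and the API key Secret might look roughly like this. The names, labels, and the exact shape of the `kubernetes` authorization block are assumptions based on my reading of the v1beta1 API, so treat this as a sketch rather than a copy-paste recipe:

```yaml
apiVersion: authorino.kuadrant.io/v1beta1
kind: AuthConfig
metadata:
  name: news-api-protection
spec:
  hosts:
    - news-api.example.com          # hypothetical host
  identity:
    # API-key authentication: Authorino trusts Secrets matching this selector
    - name: api-key-users
      apiKey:
        selector:
          matchLabels:
            group: news-api-users   # hypothetical label
  authorization:
    # Delegate the decision to Kubernetes via SubjectAccessReview,
    # using the username stored in the Secret's annotations
    - name: k8s-rbac
      kubernetes:
        user:
          valueFrom:
            authJSON: auth.identity.metadata.annotations.username
---
apiVersion: v1
kind: Secret
metadata:
  name: api-key-john
  labels:
    group: news-api-users           # matched by the selector above
  annotations:
    username: john                  # identity metadata fed into the SubjectAccessReview
stringData:
  api_key: <random-api-key>         # the conventional key name Authorino looks for
```

Rotating or revoking an API key is then just editing or deleting a Secret; no application restart is involved.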
Since we didn't bind the username that we stored in the API key Secret, john, to a role that grants access to the News API, we get a 403. So we need to do that: we need to create a Role in Kubernetes, and a RoleBinding that binds the user john to the Role. Once we do that, if we repeat the request, we've opened up the application again and it's serving requests. And we've brought the application to this stage here. We created the AuthConfig and an API key Secret; the user knowing the secret can send a request; Envoy receives the request and talks to Authorino, an actual external authorization service, so the News API knows nothing about how this is implemented; Authorino contacts the cluster API, checks the authorization with the Kubernetes RBAC system; and the request is served. In fact, we can even create another news article here; if we repeat this again, now it's there, okay? Right, so I'm afraid we're running out of time to cover more of the stuff we can do here. But since we represent that entire auth scheme in the AuthConfig custom resource, in case you need to change something, it should be as easy as modifying that AuthConfig. For example, if you want to integrate Keycloak, you open that AuthConfig, you add Keycloak there, and Authorino would all of a sudden start accepting not only API keys, but also access tokens issued by the Keycloak server. I guess we won't be needing this slide, because, so far, I must have done something wrong, because everything went just as planned; and that never happens. So, wrapping it up: Authorino is an external authorization service that implements the Envoy external authorization protocol, which makes it super easy and flexible to implement authorization from the outside. It is a low-code-to-no-code solution, and as you can see, it speaks the language that we all know, of Kubernetes. 
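Going back to the role-binding step of the demo, the missing pieces are plain Kubernetes RBAC objects. Assuming the API's endpoints are checked as non-resource URLs in the SubjectAccessReview, they would go in a ClusterRole; the names and URL patterns here are invented:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: news-api-access             # hypothetical name
rules:
  # Non-resource URL rules use lowercase HTTP verbs
  - nonResourceURLs: ["/sports", "/politics"]
    verbs: ["get", "post"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: news-api-access-john
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: news-api-access
subjects:
  - kind: User
    name: john                      # the username from the API key Secret's annotation
```

With this binding in place, the SubjectAccessReview issued by Authorino for user john comes back allowed, and the 403 turns into a 200.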
You define an AuthConfig, basically a YAML file that you apply to the cluster. That means you use the same API and the same tools you are used to; you can automate all of that, and you don't need to change anything in your application's code. It's cloud native, so it comes with all the benefits of cloud-native applications, and, of course, why not, Authorino is open source. This is just a bunch of things I'll leave here for you to check later; it's basically a list of acronyms and terminology relating to the different protocols for implementing authentication and authorization that Authorino supports. I already mentioned a few of them, okay? And now I'll invite Alex to give us a brief glimpse into the future, the things we have ahead of us, and how we can bring this to the next level. All right, I'll try to do this really quickly. So I'm going to talk quickly about Kuadrant, which is sort of the parent project where Authorino is hosted. It builds on the concept of policy attachment that comes from the Gateway API, which is a new API you're going to get for your Kubernetes clusters. So Kuadrant has Authorino, as you just saw, and also has Limitador, which is actually the rate-limiting service, back to the point of why Authorino is not one. So what we get now, almost in time, is the Gateway API. The Gateway API provides us with new APIs, more YAML; don't you love YAML? We all love YAML; we're YAML developers. It lets you express different things, but the most important part is that it splits the responsibility: what a developer cares about is making a service available to the internet, while your infrastructure people and your cluster people probably care about how that traffic makes it into the cluster, right? 
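To illustrate that split of responsibility, a developer-owned Gateway API HTTPRoute might look like the following, attaching the service to a Gateway that the infrastructure team manages. The Gateway name and hostname are invented, and the apiVersion may differ depending on your Gateway API release:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: news-api
spec:
  parentRefs:
    - name: external-gateway        # hypothetical Gateway owned by the infra team
  hostnames:
    - news-api.example.com          # hypothetical host
  rules:
    - backendRefs:
        - name: news-api            # the application's Service
          port: 80
```

Kuadrant's policies, for auth and for rate limiting, then attach to objects like this Gateway or HTTPRoute, so each actor expresses their own concern at their own level.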
So the whole Gateway API comes with this concept of policy attachment, which itself comes with defaults and overrides, but I'm not going to cover any of that because I don't have time. What we're aiming at here is letting the proper actor express the things they care about in their own language. In the case we just covered, possibly it's your enterprise that decides that none of the services available in cluster XYZ can be accessed by unauthorized people, whatever that means. Then maybe you write a specific service, and on top of that you want certain roles to be expressed, and so forth. So things vary in terms of who describes what the limits are, let's say, in terms of identity and/or access to a service. Plus, that also evolves over time, meaning you could end up in a situation where, I don't know, your service is under DDoS and you need to stop all traffic for everybody except the admin user, who needs to create the news articles, for reasons. So Kuadrant tries to come up with a set of operators and controllers that you can deploy on your Kubernetes cluster, and then have those people express, in terms of policies, how they want to protect their different services. The same is true for rate limiting, but yeah, I won't discuss any of that. So Kuadrant comes with Limitador as well, and all of that stuff is about to be released as 1.0. Some of it, like Limitador, has been running for a long while within Red Hat internally, but all of it is open source, obviously, and we're slowly releasing it to a broader set of people, I guess. So if you want to know more, you can go to the very early website over there, or the GitHub, or grab Gui or myself somewhere, and, I don't know, beat us to death with questions, of which we may be able to take a few still here, maybe, or maybe we're being kicked out, I don't know. 
But that's about what we had. If you have any questions, we'll be glad to take them and hopefully have answers to them. And if you don't, we're really glad that you came. That's all we had. Thank you.