Hello, everybody. Thank you for coming to this Cloud Native Computing Foundation online program through CNCF. My name is Daniel from DataGrate, and today we're going to be talking about creating a serverless iPaaS on Kubernetes with Apache Camel K. We're really excited to talk about this topic with you, and hopefully you'll follow along and learn something about it.

To briefly cover what we're going to talk about today: we'll look at building your own iPaaS, why you would want to do that, and what kind of advantages it brings for your developers. We'll introduce Apache Camel and Apache Camel K, then talk about building a serverless iPaaS on Kubernetes with Apache Camel, and we'll also show you a brief demo of how that happens in real time.

A little bit about us and what we do. As I said, my name is Daniel, and I'm the Marketing Manager here at DataGrate. We also have our founder and CEO, Andre Sluczka, who is going to take you through the demo portion of this program. At DataGrate we help people build integrations on Apache Camel, and we've been doing that for a number of years now. We built our tool, Jetic, to help people easily build cloud native integrations on Apache Camel: to get things up and running quickly, to build APIs, and to connect their infrastructure together based on the patterns and connectors that are already implemented in Camel.

So today we're talking specifically about iPaaS, integration platform as a service. This toolset really got started to let the citizen and ad hoc integrator space grow, rather than keeping integration strictly in the hands of technically inclined, integration-focused developers. It allowed less technically specialized users to create the integrations their enterprises needed: developers who may have less specific training in building integrations and are more general-purpose developers. iPaaS platforms give them flexible platforms, usually with low-code or no-code development, so that people who aren't as technically inclined on the integration front can still create real-time, cloud-based integrations and third-party integrations, connecting their legacy or on-premise systems to modern cloud environments. Because they aren't designed specifically for IT and development teams, they're more accessible than, say, the enterprise service bus model.

Connecting these disparate systems together is one of the major business functions here: creating and managing APIs to pull and sync data from one area of your infrastructure to another, doing it all in real time, pushing or pulling data from your sources and watching that data flow between integrations as it happens, and transforming the data as well, converting it before it hits its target based on established data transformation patterns. And again, there are prebuilt connectors for common integration use cases. The integration developer really is the chief operator here, and if they can't get integrations up and running quickly and efficiently, your integration project will never get off the ground. So when we talk about iPaaS for developers, we mean giving developers the ability to deploy and manage these complex integration streams, connecting parts of your company's critical infrastructure together as you need it to happen.
Using these prebuilt integration templates to connect specific third-party applications, or to send pull and push requests between whatever servers or hardware you need to connect, you're essentially creating an ecosystem through which all of your business applications can automatically receive whatever critical information they need in real time, no matter where that information is coming from. And when we talk about iPaaS, we're talking about streamlining integrations for engineers, putting control and operability back into their hands, and giving them the control they need over their routes and their APIs. You can modify paths between integrations and implement your connectors and patterns as you see fit. You're giving your developers the freedom for customized deployments, handling different types of integrations and different integration functions, again based on these prebuilt resources: deploying them wherever they need to go, developing routes that fit whatever environment you intend to deploy to, and automatically configuring those routes and integrations for whatever environment you need to run them in.

Now, when it comes to the framework you want to build an iPaaS solution on, that's where Apache Camel comes in. Apache Camel is a Java-based integration framework built on the established patterns from the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. It has support for numerous DSLs and data formats as well, XML, CSV, JSON, YAML, et cetera, for receiving and sending specific information depending on what kind of data you need to handle. It operates on a messaging system: messages contain information determined by the sender, carried in a header, a body, an ID, a timestamp, whatever information you need to deliver, and Camel sends that along to the receiver. Messages travel from endpoint to endpoint through the CamelContext runtime, which we'll dive into in a bit; that's where data runs through the processors and components before it's delivered to its destination.

So let's take a brief look at the Camel architecture itself. Again, it's all based around that CamelContext runtime, and the core of it is the routing engine, which connects endpoints to the Camel integration platform, as well as connecting endpoints to each other, through DSLs, operating on that same messaging system. That's where the processors come in, performing tasks like routing, transformation, validation, et cetera, through these enterprise integration patterns. And then you have Camel components, which are what allow Camel to connect to external systems such as HTTP, FTP, JMS, DNS, GraphQL, IRC, Amazon Web Services, ActiveMQ; there are a lot of different components, and Camel can use them to connect to third-party systems.
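To make that concrete, here is a minimal sketch of a Camel route in the Java DSL. The endpoints, credentials, and filter expression are made-up examples, not anything from this talk; they just show how endpoint URIs pick components and how an enterprise integration pattern sits in the middle of a route.

```java
import org.apache.camel.builder.RouteBuilder;

public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // The URIs choose the components: consume files from an FTP server,
        // apply a message-filter EIP, and hand the result to an ActiveMQ queue.
        from("ftp://orders@ftp.example.com/inbox?password=secret")
            .filter(xpath("/order[@confirmed = 'true']"))   // EIP: message filter
            .to("activemq:queue:confirmed-orders");          // JMS/ActiveMQ component
    }
}
```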
Now, when it comes to deploying your Camel integrations on Kubernetes, that's where Apache Camel K comes in. It's essentially the branch of choice for deploying your integrations on Kubernetes, so it's really what you want to look at if your deployment environment is intended to be a microservices or serverless deployment, through Knative or OpenShift, et cetera. Apache Camel K automatically configures your integrations, and whatever resources you deploy with it on your Kubernetes cluster, for this serverless architecture. There's essentially no need to worry about setting up your Kubernetes deployments each time; it creates them in the cluster for you.

Now, of course, when we talk about building a serverless iPaaS, it helps to understand why: what is the benefit of serverless for integrations? With a serverless architecture, you are building and running services and applications natively in the cloud without having to worry about configuring or managing servers, because the server maintenance is all handled by the cloud providers. Examples of this include AWS Lambda, Azure Functions, and Google Cloud Functions. And since the server functions are abstracted away from the developer and from your enterprise, it lets your development team focus on coding rather than on complex server operations. You're essentially working natively with your cloud environment of choice, wherever you're deploying, to define and execute the functions you specify through APIs and microservices. With less time spent on server upkeep and maintenance, making sure your environment is up to speed, you have more time available for developing the complex APIs you need to keep your business running.

So why should you go the serverless route for your developers and your integrations? What are the benefits of serverless deployments for your business and your developers? Well, obviously, as we've talked about, it reduces the cost of server management. There's no need for you to manage the physical infrastructure required for all this data storage; that's all handled by your cloud provider. And because your cloud provider is supplying these resources, the deployment is scalable on your end: it grows and shrinks alongside your iPaaS, optimizing cost and resource efficiency while still giving you the event-based, agile architecture that serverless provides. With higher availability and observability across your infrastructure to boot, it takes advantage of cloud-based data storage to ensure that no matter where you are, and no matter when it is, you can access your integrations and APIs directly in the cloud.

The issue, though, is that your sources, wherever they are, are still writing data in various formats even as it's delivered to the serverless deployment, and your iPaaS and your target destinations might not be compatible with those formats. Without a common framework that can accept inputs received as messages from your sources and translate the information into a desired format so your targets can understand it, your developers have to put in the work to manually extract and push that data. That's why a system like Camel provides such a benefit: it creates a system where data can be transformed and sent to its intended location, so the target can understand what the core of the message is.
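As a hedged illustration of that translation step, here is roughly what it can look like as a Camel route; the folder, topic name, and formats are placeholders invented for the example, not anything from the talk.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

public class CsvToJsonRoute extends RouteBuilder {
    @Override
    public void configure() {
        // A legacy system drops CSV exports into a folder; the cloud target expects JSON events.
        from("file:/data/exports?include=.*\\.csv")
            .unmarshal().csv()                       // parse CSV rows into Java lists
            .marshal().json(JsonLibrary.Jackson)     // re-encode the data as JSON
            .to("kafka:crm-updates");                // publish to the consuming system
    }
}
```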
Okay, so now that we've gone through the fundamentals of how Camel and Camel K operate, explored why an iPaaS is so beneficial, and covered the benefits of serverless architecture for your iPaaS, we come to the question: how do I get started building a serverless iPaaS? And how can Apache Camel K help me accomplish that, from setting up the environment, to building the integrations and APIs, to testing your routes and running your projects, essentially outlining the steps needed to get to that serverless deployment? This is where we're going to segue into the live demo portion with Andre. So Andre, if you'd like to take it away.

Yeah, hello, everybody. Thank you, Daniel, so much for this introduction to iPaaS. My name is Andre, and I'm the CTO and co-founder of DataGrate. What I have for you today is a demonstration of how to build an API using the Apache Camel K framework and get it deployed into Kubernetes. Before we get started, let me guide you, from a developer's standpoint, through what Apache Camel K is and how it actually works.

The usual approach to software development with Apache Camel would be to have a development environment, something like Eclipse or IntelliJ or Visual Studio Code, and then build a Maven project, build some Spring Boot microservices, or however the approach might look. For Camel K, this is a little different, because all we actually need is the Camel DSL for the integration itself. So instead of having a full project, the only thing we're going to focus on is the actual DSL code. I guess that's what you would call a low-code approach today. Instead of taking a lot of time to build CI/CD pipelines and a full software development lifecycle, the only thing we really need to focus on is the DSL code. At the end of the day, we really want to spend time on building the integration instead of building things around it.

So how is this going to work? Essentially, we write the Camel DSL, which Apache Camel developers already know; if you're new to Camel, you might want to look into the Camel DSL. Then we're going to throw it over the fence, as I like to call it, into the Kubernetes cluster. In the Kubernetes cluster we have an operator pattern: the operator basically waits for us to deploy an integration. What that means is we end up with a DSL file, which we hand over to this operator, and then instead of us dealing with how to build a Java project and how to get to an executable, it's the duty of this operator to build all that for us, to build it correctly, and to get it deployed as a pod in our Kubernetes environment.

If you look at the picture, before we had Camel K, we had complex code: lots of Java files, configuration files, and many other things. We had to build a pipeline using cloud tools, using Jenkins, whatever tooling you need, and then we had to get that deployed and monitor it somehow, which was a lot of work. Now, with the Camel K approach, the idea is that we have one file which contains only the DSL, really only what we need. We then have one command line tool, provided by Apache Camel K, which allows me to interact with the operator, and this will create one pod. So it makes things really easy for us.
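To picture that, here is a minimal sketch of what such a single-file integration could look like and how it is handed to the operator. The file name and route are invented for illustration; the one-line deploy command in the comment uses the Camel K `kamel` CLI that gets mentioned here.

```java
// Hello.java — this one file is the whole "project": no pom.xml, no Dockerfile, no pipeline.
// Handing it to the operator is a single command, for example:  kamel run Hello.java
import org.apache.camel.builder.RouteBuilder;

public class Hello extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:tick?period=5000")                  // fire every five seconds
            .setBody(constant("Hello from Camel K"))
            .to("log:info");                            // the operator builds and runs this as a pod
    }
}
```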
So enough talking, let's look into it. The question might be, okay, how can I work with this? Here at DataGrate we obviously use our tool Jetic to do that, but I want you to know that what I'm doing here is really something you can do with any tool; the only things you need are a text editor and the Apache Camel K command line. I'm going to skip over that for now and just walk you through it in Jetic as we go.

The goal for today is to create a new integration, and we want to call it "my first API." What it's supposed to do is expose an API on our Kubernetes cluster, read a file from the local drive, in this case inside the Kubernetes cluster, which is a JSON file that acts as a little user database, search that file for the requested user, and respond. That's a typical use case: we want to build an API which grabs some data, converts it to JSON, and sends it back to the caller. And what I'm going to show you is how you can do that in basically three, maybe four lines of code. That's it.

The first thing we need is an API. I've done a little bit of homework and already created an API in the system; you can come up with any kind of OpenAPI specification, it's really up to you. So we have one REST GET call, and we're going to expose our API. In order to get that to work, there's something called Camel K traits, and we have to activate the so-called ingress trait: we enable it and say which host to use. For that to work, you'll need an ingress controller installed in your Kubernetes environment; that's something the cloud providers usually have ways of dealing with pretty quickly. I've set that up already, so we have the ingress enabled and we have the REST API.

What we're also going to need in this case is our JSON, which is this one here. It's a very, very simple JSON with two users in it, user ID one and user ID two; that's really about it. So we're going to create a new resource. In this case I've pre-created it already so I can use it, but really, this is just the content of our resource file. If we instruct Camel K to attach a resource file, it will simply put that file into our pod, into the folder /etc/camel/resources. So this file will just end up on our pod. That's about it.

Now, the first thing we need to do in the route is read this file into our body. We can use the simple language for that, and we just type resource:file:/etc/camel/resources/ and, I think I called it, users.json. The only thing this will do is read the file. In the next step we have to use JsonPath, because remember, we wanted to grab one user, either this user or the other one. I can do that, again, with a body operation, but now I'm selecting with a JsonPath expression. Again, I did a little bit of homework and prepared the JsonPath expression. What it will do is look up the user ID, which comes from my REST API call, and I'll explain what that means in a second. So we've read the file, and then we've grabbed the right data with the JsonPath operation.
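As a rough sketch of what this demo builds (including the marshal-to-JSON step that comes next), the integration might look something like the following. The class name, host, field names, and JsonPath expression are approximations rather than the exact demo code, and the `kamel` trait and resource flags in the comment can differ between Camel K versions.

```java
// MyFirstApi.java — an approximation of the demo integration, deployed with something like:
//   kamel run MyFirstApi.java \
//     --trait ingress.enabled=true --trait ingress.host=my-first-api.example.com \
//     --resource file:users.json
// Camel K mounts the attached resource inside the pod under /etc/camel/resources/.
import org.apache.camel.builder.RouteBuilder;

public class MyFirstApi extends RouteBuilder {
    @Override
    public void configure() {
        // One REST GET operation, taking the user id as a path parameter.
        rest("/users")
            .get("/{userId}")
            .to("direct:findUser");

        from("direct:findUser")
            // 1. Read the mounted JSON file into the message body (simple language, resource: prefix).
            .setBody(simple("resource:file:/etc/camel/resources/users.json"))
            // 2. Pick the requested user; the JsonPath expression can inline the REST path parameter.
            .setBody(jsonpath("$.users[?(@.id == '${header.userId}')]"))
            // 3. Marshal the result into properly formatted JSON for the caller.
            .marshal().json();
    }
}
```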
And the only other thing we need to do, essentially, is convert that into JSON. There is a marshal operation available in Camel, marshal to JSON, and we're just going to instruct it to make sure the output is properly formatted JSON. All right, so I'm going to run this, and what happens is exactly what I was just talking about in the presentation: it takes the code, hands it over to the operator, and the operator starts working on it, making sure that our pod gets built and our integration gets built. We can monitor that here, and while this is booting, I'm going to have a look at the actual code it created. This is basically it, one, two, three lines of code: we set our body, we read the file, and we convert it into JSON with Jackson. That's all we need.

Let's make sure it boots up properly. In the meantime, we can already see in our Kubernetes environment that it's coming up; it's already built, it's essentially there. It usually takes maybe 15 to 30 seconds to boot this whole Camel K integration. And if we now call it, giving it a user ID, it responds with the data in a nice, beautiful JSON format. So that's about it; that's really all it takes to build a small integration.

Let's have a look and revisit the code one more time. It's just one, two, three, four lines of code: it uses the simple language to read our file, it uses JsonPath to grab the specific data from that file, and it then marshals that into JSON format. This is really all it takes to build a small API. And that's the beauty of Camel K: it pre-builds the ingress description for us the way we need it, so we don't have to care about that, and it builds the service for us, so we really don't have to take care of anything. This is basically how the YAML would look, but again, it's nothing you have to worry about; it's all done by the Camel K framework.

So yeah, that is about it. I hope you enjoyed the demo of Apache Camel K and that you got a sense of how quickly you can build an integration. And this is just the beginning: the Apache Camel K framework is powerful enough to let you build a whole middleware integration and iPaaS around it. If you have any questions, please feel free to reach out to me or my team; we're always eager to help with any Apache Camel K or Apache Camel integration. Thanks so much for joining, and we hope you enjoyed it. Have a wonderful day.