All right, greetings, and welcome to a day dedicated to accelerating your business transformation and unlocking the power of cloud-native technologies in your enterprise applications. Today at DevNation Day: Modern App Dev, you'll have the opportunity to learn from Red Hat experts and practitioners, and gain valuable insights into best practices, cutting-edge technologies, and innovative architectures, all designed to propel your modern application development forward on the hybrid cloud with Red Hat OpenShift and Kubernetes. This event promises to be packed with information you can readily apply to your own projects, with deep-dive learning tracks, all led by industry-leading experts, covering a vast landscape of topics: AI/ML, app modernization, integration, serverless computing, security, event-driven architecture, containers, application architecture, and much more. In addition to the learning tracks, you can take advantage of our interactive virtual labs, where you can get hands-on experience with Red Hat's leading technologies for building cloud-native applications. This is your chance to experiment, explore, and solidify your understanding of key concepts through real-world scenarios. So whether you're a seasoned developer or just starting your journey, DevNation Day: Modern App Dev has something for everyone. Get ready to learn, connect, and unlock the full potential of modern application development. Now I'm going to introduce our next speaker, Zineb, who is going to present Connecting Disparate Systems in a Lightweight Way. Zineb, I'll turn it over to you.

Thanks. Hello, everyone. I'm so happy to be with you today. Today I'm going to speak about connecting disparate systems in a lightweight way. Let me get to my slides.
So basically, I'm going to talk about the integration challenges you face when you have a system that needs to connect to several other systems, and by connecting, I mean the application developer's side of connecting. There are lots of layers to connectivity, like infrastructure and networking, but here the focus is the application developer: how to connect your application to other applications or other resources, and also what's inside the connection, because for a certain feature there is certain data, and sometimes you act on the content of that data. That's why I see it as a developer topic. So, who am I? I'm Zineb Bendhiba. I've worked at Red Hat since 2020 on the Apache Camel project, where I'm a committer and PMC member, and I work on integration solutions based on Apache Camel. More precisely, I develop and maintain several Quarkus extensions for Apache Camel. Before joining Red Hat, I spent a long time developing applications, mostly in Java, using open source, so before building open source solutions for integration, I had integration problems to solve myself. Today I want to talk about the challenges I've seen while working on big middleware inside big companies, where we had microservices that needed to connect to lots of other systems, and show you how the open source project I work on, Apache Camel, can solve those challenges. The first big challenge is connecting my system to those other disparate systems. To give you a better understanding of the problem, I designed a small scenario that sums up some of the challenges I've seen, just to give you an idea of what I'm talking about.
So suppose this is my system: lots of microservices, which I've labeled service one, service two, and so on, and some of the microservices have their own databases. We expose a REST API to the rest of the world, and we use Kafka to pass messages between our microservices, and also when we need to connect with the outside. So suppose this is my team, this is our system. Then one day the business comes and says, hey, we need to connect with another system, integrate with it. When we actually start the discussion with that other team, who have an external system A, at first glance they seem to have the same architecture: they also have lots of microservices, so maybe it should be easy. But when we start digging, we see they use completely different technologies: their messaging and databases are in the cloud, and they use GraphQL, not a REST API. At that point, if we need to integrate those two systems, developers from one team or the other have to learn the other team's technologies so we can exchange data between the teams. Suppose that, at this point, someone on my team is really interested in learning GraphQL; for one feature we can say, okay, let's do it. But as time goes on, we have to connect to more and more systems. And picture this one, which exists a lot in big companies: the legacy system. We need to integrate lots of services with a legacy system, and that legacy system is proprietary, so we cannot change its code. It's old, it's not going to change, and it's going to stay there for years. And there are two ways of exchanging data with it.
Either we already have IDs for data we know about and want the updated version, and we'll use SOAP to fetch those messages; or we'll receive all the new data from that system as flat files, because files are the only way that legacy system can push the data we need to perform some features in our middleware. Those files will arrive once a day, for example, via FTP. So at this point, someone on my team gets to learn SOAP, and we start to have a lot of technologies to learn for a very small team. They also tell us FTP is only going to be there for a few months, because they're starting to migrate to Amazon S3: at the beginning you'll receive the files over FTP, but in a few months you'll receive them from S3. And receiving files only once or twice a day is not ideal in a world where we need information as soon as possible and everything is this connected. So we look at their system for a faster way to get the data, and they say, look, there is a SQL database in that proprietary system that we can access, and if we can capture the data changes there, we can have the information in flight, which is very important for some operations in today's cloud world. And then we integrate more and more: we'll need to send notifications, for example to Slack, or send emails, and so on. So you see all the challenges a team acting as middleware in a big company can face just to exchange data. Apache Camel can set you free, because instead of learning all the technologies of those external systems, we can learn only one framework, Apache Camel, because Apache Camel has more than 350 connectors.
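To sketch that FTP-to-S3 migration point concretely, here is what such a route could look like in Camel's YAML DSL (one of several DSLs Camel supports). The host, directory, bucket, and topic names below are made up for illustration; the point is that swapping the legacy FTP source for S3 later only touches the source endpoint:

```yaml
# Hypothetical route: poll files from the legacy FTP server
# and publish each file's content to a Kafka topic.
- route:
    from:
      uri: "ftp:legacy-host:21/orders"
      parameters:
        username: "demo"
        passiveMode: true
      steps:
        - to: "kafka:legacy-orders"
# When the team migrates to Amazon S3, only the source endpoint
# changes, for example to: "aws2-s3://legacy-orders-bucket"
```

Everything downstream of the source, the Kafka topic and whatever consumes it, stays exactly the same.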
Most of the connectors you need already exist, for all the technologies we've seen in my team and the other teams. For the database change capture, if you don't know it, there's a project named Debezium that does change data capture, and we have a connector for that too. So you can connect to almost everything, and if you have a system with its own API, you can also create your own Camel component for that specific system. Another interesting thing, and this is my point of view based on personal experience with big migrations and information system transformations: think of Camel as a unified way to do the connectivity between all the systems inside your company, and even with the outside. The problem I've seen personally, as companies get bigger and bigger and make acquisitions, shows up the moment you need to transform a monolith into microservices, or transform some system after ten years. If the connections were made inside the database, with no way to see that there is an integration between systems, it's very complicated to get the bigger picture of the transformation: where the data is, who connects to what, and where the messages come from.
So if you are unified, you can say: let's use one framework everywhere, and also be deliberate about how the connectors you build will be observed. In Apache Camel, and I don't have time to show this today, you can have observability on everything, so being consistent across all your connectors in a big company helps you better observe all those integrations and keep them visible, so that you can say: this is all the connectivity I have with one system. If we want to start moving that system, we know which connections we have, with whom we need to communicate, and how to change the way we communicate. That is very important, and one of the biggest challenges I've seen when we need to do those big transformations. So what is Apache Camel? As I said, it's an open source integration framework, part of the Apache Software Foundation, and today the biggest open source integration community. The project started in 2007, has lots of contributors, and has grown and evolved very well, including for cloud-native environments. It implements most of the enterprise integration patterns described in the Enterprise Integration Patterns book. That book captures the best practices for all the problems we see in integration, and today, with the cloud and APIs everywhere, we have more and more integrations, but the technology-independent challenges of integration are always the same. It's good to have those patterns, because most of the time what we see is people reinventing the wheel. So Apache Camel is an integration framework that can connect anything to anything and exchange data between them; it has lots of features, and you use what you need. As I said, today's talk is about doing this in a lightweight and easy way, so here we have the description of one integration with Apache Camel.
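Before walking through the slide, here is the shape of such a minimal integration in Camel's YAML DSL, as a hedged sketch; the bucket name and HTTP endpoint are placeholders I've made up, not the ones from the demo:

```yaml
# Minimal integration: poll an Amazon S3 bucket and forward
# each file's content to an HTTP endpoint.
- route:
    from:
      uri: "aws2-s3://my-invoices-bucket"
      steps:
        - to: "https://myapp.example.com/orders"
```

The same two-endpoint route can be written in the Java, XML, or Groovy DSLs; the slide shows the Java version.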
This is what we call a Camel route: from tells you the source we get the data from, and to is the destination. If we put this Camel route in an application using Apache Camel, it will start an integration that keeps polling Amazon S3, here with this bucket name, and for every file it reads, it takes the content and sends it to this HTTP endpoint with this path. On the slide I represent the Java and Kubernetes environment (the Kubernetes logo went missing while copy-pasting), and this is the Java way of writing the route, the Java DSL, but just so you know, there is also YAML, Groovy, and XML, so there are lots of ways to write this. What Camel does is this: those are the external resources, and inside there's a whole mechanism handled by Camel to send the message, reply to the source if a reply is needed, and, under the hood, use the underlying APIs to connect to those different resources. If we go back to the example, we need to integrate with FTP for the legacy system and at some point move to S3. So, for example, in our microservices we can create the data, put it on Kafka, since we have Kafka, and create an integration that picks up everything from that Kafka topic and sends it to FTP. If at some point we need to move to Amazon S3, I just need to change one line in my code. So relying on Apache Camel will save you a lot of time. And if you're interested in getting into the code, it's open source: if you need to debug, you can debug, because everything is open source, but you can also just reuse what's already there without reinventing the wheel. Once you start knowing Apache Camel, if you have lots of integrations, it's very easy to create any other integration. The second challenge
that we see all the time is data transformation, because generally we get data from a system in its format and need to transform it into the format the other system expects. Sometimes the two sides represent data in completely different ways, and sometimes they use the same technology for describing the data but with a different schema. Camel gives us lots of options. We have lots of Camel data formats, so we can marshal and unmarshal, and it's very easy: from the route we can transform using information from the body or the headers of the exchange, or we can use a data type. We have some template-based components; for example, we can use an XSLT template to transform one XML document into another. We can use a processor inside the route and write our own code: we get the body, we get the headers, and we do the transformation with them. Or we can simply use a bean, a Java bean that lives outside the route: we give Camel the bean at some point in the route, it sends the body to that method, and the result of the method becomes the new body. By the way, at every step in the route, every time we change the body, the resulting body is the input for the next step. This also makes it much easier to do the content enricher pattern, meaning we don't just go from system A to system B: when we get the message from system A, we call another resource C to enrich the message before sending it to the final system. The third challenge is routing messages. We call them Camel routes because Camel routes a message from one system to one or several other systems, and here we get into the enterprise integration patterns. For example, here we have the content-based router: we route the message depending on information inside the message, either the body itself or the
headers, and based on that it's sent here or there. We have the message filter, meaning we only process the messages that contain a certain thing. The recipient list is a nice example: you get one message and send it to different receivers, and it's just one line listing all those receivers. You can load balance: if you have several of the same Camel endpoints, you can say load balance across them. And this one is the idempotent consumer; it's a bit more advanced, so go see the documentation, but it's implemented on lots of technologies, like databases, caching systems, Kafka, and so on. It's a way of making sure that if the sender needs an ack and a disconnection stops the ack from getting back, we don't process the same message with the same information twice, for example the same ID in this case. We let Camel deal with it: Camel records those IDs, records whether they were already used, and decides whether to continue processing or not. And there are lots more EIPs; just as examples, there are the aggregator, the splitter, and the resequencer. So now I'm going to do a first demo, where I'll show you how we can do all of this with Apache Camel. As I work on the Quarkus runtime, I'm going to show you this on Quarkus, mostly in dev mode; at the end of demo two I'll show you something a bit different. For the demo, I said I have microservices and we connect with the rest of the world, but I'll show you that we can build a REST API with Camel too. That REST API will list coffee orders from a database, get one coffee order by ID, and add one coffee order. The added coffee order will carry some information so that it can translate and
generate an S3 file, say a receipt or an invoice, and it will also send a Telegram message telling the delivery people: you need to deliver a coffee to so-and-so. So I'm going to go to my IntelliJ. The application is already open; let me know if you can't see it, but I think it's okay. This is a Java application on Quarkus. Quarkus, if you don't know it, is a Java runtime tailored for cloud native and Kubernetes: it makes Java boot faster on Kubernetes and use less memory. Here I have the Camel Quarkus extensions, for example S3 and REST, and I'm using quarkus-jdbc-postgresql, so you can see I have lots of dependencies in my Java application. Once I have a single Camel component in my application, all I have to do is extend RouteBuilder, and when the application starts, Camel will go find those routes and start them. So here I have my REST API: a POST, a GET, and another GET. I've done some very basic HTML, because I don't know how to do frontend, just to keep it easy, and the code is here. And I have another set of routes that I wrote a bit differently: it extends EndpointRouteBuilder, so instead of writing URIs the way I showed you, it's method-based, a more Java-like way of writing the code. Whatever style you use, I always recommend the Apache Camel IDE plugin, because it helps me find the properties of the components. And here I use configuration the Java way: I get the values from the application.properties file and they are injected here. Now I'm going to start this application in dev mode with Maven. So here it's starting; it's very quick, but it did
quite a lot: here's all the developer joy of Quarkus. It created my database: I did not put in any information about how to connect to a database, but because I'm using the JDBC PostgreSQL extension, Quarkus Dev Services started a PostgreSQL container for me and wired it up with Camel JPA. Here I have my CoffeeOrder entity; I'm using JPA with Camel, with a named query to get all orders and the information to fetch one by ID. And in import.sql I inserted three rows with IDs one, two, three. So now the application is started, and you can see it started several routes for me. I can go to the application, it's on port 8080, and if I get all orders I have one, two, three; if I get an order by ID, for example three, it gives me three, but the fourth one doesn't exist yet. Let me go very quickly over the code; I can give you the link at the end. In my REST routes I post to a direct: endpoint, because I did not want all the Camel code inline here; direct is an endpoint in Camel that lets you move the logic outside the main route. So if I go to the operations: when I want to add an order, I use JPA and say I'm persisting a CoffeeOrder; it goes to the CoffeeOrder entity and persists it in the database, and we don't need much more code than that. Then I change the body of the message so the caller receives "thank you for your order". When I want to get all, I use JPA with the named query findAll from my JPA class, which fetches everything from the database. And when I want to query by ID, instead of using to I use toD, the dynamic to, meaning the endpoint is built from information that is dynamic, and
here I use JPA with a query, and my query uses that ID header: the ID I put in the GET ends up in the query. That's basically how this works. Now, back in the add-order route, I used a wireTap to do something outside my route: I notify. For that notification I want to send to S3 and notify the delivery. For S3, I take the message and put it in JSON, because I want JSON in my file: I marshal the body to JSON, since with JPA the coffee order is a Java object, then I set the key that will name the file, and here I have the bucket name and my credentials. To notify Telegram, I just need the chat ID, mine here, and the authorization token of the bot I created. So this is all the code we need, nothing more. Here is my Telegram: as you can see, there's just a /start, no incoming messages yet. And this is my Amazon S3 bucket, where the files should land. So if I go back to my application and add an order: here, with some JavaScript, I call a random API to get information about a coffee, I use that information to submit, and it says thank you for your order. Here I see the message "new coffee order for delivery", so Telegram got it, and if I go to my Amazon S3 bucket, I see a new file. And to come back to the code: when I handle add-order, I use a Java bean to generate orders, because the incoming JSON is not a CoffeeOrder. If I go to this bean, it takes a Java object that is a Coffee and transforms it into a CoffeeOrder: I take the blend name, the ID as the coffee ID, and for the user ID, since this is just a demo, I generate a random UUID. So if I go here, it
was the blend name Red Cowboy. If I go to get all orders, I see that new coffee, and it's Red Cowboy, and if I'm the delivery person and I've seen this order four, I can go to get order by ID, put in four, and I get the Red Cowboy. So now I'm going to add some other challenges. Logging is one of them, and there are multiple ways; I'll go a bit faster here. There's the log EIP, but also the log component, and both give you ways of logging; the difference is how far you want to go in customizing your logs, so check the documentation for one and the other to see which you'd rather use. There's also error handling. There are lots of ways of handling an error: you can just define an error handler and be done, but I wanted to show you the dead letter queue, and to run the same demo with a dead letter queue. Here, for example, for the whole class where I have my routes, I set up the dead letter channel with a maximum number of redelivery attempts and a delay, and if the redeliveries are exhausted, the message goes to a certain endpoint. And here comes my demo two. In demo two I'm going to fake the delivery to Telegram: instead of giving my real authorization token to connect to Telegram, I'll give it a dummy one, so Telegram will say it doesn't know which bot I want to connect to, and the delivery to Telegram will fail. Just to show you how it works, for the demo we'll imagine that we want to put all those undelivered messages in a Kafka topic, so that we can handle them manually, or figure out a mechanism to retry them once we can connect to that service again. So I will go back to
the same demo, and roll back a little to add some things here. Here I'm getting that error handler, and in the error route I say: if the redeliveries are exhausted, I get the order ID from the header, I transform my body into simple text that says "this is the order ID you need to process", and I send it to a Kafka topic named error-topic. This isn't going to work yet, because I need to add camel-quarkus-kafka, and again Quarkus does the same thing: it starts a Kafka for me. As you see, I did not stop and restart anything, because I'm in dev mode, so Quarkus restarts on the fly. What's also interesting in dev mode is the Dev UI, which has a panel for Apache Kafka, and if I click on topics, I have zero topics. If I had a Camel consumer consuming from a topic that doesn't exist, it would create it, but here we only produce when we fail to reach Telegram, so the topic doesn't exist yet; it will be created at that point. So Dev Services also started a container for Kafka, but for now my code is still okay, so we can try it out again. It refreshed my database, because that was just a container and it restarted, so we're starting from the beginning with only three coffee orders. If I add an order here, this time it's Goodbye Mug, and submit, then go back to the orders, I have the Goodbye Mug, and I received order number four on Telegram again. If I go to my Dev UI and refresh, since it was delivered to Telegram, I still have no topic in Kafka. So now, to fake the failure, I go back to my authorization token, and instead of using the one in my properties, I put in a random test value, then go out of the editor and let it do the live reload. Just to make sure, I'm going to clear the history with my bot, and if I refresh here, the application has restarted; another container started. I just have three orders, so I'm going to add another, this time Postmodern Pie. I submit it, come back here, and as you can see, I did not receive that one on Telegram. If I look at the logs, I have "error happened sending notification for order four". So now the interesting part: let's see if this made it to my dead letter queue. Here I have my error topic, which we did not have before, and you can see it contains order ID four. So that's just one way of handling errors in Apache Camel; you have lots of ways, and this one is interesting because I've seen people reinvent the wheel around it. If you use Apache Camel, you get this mechanism of putting the data in another channel so you can process it again later with another mechanism. Now I want to show you something different from Java, because I am a Java developer and we've written a lot of Java here. In Apache Camel there is something named Kamelets, and of course I don't have the documentation open, so let me find the page: the Kamelet Catalog. The Kamelet Catalog is a way of using simple sources and sinks and putting them together without really learning the Apache Camel DSL. Here, for example, we can use the AWS S3 source Kamelet to get everything from an S3 bucket, and all you do is write a YAML file and use it like that. At the beginning Kamelets were really designed just for Kubernetes, but we made them work with plain Java too, and I can even show you this on my machine. So I'm going to go here: Camel JBang is a tool that we have, and here I'm going to use the AWS S3 source Kamelet, the one
that is available in our catalog, though you can also create your own. I give it the properties and use a log sink. In the readme I put the way I generated the version with the credentials, because I don't want to show you my credentials, but I can use the CLI to run it. Because I gave it the properties, it will read those files, they will disappear from the bucket, and it will log them, since I have an S3 source and a log sink. So I just want to show you how this runs on my machine, and then I'll show you how it runs on Kubernetes, if everything works okay. I just run the target file, and with the Camel CLI you can run any file: Java, Groovy, YAML DSL, XML DSL, or this Kamelet way of writing YAML. So here it starts, and you see it read three files, because I had three files in my Amazon S3: the first one, the second, and the third. If I go back to the bucket, I don't have any files anymore. So I can create another one: I add an order here, fill it with a random coffee, and submit, and if I come back here, I have the fourth one. Just to check: this one is, no, it's Regs Java; the blend name here is Regs Java, so this is the one. Now I'm going to stop this, because this file is the one I want to deploy on Kubernetes, and I've shown you that you can run it on your machine with the CLI, which basically uses Java to run the Kamelets. You can even use a Kamelet from your own code, with the kamelet component, if someone has designed a Kamelet that does a lot of things and you just want to reuse it as is. So, hopefully I'm still connected; yes, I am on the right namespace. Here's what I'm going to do with this file: I took my file and put the credentials in environment variables, because I don't want to show you what's in my target directory, but, by
the way, this code is available, and I'm sorry the readme is inside the JBang file, but it's just to show you how to pass your credentials. So here I run kubectl apply, and it shows up. I have another CLI, kamel, which is the Camel K CLI for Kubernetes, and if I look at the logs, you see that even though it was just a file, it started a Camel Quarkus application, because there is a Camel K operator installed in my Kubernetes environment, here OpenShift; I'm using the Red Hat Developer Sandbox, by the way, which you can use for free. So here it's logging. I'm going to go back to the application and add an order, this one is Blueberry Mug, and submit; hopefully it works, and here, as you can see, it's Blueberry Mug. So this was the same file, and now it's running on my Kubernetes. This is just to show you a bit of the latest things we have: we don't even need to create an application, because the installed operator transforms the file into a Camel K integration, which becomes a Camel Quarkus project. You just need to create your integration. You can do this also for what I showed you before in the code, the plain Java DSL, but you can also use those reusable bits that already exist as sources and sinks, just like that. So that was, hopefully, a cool demo, and it's actually the end of my talk. I'll give you a QR code here where you can find the demo and the code. Greg, I don't know if I'm over time?

No, we've got a couple of minutes. I saw a couple of questions in the chat, if you have a second. I'll go in no particular order. One question was: can we change the thread on which a Camel workflow runs, or does it run on the Quarkus main thread or event loop?

For the thread, I would say I don't think so, but I think the
best is to ask this question again in our Zulip chat; if you go to the Apache Camel site, you can find the link to it. I've never tried it, so I'm not totally sure, but if you ask on the camel-quarkus channel of our Zulip chat, that would be great.

All right. A couple more: can a Kamelet be used without Docker or Kubernetes, as a standalone app?

As a standalone app, not exactly, but here's what you can do. For example, I was going from Amazon S3 and to FTP; those are components, and there is one component named kamelet. You create your own Java Camel code, use the kamelet component, and link it to the Kamelet file. I think this is how it works today, because the CLI I showed you, JBang, just runs it as a script, and the Kubernetes operator creates an application, but in plain Java you use it as a Camel component: you say from a kamelet endpoint, or to one, something like that. So you can reuse Kamelets in basic Java Camel and run it as a standalone app; Camel in Java knows how to run a Kamelet.

Okay, well, I hope that answers most everyone's questions. There were a couple of other questions, but I addressed them directly in the thread. The recording should be available sometime after today's event, but it might take a week or two for everything to get organized, as this is a fairly large event; you can usually find the recordings on our Red Hat Developer YouTube channel once they get posted. Thank you, everyone, for your time.

Thank you, and don't hesitate to come and speak with the Camel community if you have any questions. Bye bye.