Well, I can start by saying FOSS4G Calgary. I guess you have seen the stickers there. You should be there. And if you liked this FOSS4G, you're going to love FOSS4G Calgary. And this is not what I'm going to talk about. And I put on this shirt because I saw this morning some QGIS thing. I wasn't here, but I saw on Twitter some QGIS thing that is very similar to what I'm going to say here. So I didn't want to forget that: whoever did that QGIS thing, like a Scratch for QGIS, I don't know if that person is here, but I'm interested in that, because maybe that can be a plug-in for this. And this is not what I'm going to talk about either. So this is what I'm going to talk about. The what? No. I still have another one, but no. We still have three minutes, right? I should be precise. I don't know how many t-shirts you have left. No, no, no. I would fight you to keep at least one more. I just wanted to not forget about talking about Calgary and QGIS.

Maybe you could start with a little introduction of yourself as a professional?

OK. Hi, I'm Maria. Until November, I was the president of OSGeo, so maybe that rings a bell for you, OSGeo. In June, I changed my career, and I'm working for Red Hat. So I started working with middleware, which is not geospatial, but when I started working there, I realized that it has a lot to do with geospatial in the end, because integration processes are about data workflows, moving data from one place to another. So it just makes a lot of sense to do it with geospatial data, because in the end, all data is geospatial. Remember? Still two minutes. How many what? Humps? If it only has one, that's not a camel, it's a dromedary. Or a llama. A llama, how is it called in English? Or a horse, if it doesn't have any. A donkey, that's right. Two minutes. Apologies, is this still watching paint dry? Or hearing it dry, maybe? I was thinking that in future versions of this talk, I'm going to bring a camel puppet and some bricks to show. But who are you? Well, I don't tell people.
But no, I'm Mark Frumontz. And I've been sort of running in open geospatial circles for 20 years now. FOSS4G NA — yeah, I've organized the FOSS4G in North America. Contributed to OSGeo as a local organizer. Had my own company, which did open geospatial services and products, and sold it off. Since then I work as a consultant, when I want, where I want. That suits me very fine.

Well, thanks for the introduction. I'm a security agency specialist myself. Sorry? I'm completely new to geospatial. Then you should go to Calgary, to FOSS4G. I'll see you in 20 years. OK, the floor is yours.

Hi. So hi again, I'm Maria, and I'm going to talk about integration processes, which is a bit like camel breeding. So what am I going to talk about? I'm going to talk about open source integration frameworks for iPaaS and hybrid platforms using EIPs, which translates to a lot of buzzwords. It's a framework for data workflows, which is, again, a lot of buzzwords. So it's just moving data between components. So what does this mean? This is when I take the puppet. So you have different components, which may be databases, APIs. Maybe you want to take some file from some folder on your WebDAV, or you have Twitter, Salesforce. And you want to connect these components and try to get data from A to B: the response that comes from B when you have A as an input parameter goes to C, and then you get the response from C. So how can we do this in a good way? Because usually, when you have different components and you need to interconnect them, you just write your own script or piece of code, and then you have to maintain that piece of code, which in the long term, or even the short term, is not a good idea. So how can you do this in a decoupled way, when what you are trying to do is couple different components? That's where the integration frameworks come into the picture.
Because you can use an integration framework to integrate the different components in a very decoupled way. So you don't have to worry about the code that is running and moving the data; you just worry about the inputs and outputs of each step. So does this mean that an integration framework is just putting data in a line and moving it from one step to the next? No, you can also have conditionals and say: OK, depending on the response from the previous step, I'm going to choose if I want to send an email and a warning to someone, or maybe I want to store something in the database, or maybe I want to do both, depending on what happened before. And maybe later I want to aggregate those different flows of data. This is what is called enterprise integration patterns. If you are into software engineering, you should have heard about software design patterns. Well, for messaging and integration we have a similar thing, which is the enterprise integration patterns. There is also a famous book with 65 integration patterns that are like this: conditionals, broadcasts, aggregations, parallelized things, serialized things. I don't remember all 65 of them. So what kind of software can we use that implements these enterprise integration patterns, or similar patterns, that we can use to move data from one place to another? That's the integration frameworks. One option is Apache Camel, which is my shirt. This is from the Apache Software Foundation — we have been at a stand, both at the Apache Software Foundation stand and on the bottom floor of the K building, for integration process patterns. And it's open source. It was the most active project in the Apache Software Foundation last year. And it's old, 15 years old or something. We also have Spring Integration. Spring Integration is very good if you are working with Spring. But it's true that it's not as complete as Camel, because it's newer.
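The conditional routing just described is one of those enterprise integration patterns: the content-based router. As a rough sketch of the idea in plain Java (this is not actual Camel code; the class and method names are invented for illustration), a router inspects each message and picks the branch whose condition matches, falling back to an "otherwise" step:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// A minimal content-based router: each (predicate, handler) pair is a
// "when" branch; the first branch whose predicate matches gets the message.
class ContentBasedRouter {
    private final List<Predicate<String>> conditions = new ArrayList<>();
    private final List<Consumer<String>> handlers = new ArrayList<>();
    private Consumer<String> otherwise = msg -> {};

    ContentBasedRouter when(Predicate<String> cond, Consumer<String> handler) {
        conditions.add(cond);
        handlers.add(handler);
        return this;
    }

    ContentBasedRouter otherwise(Consumer<String> handler) {
        this.otherwise = handler;
        return this;
    }

    void route(String message) {
        for (int i = 0; i < conditions.size(); i++) {
            if (conditions.get(i).test(message)) {
                handlers.get(i).accept(message);
                return;
            }
        }
        otherwise.accept(message);
    }
}

public class RouterDemo {
    public static void main(String[] args) {
        List<String> emails = new ArrayList<>();
        List<String> stored = new ArrayList<>();

        ContentBasedRouter router = new ContentBasedRouter()
            .when(msg -> msg.startsWith("WARN"), emails::add)  // send a warning email
            .otherwise(stored::add);                           // store it in the database

        router.route("WARN: disk almost full");
        router.route("all good");

        System.out.println(emails);  // [WARN: disk almost full]
        System.out.println(stored);  // [all good]
    }
}
```

The point of the pattern is exactly the decoupling from the talk: the branches only know their own input, not where the message came from or where it goes next.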
So it doesn't have as many components as the rest. Also, it integrates very well with Spring, but if you are not using Spring — maybe you are not even using Java — this is maybe not what you need. There's also Mule ESB, which is partly open source, partly under a restricted license you have to pay the company for. So yes, it's an option, but not the one I would choose. And we have Syndesis, which is for non-developers, managers, because it's like a user interface on top of Apache Camel. And you can do the same things you do with Apache Camel, but without writing any line of code, because it's just drag and drop, clickity-click. And I come from Red Hat, so I have to mention that, of course, all these things usually have companies behind them that give you support. And some of those companies, like Red Hat, have Fuse, which is Apache Camel, Syndesis, and some other components that are open source. It doesn't have any extra feature that is not open source; they are just deploying and giving support. So what I'm going to focus on is Apache Camel and Syndesis, not only because that's what I'm working on, but also because I think they are the most complete and, at the same time, the most open there is right now out there. And I really did dig a bit, seriously: if there were some other competitor that was good enough, I would show it here. So, as I said, Apache Camel is one of the most active projects in the Apache Software Foundation. It has 325 different protocols it understands, so it can be doing an HTTP call, Salesforce, Twitter, Google Sheets, Amazon DynamoDB. It runs over Java, but you can define the routes. I mean, what you do is just define a route with all the steps. Each step is like one, two, three lines, which is: this is a database, these are the credentials, this is the SQL; or this is Twitter, this is my token, and all the things Twitter asks you for to do some queries. So it's very simple.
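In Camel, those one-to-three-line steps are usually written as endpoint URIs: a scheme naming the component, a path, and the connection details as query options. As a hedged illustration of that shape (plain Java, a simplified toy, not Camel's real URI parser), splitting such a URI might look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Splits a Camel-style endpoint URI, e.g. "timer:tick?period=5000",
// into its scheme, path, and option map. Deliberately simplified:
// no escaping and no multi-valued options.
public class EndpointUri {
    final String scheme;
    final String path;
    final Map<String, String> options = new LinkedHashMap<>();

    EndpointUri(String uri) {
        int colon = uri.indexOf(':');
        int question = uri.indexOf('?');
        scheme = uri.substring(0, colon);
        path = question < 0 ? uri.substring(colon + 1)
                            : uri.substring(colon + 1, question);
        if (question >= 0) {
            for (String pair : uri.substring(question + 1).split("&")) {
                String[] kv = pair.split("=", 2);
                options.put(kv[0], kv.length > 1 ? kv[1] : "");
            }
        }
    }

    public static void main(String[] args) {
        EndpointUri e = new EndpointUri("timer:tick?period=5000&repeatCount=3");
        System.out.println(e.scheme);   // timer
        System.out.println(e.path);     // tick
        System.out.println(e.options);  // {period=5000, repeatCount=3}
    }
}
```

This is why each step stays so short: the component behind the scheme already knows how to talk the protocol, and the options carry only the custom details (credentials, SQL, tokens).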
You don't have to worry about anything except the custom parts that you have to tell it, because this is not intelligent. And you can define this in JavaScript, in Groovy, in XML and, of course, in Java. So it runs over Java, but you can communicate with it in different languages. And it works in plain Java, it works with Spring Boot. So if you were thinking that maybe you should use Spring Integration because it integrates very well with Spring — Apache Camel also integrates with Spring. And now we have Camel K, which is like a separate project, but it's going to be the main project for the next major version, and it runs over Kubernetes, so it talks Kubernetes. If you don't know what Kubernetes is, it's all this Docker container thing. So instead of having to run on plain Java, you can define the route, give it to Camel K — also written as Camel with a K — and it just creates the container and runs it, and it's very, very fast. If I can do a demo and it doesn't work, you will see it later. And of course it talks Kubernetes, and it also runs on Quarkus. This is not only a connector library, this is a framework, and when I say a framework, I really mean it. I mean it's something that allows you to really decouple from what you are trying to connect, and you don't have to worry about anything except giving the details about where you want to connect, and how you want to maybe convert the output of one step into the input of the next — because maybe you are not using the same data models, of course. Maybe Twitter is giving you some data model that is not exactly what you want to store in the database, so you can map that, you can do small processing. And all this in five megabytes, which is very light. Wait. So how does this work internally? How is it possible that we can decouple something that is coupling components? Well, we have two external systems, and we want to send messages or data from one system to the other, back and forth.
For that, the framework has a router that takes each message and decides where it is going to send it. To interact with the external components, the router needs a consumer and a producer. The consumer consumes messages from the first external system — it communicates with it and understands it — and sends them to the router; the router decides where they are going to go; and then there is another part, which is the producer, that understands how to communicate with the second external system, and it sends the messages back and forth. I call the data we are sending between one and the other an exchange, because that's the glossary we are using. Of course, the router can also decide if there is some data mapping that has to be done, because you define it, or some type conversion. You can define the type conversion explicitly — like, I want to convert from XML to JSON, because even if I'm working with plain text, that's what I need — or it can be an automatic conversion. For example, if I'm reading from a file, probably I'm using a buffered stream reader, blah, blah, blah, and then I want to send it over an HTTP connection, which is another type of stream writer. This is how a longer workflow with two steps would look, and as you can see, the producer and the consumer fit together very well. Why? Because what the producer produces is what the consumer understands, and that pair is called an endpoint. So it's like this. You may notice that the exchange A is not the exchange that gets to endpoint D — I don't know why I called it D — because every time it goes through an endpoint and connects with some external system, the endpoint takes the data, gets a response, and it's the response that travels forward. And we can have a more complex system, and as you can see, the thing that arrives at the endpoint at the top is not the same as at the bottom, because it goes through different endpoints, so the responses for each step are different.
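That flow — each endpoint talks to its external system and the system's response replaces the exchange body before it travels on — can be sketched in a few lines of plain Java (the names here are invented for illustration; this is not Camel's actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Each "endpoint" stands in for talking to an external system: it takes
// the current exchange body and returns that system's response, which is
// what travels to the next step (so exchange A is not what reaches D).
public class MiniRoute {
    private final List<UnaryOperator<String>> endpoints = new ArrayList<>();

    MiniRoute to(UnaryOperator<String> endpoint) {
        endpoints.add(endpoint);
        return this;
    }

    // The "router": pass the exchange through every endpoint in order.
    String send(String exchange) {
        String body = exchange;
        for (UnaryOperator<String> endpoint : endpoints) {
            body = endpoint.apply(body);  // the response replaces the body
        }
        return body;
    }

    public static void main(String[] args) {
        MiniRoute route = new MiniRoute()
            .to(body -> body.toUpperCase())          // pretend system B
            .to(body -> "response(" + body + ")");   // pretend system C

        System.out.println(route.send("hello"));  // response(HELLO)
    }
}
```

The route itself knows nothing about databases or HTTP; it only chains inputs and outputs, which is the decoupling the talk keeps coming back to.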
This is just what an exchange contains — let me mention that it's very similar to an HTTP message. So it has headers, it has a body, it may have attachments as files, and if there is some error, it has a place to put an exception. It has a unique ID, so you can see how it is flowing through the workflow. And I'm going to try some demo — let's see if it works, because every time I close the lid, it breaks a bit. So this is how a route is defined. This is a very hello-world thing: every five seconds, it's just going to log "hello FOSS4G", send to a mock endpoint, because we are not doing anything, and then log "how are you?" So if I copy this and paste it — why not? Yes, this is JavaScript, by the way, so you can see you don't need to use Java. And then I run this thing. Of course it's not there, because it's on OPT.com, so again — so again, very good, Maria, very good. It's building; the first build takes a bit of time because it's downloading things, like the kinds of endpoints it's going to use. Don't let me hang here. Log camel — no, log FOSS4G — it's not there yet. This is the wifi, but maybe I don't trust that, so I'm going to continue, because — how much time do I have? Not much? Yeah. So it also works with Kafka, but the internet is not working and I'm not even going to try. But you can get — this was going to get some messages from a Kafka broker stream. Let's just skip it. And this is like a conditional: depending on what we send — well, the message it receives — when the body ends with — maybe I should make it smaller — when the body ends with an exclamation mark, then log something; if not, then log something else.
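The exchange described a moment ago — headers, a body, optional attachments, a slot for an exception, and a unique ID for tracing — could be sketched as a plain data class. This is an illustration only, not Camel's real org.apache.camel.Exchange interface:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// A sketch of an integration "exchange": much like an HTTP message,
// it carries headers and a body, may carry attachments, and has a
// place for an exception plus a unique ID to trace it through a flow.
public class SimpleExchange {
    final String id = UUID.randomUUID().toString();
    final Map<String, String> headers = new HashMap<>();
    final Map<String, byte[]> attachments = new HashMap<>();
    Object body;
    Exception exception;

    boolean isFailed() {
        return exception != null;
    }

    public static void main(String[] args) {
        SimpleExchange ex = new SimpleExchange();
        ex.headers.put("Content-Type", "application/json");
        ex.body = "{\"hello\": \"FOSS4G\"}";
        System.out.println(ex.isFailed());          // false
        System.out.println(ex.id.length() == 36);   // true (UUID string)
    }
}
```

The unique ID is what makes the beautified per-run logs mentioned later possible: every step can report which exchange it handled.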
And we can even send the message back to the Kafka stream, so it's going to go again through the whole workflow. Because you can decide: the "from" is the first step, the "to" is all the next steps, and in the "to" you can use the same things you used before, so you can create loops — but be careful with that, of course. So this is how you create a workflow. Of course, you don't have to use Java; you can use XML, which will look something like this — this is the Spring DSL, the Camel DSL thing. And for non-developers, we have Syndesis. So yeah, let me just show you this, if I can drag and drop. Yeah, so this would be Syndesis. This would be a hello-world workflow, very simple, because it's just, as I did before, a timer with a log. I can add more steps here — I don't have internet, so I cannot show you this, sorry. I will just show you them. So here is where you select which kind of steps you want to do. This would be a longer workflow with the data mapping, which maps like this, like a drag-and-drop thing. No, I don't know why it's not shown completely. Well, it's drag and drop: you have the output parameters from one step and the input parameters from the next, so you just drag and drop which ones you want to connect. This is the main view of the integration you have done. This is the log: for each run, it generates a log that is beautified, so it's easy to understand for non-developers. And no demo time. So, as I told you, I talked about an open source integration framework, so you can do data workflows easily, for integration process as a service — which is kind of a new concept, which is all this Docker and Kubernetes containers thing — on hybrid platforms, because this is open software, but you can run it on different platforms, and you can connect even components that are not open source, like Twitter, Salesforce or — I don't remember which one — using enterprise integration platform... enterprise integration patterns.
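For a sense of what the Spring XML DSL mentioned above looks like, here is a small route written from memory — a sketch of the same timer-plus-conditional idea from the demo, so treat the exact attributes and schema URI as approximate rather than copy-paste ready:

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="timer:hello?period=5000"/>
    <setBody>
      <constant>hello FOSS4G!</constant>
    </setBody>
    <choice>
      <when>
        <simple>${body} ends with '!'</simple>
        <log message="got an exclamation: ${body}"/>
      </when>
      <otherwise>
        <log message="plain message: ${body}"/>
      </otherwise>
    </choice>
  </route>
</camelContext>
```

It is the same route model as the Java or JavaScript DSLs — from, steps, choice/when/otherwise — just expressed as markup, which is also roughly what Syndesis generates for you behind the drag-and-drop.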
And I'm trying to work on adding specific connectors for geospatial, because, of course, you can already use geospatial data with this, but there are no specific geospatial connectors that understand that you are working with geometries and can maybe reproject or whatever. Any help is welcome, and that's it. Okay, thank you. We're holding on for questions. 30 seconds.