Cool, so hi everyone, this is Christina. For those of you who don't know me, I don't have an introductory slide about myself, because you can always find me on Google. I was a technical evangelist, and I'm now moving on to a role as a portfolio architect, where I talk about everything around Red Hat technology. This is one of the projects I did in April, so I'm very happy to share it with you, and I would love to hear your feedback on this topic. But this is strictly what I think: this is how I put together Red Hat technology to create an event mesh in a multi-cloud world. Everything will be around multi-cloud topics: how to build an event mesh using Camel K, and then also trying to make it serverless.

I want to start by introducing the whole idea of why I came up with this topic. I was doing some research with our customers; of course, I talk to customers all the time, and one of my colleagues went off and did a quick survey of most of our customers. This is kind of where the industry is today: about 90% of enterprises and companies are now on this cloud-native journey, where everybody is trying to put their applications, their services, their workloads on top of the cloud. So this trend of going to the cloud is inevitable.
Everybody is going there, of course. But when you think about going to the cloud, there are so many different meanings of it. It could be a quick lift and shift, where people take their own large applications and just put them onto a cloud environment; or they're doing migrations; or they're doing some kind of infrastructure purchase on the cloud where they still develop their own workloads on top of it. And then there's another group of people who say they're in the cloud, but they're really using SaaS applications: they're purchasing pre-built services from vendors and then integrating them with what they're already working on. That's another part of what people are doing right now.

And if you take a closer look at where they're placing all these technologies, I think about 60% of them are going towards a public cloud, and among those, almost half are doing, or thinking about going, hybrid or multi-cloud. So I was curious, and I tried to dig in and figure out why this trend exists. Of course, there are geolocation reasons.
There's an observation I see where people tend to deploy their applications or services close to where their business is. If they have branches in certain areas, they like to have their data centers, or the cloud regions they deploy to, close to that place. Sometimes they simply want to expand into another geolocation, so they set up another service in that location as well. So you end up with multiple clouds, or multiple clusters of data centers, that you have to manage together.

Another reason people end up with multiple clouds or separate clusters is security and compliance. Surprisingly, not all countries use the same laws, so they have different regulations, and you don't want to deploy every single country's regulations in another country or in your headquarters; it becomes very complex and very cumbersome to maintain. Sometimes people want to make sure the code stays where it is, and that's why you sometimes see it deployed in a separate cluster or a separate cloud.

Another one I see is data gravity. I think this is the heaviest, most common reason people tend to have multiple clusters or multiple clouds: data tends to stay close to where it is used. If your business does most of its business in, say, Indonesia, then most of your data will be in Indonesia, of course, and it doesn't make sense to have a data center serving them from North America. If you're doing things in India, there's no point in putting your clusters in South America; it just doesn't make sense. Because of that, and also because you don't want to spend a lot of money on ingress and egress traffic, you don't want to move those amounts of data to different places, so the data tends to stay grouped in its own cluster of data centers.

Another reason people are moving to different clouds is resilience: you want to make sure your cloud strategy is resilient, and having the ability to move from one cloud to the other makes it safer. I remember a couple of years ago an entire Amazon region went down; with multi-cloud you still have a contingency plan where you can deploy to Azure or Google, making sure there's zero downtime.

Sometimes the reason is simply that one vendor has a better offering in a certain type of technology. Say Google is really good with AI and ML, and sometimes their machine learning technology is a lot better than Amazon's. You want to use the one in Google, but the data center your company uses is AWS. How are you going to bring those two together? You still have to use what Google offers, but you still want to get all the data together, so you end up with two separate clusters.

And then you want flexibility: bargaining power, and an exit strategy. If a vendor has problems, or can no longer provide you with services, you still have the ability to move to another place quickly.

That's why people are thinking about moving to a multi-cloud world, where you have all these services and stay very agile. But with that flexibility and agility comes complexity. I think there are three main reasons why people avoid, or try not to be, multi-cloud. The first one is on the management side: it's really hard to find people who have mastered the proprietary skills of each cloud. It's really hard to find one person who understands Google, Amazon, and Azure at the same time, so finding someone skillful enough to manage three clouds is difficult.

Then you have security, because your data no longer hides behind the big firewall. It's not protected there anymore; it's on the cloud, so you have to make sure access to this data is secured, and that only the essential data is shared with different vendors, or with the other clusters and clouds where you're deployed. That's another concern that stops people from moving to multiple clouds.

The other one is connectivity and integration, and that is the one we will be targeting today. If you're thinking about multiple clouds, the biggest problem will be getting the data synchronized. When you have data constantly updating in North America, how do you make sure all that data is synced across the pond to your European counterpart? How do you keep everything synchronized without spending a lot of money on traffic?
APIs are surprisingly slow, so you have to figure out a way to transfer data quickly and to break down the boundaries between the clouds, so that application users don't see those boundaries: they are free to use whatever they want and quickly connect to the services they need. That will be the challenge, and that is what we'll be looking at today.

If you look closely into integration and connectivity, you'll see some problems. The first one is service silos. As I said, services are mostly deployed close to where their workloads are, so you end up with a lot of services deployed only to a certain cluster or a certain cloud, serving that particular set of customers. But when it comes to developing another service in another region, it's sometimes really hard for that team to find out what is already available elsewhere. So you get service silos that don't talk to each other: it's really hard to reuse them, or you have to duplicate the code, deploy it in different places, and maintain the copies separately. That becomes a really bad issue, and the maintenance costs a lot of money.

The other one is data integrity. As I said before, how do I make sure all the critical data is in sync? I'm not talking about the small, detailed data, because you can take care of that later; what about the critical pieces? Are they synced in real time, so you get the information right away? And what about the pre-packaged services you purchase, such as Salesforce or ServiceNow, or whatever packaged SaaS software you buy? How do you get data out of them? How do you feed data into them?
That's another problem. Then you also have the point-to-point connectivity problem. This one very much appears when you have too many API calls, because API calls are point-to-point, and when you have too many of them connecting together, they become another kind of spaghetti. How do you know that closing off this API endpoint will not cause a series of problems in other departments? It becomes a very messy situation.

In a perfect world, everything would be perfect: everything synchronized in a consolidated vision where you can see it all in a single pane of glass, where you can uniformly deploy everything to multiple clouds, where applications are easy and flexible to port to another cloud, where they are all visible to each other so you can reuse them and calling them is not a problem, and where data automatically syncs to the other side of the world. And because they all talk the same protocol, the same language, the same format, a developer just needs to connect to one place and they'll be able to get it. That's the perfect world, but that's not what it's going to be like in the real world. We all know this; we live in a world where we know what's going on.
You'd have one department that decided to purchase Azure, because their manager loves Azure, and they deployed everything on this really nice platform. But then you have another department saying, "Hey, I just got this really cool sales-ordering functionality and process set up in Salesforce, so let's get all this connected together." And another department says, "We want the data from what you've just created, we want it analyzed in Google, and then shipped to Amazon for storage." That's something that could happen, and you end up with very complex connectivity. Then sometimes you also have legacy systems you still need to talk to, and it becomes a very complex, spaghetti situation, and that's not really good. And then you have APIs: API costs, a contract here and a contract there that I cannot break, and different versions, so I need to update API versions all the time.

My solution for that is to create an event mesh. Data doesn't connect from one point to the other; instead, connections form a grid-like mesh where everything is connected, and this mesh can grow, shrink, and turn parts of itself off when they're not in use. That was the inspiration for making it serverless. All this information is transmitted and comes back in the form of events, so everything gets notified as events rather than through very slow synchronous calls. Everything gets pushed into a single unified platform, where everything coming in and out is monitored and managed. You get to see what's coming up and what's going on, and therefore it's easier to get the data flowing between the parties.
Building that mesh is what we can do with Camel K and streaming services. In this mesh there are many components talking in different protocols, so we will help these components and entities quickly translate their protocols into something the others understand, and connect them to the external components.

The other thing is that everything is done in real-time streams. You know how Kafka can handle a large load of events or streams of data; that is how we're going to handle it. All this information just keeps coming in, with Kafka deployed as the platform that keeps accepting these events so they can be processed later.

We also want to make sure we save on ingress and egress traffic, so we want only the important events to come in and out: they get filtered, they get aggregated, they get split, depending on your needs. That's something that has to be done for you in this mesh platform as well.

Sometimes the format of the data can be a little bit different. Say Salesforce only accepts the Salesforce format; then I need to translate whatever was in my database, or in my S3 bucket on top of AWS, and transform it into something that my Salesforce service understands. And this entire platform does the monitoring for me, so I can see what's going on with my events and in my mesh.

Part of this is where I'm saying we can run some of these applications in a serverless way: if there are no events coming in, we can always turn them off to save some money. That part of the grid would automatically shut down when nothing is using it, and come back up again when it's in use.
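To make the filtering and routing ideas above concrete, here is a minimal sketch of what such a mesh route could look like in Camel K's YAML DSL. This is illustrative only: the topic names, broker address, and the `priority` field are all hypothetical.

```yaml
# mesh-filter.yaml -- illustrative sketch; deploy with `kamel run mesh-filter.yaml`
- from:
    # consume every raw event arriving on the mesh
    uri: "kafka:raw-events?brokers=my-cluster-kafka-bootstrap:9092"
    steps:
      # keep only the important events, to save on egress traffic
      - filter:
          jsonpath: "$[?(@.priority == 'high')]"
          steps:
            # republish the filtered event for downstream consumers
            - to: "kafka:important-events?brokers=my-cluster-kafka-bootstrap:9092"
```

Aggregation and format transformation would slot in the same way, as extra steps in the route.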
Discovery is, I think, what's going to break the silos. In my platform I would build a series of discoverable interfaces where people can see what's available. It's similar to an API interface where you can see the contracts, but instead I'm just going to show you what kinds of events I'm going to handle.

So this is what I meant by building the event mesh with streams and Camel. If you take a closer look into the actual layout of the mesh, you'll see these white buckets are the topics of the streams. In each topic I define an event that I want to receive: say this one is a sales-order event, this one is a product-order event, and this one is an employment event. These are all different events, so what I'm seeing is just a bunch of events instead of contracts.

And then I have Camel becoming the connector. Anybody who wants to talk to my system, or send events into it, has to go through Camel, and Camel becomes the interpreter, the connector, or the component that fetches data from external services into the events. It can do the protocol translation, it can do the data transformation, it can do the orchestration: it can go to several different places, gather the results, and place them onto this event bus. So what I see in my entire system is just events, and they can talk to each other.

And these Camel pieces here can be either microservices or functions.
If they are functions, they become serverless: if nothing is coming in, the application or function will automatically shut down, and once information comes in again, it comes back up.

What I think is a good idea here is that you can always plug a connector in, and remove it when you don't need it. Say you no longer want to connect to Salesforce: just pull that plug, and you're no longer accepting any events from Salesforce. You have a clear picture of where data is coming in and where you're sending it, so it's a better-organized way of handling your events. You don't have to wire things in here and there, and the actual services listening and doing work on your events don't have to do the transformation and interpretation of whatever information comes in. Everything is taken care of; all the events are uniform and unified.

And when you think about domain-driven design, you have domain objects. You can map these domain objects to your events, see how events change your domain objects, and how your domain objects interact with these events. The events become meaningful instead of just an event or just a request call. So that's why I'm saying this event mesh is a better way of communicating externally.

I don't know how much time I have; I only have five minutes. But this is my demo. I'm probably just going to show the demo today; I have a video on YouTube you can always go and watch. The idea was to show that Camel can connect to all these different cloud services, and I also have a response going back to, what is it, the Telegram. But I want to show you this, so let me skip ahead; I only have about five minutes left.
Sorry. What I did in my demo was use a couple of things: I was using Kafka streams, I was using Camel and Camel Quarkus to build microservices, and I was using Camel and Camel K to build functions, so those become serverless. I also have an event source, built with Camel, that goes out and grabs information from external services.

So here is the code; let me share my entire screen. Can you see my screen? Yeah, all right. This is the code I have, and everything is in my GitHub repository. Here is the structure of the code; I want to go through it with you while I'm here. If you go into the code, you'll see there's an AWS part, an Azure part, and a ServiceNow part. The most important one is probably the ServiceNow one, and this is a Camel Quarkus application, so this one is a microservice; it's not even a serverless one. What it does is go off to ServiceNow, grab all the ServiceNow tickets, and send them to my internal streaming platform. So I have events called "ServiceNow ticket," and those tickets then get sent to multiple places. For instance, in the Azure case, I'm also running it as a microservice, and again sending into the Azure event bus.

But for the listener, because I'm not always getting information into my system, I'm writing it with Camel K, and this Camel K integration is very simple: I only have a reader. Actually, this one is probably not the best example; let me use the Google one instead. It just reads from Google Pub/Sub.
It's similar to what Azure offers: it reads from Google Pub/Sub and places the messages into my Kafka streams topic, where I receive success or failure events telling me whether my information made it to Google, and that gets sent back to me. So this is a serverless application I wrote, and it automatically becomes serverless: it scales down to zero and scales back up. In my video I show how it's done, so you can see it in action.

I have about four minutes left, so I want to see if there are any questions.

"You don't need to worry about time, since there are no other talks scheduled after yours. So please take your time."

Oh, okay, in that case, do I have more... Okay, cool, so there are no questions. Let me go back, and why don't I just play that clip then. All right, can you see my screen? "Yep."

So this one shows how I created a ticket in ServiceNow that sends information into Google. This is me typing information into ServiceNow; once the information is entered, it gets picked up by my ServiceNow microservice, and then you can see it go to my Telegram, where Telegram confirms that the information was picked up by Google. From there I go to Google Cloud, which shows that the information has arrived in the Google topic; you can see there's a spike. Then let's go to Functions: this function on Google is the serverless function listening for any events coming from the topic I sent to. You can see it has been executed, and if we take a look at the log, you can see the information has actually gone into Google Cloud.

So how did I do it? First of all, my first application was written in Camel Quarkus, and that goes up to ServiceNow. Then the message gets picked up by the second one and sent to Google; another listener listens for events happening in Google and sends them back into my streams, and that in turn is picked up by another serverless application and sent to my Telegram. That's how it's done with Google, and it's the same for the rest of my applications. So that was how it's done; let me stop sharing. You can watch the rest in my YouTube video, and I'm going to put the link to my GitHub repository in the chat, so people in the chat can have access to all the code.

Okay, so if you have any more questions, just let me know, but let me close off by showing... yeah, let me share my video. Okay, I should have bookmarked it, right? Hold on, let me find it first.

"Oh, we cannot hear you. You're on mute."

Okay, yeah. Can you hear me now? "Yep." Okay, cool, sorry. So I just pasted the YouTube channel right there. If you have any more questions or anything, please let me know, but I want to quickly share... let me just... that was the wrong tab. These are the things I want you to take away with you.

What I did was use Camel, and with Camel I did a lot of EIPs, enterprise integration patterns. When I got information from ServiceNow, I did some splitting, because it all came in as one big package with everything that had happened in ServiceNow, so I had to split it into smaller to-do items in order to send them to different places. Some of them could be going to Google, some of them to AWS, so I needed to do splitting. And once I had done that, because the information for the different clouds has to be in a different format,
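The polling-and-splitting flow just described might look roughly like this as a Camel route in YAML. This is a sketch, not the actual demo code: the instance name, endpoint options, and topic name are all illustrative.

```yaml
# snow-to-kafka.yaml -- illustrative sketch, not the demo code
- from:
    # poll ServiceNow periodically for new incidents
    uri: "timer:poll?period=60000"
    steps:
      # fetch incidents from a hypothetical instance called dev1234
      - to: "servicenow:dev1234?resource=table&table=incident"
      # the incidents arrive as one big package; split them into
      # individual items so each can be routed to a different cloud
      - split:
          jsonpath: "$[*]"
          steps:
            - to: "kafka:snow-ticket?brokers=my-cluster-kafka-bootstrap:9092"
```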
I needed to do some kind of mapping, and I've done that mapping right here. I was using AtlasMap, a drag-and-drop tool that lets me map one format to another.

Another thing I want to point out was the connectivity to Kafka. For Camel to connect to Kafka, it was quite easy: I just say, hey, I want to connect to Kafka, here is the topic name that receives all the ServiceNow incidents, and here is the broker information. That was about all I had to do. And to connect to ServiceNow, there is also a ServiceNow component where I can tell it, this is where my ServiceNow instance is, please go ahead and connect to it. That's what I did.

Another thing I was using was Camel K. I'm pretty sure you have heard a lot about Camel K, so I didn't want to dive into what it is; there are a bunch of videos on my channel that tell you what Camel K is, so if you have no idea, please go and watch them. But basically, Camel K helps you quickly convert your applications: if you have Knative installed on your Kubernetes environment, it automatically turns everything into a serverless service. You don't even have to do anything; it just does that for you.
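As a rough sketch of how little code that takes, a tiny Camel K integration like the one below becomes a scale-to-zero service when deployed, assuming Knative is installed on the cluster; the channel and topic names here are made up.

```yaml
# reader.yaml -- illustrative; deploy with `kamel run reader.yaml`
- from:
    # with Knative installed, Camel K exposes this consumer as a
    # Knative service that scales down to zero when no events arrive
    uri: "knative:channel/received-events"
    steps:
      - log:
          message: "got event: ${body}"
      - to: "kafka:received-events?brokers=my-cluster-kafka-bootstrap:9092"
```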
So what I did was quickly write a Camel K route as a function, and it automatically turned that into a serverless function for me. Another thing worth mentioning is the event source connector, which they now call a Kamelet. Basically you have a marketplace of connectors where you don't have to do any coding: you just do declarative connectivity, filling in where you want the data to go and the credentials, and it automatically fetches the data based on your configuration. That's basically what I did in my demo.

What I want you to take away is the basic idea of how Camel can help you with connectivity, but I think the most important part is the part where I talked about the event mesh. You should start thinking about how you're going to architect your system. Think about it differently than just creating APIs and contracts. When you are doing your DDD, your domain-driven design, think about events; think of them as one of the bigger entities when you're designing. Rather than just RESTful calls, think about how you're going to use events to trigger, or to communicate between, your domains as well. So I think that's it for now. Thank you very much, and I hope you enjoy this session.

"Thank you. Oh, thank you so much, Christina. That was great. And if anybody has any questions, please post them in chat. Have you posted your slides and links in your Sched.com profile?"

Oh, I have not; I'll do it right now. "That would be awesome, and the recordings will be made available later."

Okay, cool. So that's the link to my slides, if anybody wants it right now, but I'll also put it in the profile. "Awesome. Thank you so much." Thank you. Thank you for hosting it. Bye. Thank you all for coming.