Hello, everyone. I'd like to welcome you to our next session, Transparent Web Platform De-Coupling with Multiplying Architecture. My name is Amber, and it is my pleasure to introduce our speakers for today's session, Eder and Guilherme. A few logistics before we get started. If you have questions during the session, please submit them in the chat window and we will try to cover them at the end of the session, or we will make a point of following up with you after the event. A recording of this and all the sessions today will be available after the event on the Red Hat Developer YouTube channel. We also encourage you to join us during the break on the main stage for live dialogue with Red Hatters. And with that, let me turn things over to Eder.

Good morning, everyone. Are you sharing your screen already, Guilherme? Yeah, let me do that. Just a second. Can you see my screen? I think so. Yes, I do, and I hope everyone else can see it too. So good morning, everyone. My name is Eder. I have worked at Red Hat for a long time, and I'm a tooling architect for the KIE group. I'm here with my friend Guilherme Caponetto. Guilherme is the tech lead for all the infrastructure and foundation — we call it the foundation team in our group — so he is the one who, together with his team, built most of the stuff we are talking about here. On the next slide, we'll go over a little about what KIE is. KIE is an open source organization, an umbrella for multiple projects related to technology and business automation. So we have a bunch of open source projects, including Drools, jBPM, Kogito, and OptaPlanner, and nowadays Guilherme and I are working on a Kogito version for Serverless Workflow and service orchestration. On the next slide, you'll see what we are building. We are basically building all the tooling related to it, right?
With a nice YAML editor with a lot of augmentation that can fetch OpenAPI specs, allowing you to draw and orchestrate all your Knative functions and run them in a Knative/OpenShift environment. But before that — I've been at Red Hat for 10 years. I joined to build a project called RHPAM, known in the community as Business Central. It's a huge business application, a monolith composed of almost two million lines of Java and front-end code. A big team spent a lot of time on it; this application was a big monolith, and we are really proud of all the infrastructure and everything we built. Two years ago — on the next slide — two years ago, we started Kogito. With Kogito, at Red Hat we decided to revisit the whole business automation platform we had: instead of having a single monolith where you author and deploy everything, everything becomes a microservice. This initiative also gave us a chance to revisit how we build tooling. Because in the past we started with a few engineers — I think there were four of them — and at this stage, 10 years later, we are a team of almost 30 front-end engineers, and this team became really big. So we decided to take a step back and understand how to build a better platform. On the next slide — when we started this analysis, we basically started by deciding what is the most important thing in our platform: what are the core pieces that we want to reuse when choosing the new infrastructure? Because one thing that is really important for me is: don't throw away stuff. Building a graphical editor with drag-and-drop support that covers a huge specification is a multi-month, multi-year effort. So I wanted to reuse everything.
So we started asking some questions. The first question our team tackled — on the next slide — is how we adapt our 10 years of legacy to a new platform: how we make this big monolith that we built become modern. The next question: we grew from a team that started with a handful of engineers to a full-stack big group — how are we going to adapt to that? Next slide, please. And the third question is how I take my big front-end monolith — huge, with millions of lines of code — and break it into smaller pieces. So we have three questions: how do we avoid throwing away stuff, how do we make a big team productive, and how do we break that monolith into smaller pieces. On the next slide, you see that the answer for us was to adopt micro-front-end architecture, an architectural paradigm for building modern front-ends. And what is a micro-front-end? I will give a five-minute introduction just to give context for what we'll show. Micro-front-ends are an architectural style where you deliver independent front-end applications that are composed into a big application. You take the concept of microservices from the back-end and bring it to the browser: instead of having one huge monolith as your app, you break your app down into multiple smaller ones, and in the final application you tie everything together into a single unit. On the next slide are the benefits of micro-front-end technology. You get incremental upgrades, because each smaller component can be upgraded independently. You get simple, decoupled codebases, which makes the people working on our team really efficient because there is less context to deal with. And each micro-front-end can run standalone, so everyone can run just the piece they are working on instead of dealing with a huge 10-year-old application.
You get independent deployments, and everything together empowers your teams to be more autonomous. This is really important for us because our group spans multiple business units at Red Hat, and now IBM is also collaborating on this infrastructure and reusing it in some of their projects. On the next slide is a sneak peek of how we build this architecture. We take the monolith we have and, instead of building everything together, we break it down into a lot of smaller components. In blue you can see a container app: the container app wraps everything in the application and ties everything together. Every component editor is a micro-front-end; the menu — the navigator — is another micro-front-end; and at the top is another micro-front-end that, beyond the menu, is responsible for actions like save, download, and deployment. On the next slide we take a look at how this fits with microservices architecture. On the left you have the old style of application — which is fine, depending on the context — where a front-end and a back-end talk to a data store. We call this a monolith. In the middle is what we know as a microservices architecture, where a front-end talks through an API to multiple microservices, which go to the data stores. And on the right is the micro-front-end breakdown, where you take the monolith's front-end and break it down into multiple components that talk to the API gateway, which talks to multiple microservices, which talk to the data stores. On the next slide you can see how this ties together. You have one container — there are multiple names for this; it could be "container", it could be "app shell", but it's not a Docker container, it's a container for the web application. The container is responsible for fetching all the micro-front-ends and deciding where to display each one on the screen.
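A minimal sketch of that app-shell idea in TypeScript — all the names here (`MicroFrontend`, the slot registry, the `<editor/>` markup) are illustrative, not the actual KIE tooling code. The container keeps a registry of named slots and mounts each micro-front-end into one of them; to keep the sketch self-contained, a plain object stands in for a DOM node.

```typescript
// Illustrative app-shell sketch; not the real KIE Tools code.
// A fake "element" stands in for a DOM node so the sketch runs anywhere.
interface Host { innerHTML: string }

// Contract every micro-front-end implements: it only knows how to
// render itself into whatever host the container hands it.
interface MicroFrontend {
  mount(host: Host): void;
}

// The container (app shell) owns the layout: it maps slot names to
// hosts and decides which micro-front-end goes where.
class Container {
  private slots = new Map<string, Host>();
  constructor(slotNames: string[]) {
    for (const name of slotNames) this.slots.set(name, { innerHTML: "" });
  }
  place(slot: string, mfe: MicroFrontend): void {
    const host = this.slots.get(slot);
    if (!host) throw new Error(`unknown slot: ${slot}`);
    mfe.mount(host);
  }
  read(slot: string): string {
    return this.slots.get(slot)?.innerHTML ?? "";
  }
}

// Three independent micro-front-ends, mirroring the slide: an editor,
// a navigation menu, and a top bar with save/download/deploy actions.
const editorMfe: MicroFrontend = { mount: h => { h.innerHTML = "<editor/>"; } };
const menuMfe: MicroFrontend   = { mount: h => { h.innerHTML = "<nav/>"; } };
const topBarMfe: MicroFrontend = { mount: h => { h.innerHTML = "<actions/>"; } };

const shell = new Container(["main", "menu", "top"]);
shell.place("main", editorMfe);
shell.place("menu", menuMfe);
shell.place("top", topBarMfe);
```

The point of the sketch is the direction of the dependency: each micro-front-end only knows its own host, while the shell alone decides the page layout.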
After that, each micro-front-end runs in isolation, talking to its own back-end — BFF stands for "back-end for front-end", a pattern where each micro-front-end gets a back-end of its own that talks to the data store. As in a microservices architecture, it's important not to mix things: the microservices shouldn't talk directly to each other. And one important thing about micro-front-ends is that there are two types of integration — two ways the container integrates the micro-front-ends. The first is runtime integration, or client-side integration: when the container is loading in the browser, it fetches external resources and then loads and handles each micro-front-end. So the micro-front-ends live outside the main deployment of your application. There is a big pro to this style of integration — every micro-front-end can be deployed independently — but there are also a lot of cons: the tooling and the setup are much more complicated, because instead of one deployment for your front-end you have multiple deployments, and to run and scale them you need more complicated deployment and tooling; integration testing also becomes much harder, because you need to test across multiple deployments. On the next slide is the second type of integration: build-time integration. With build-time integration, every micro-front-end is built and developed in isolation, but before going to production there is a build step that takes the source of all the micro-front-ends and builds them together into a single unit. It's easy to set up and easy to understand, but the con is that the container needs to be rebuilt and redeployed every time a change is made.
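A rough TypeScript sketch of that difference — illustrative only; real runtime integration typically uses a dynamic `import()` of a remotely deployed bundle (for example via module federation). Build-time integration resolves micro-front-ends from a statically bundled registry, while runtime integration resolves them asynchronously from a "remote" discovered when the page loads; the remote fetch is simulated here so the sketch stays self-contained.

```typescript
// Illustrative sketch of the two integration styles; not real KIE code.
type MountFn = (host: { innerHTML: string }) => void;

// --- Build-time integration -------------------------------------------
// Micro-front-ends are imported statically, so the bundler packs them
// into the container's single deployable artifact. A change to any of
// them means rebuilding and redeploying the container.
const builtInMfes: Record<string, MountFn> = {
  "bpmn-editor": host => { host.innerHTML = "<bpmn/>"; },
};

// --- Runtime (client-side) integration --------------------------------
// The container only knows a name/URL per micro-front-end; the bundle
// is fetched when the page loads. Simulated with an async lookup
// instead of a real dynamic import of a deployed script.
const remoteBundles: Record<string, MountFn> = {
  "dmn-editor": host => { host.innerHTML = "<dmn/>"; },
};
async function loadRemoteMfe(name: string): Promise<MountFn> {
  // Real code would be roughly: await import(urlFor(name))
  const mfe = remoteBundles[name];
  if (!mfe) throw new Error(`remote not found: ${name}`);
  return mfe;
}

async function renderPage(): Promise<{ left: string; right: string }> {
  const left = { innerHTML: "" };
  const right = { innerHTML: "" };
  builtInMfes["bpmn-editor"](left);           // resolved at build time
  (await loadRemoteMfe("dmn-editor"))(right); // resolved at load time
  return { left: left.innerHTML, right: right.innerHTML };
}
```

Either way the micro-front-end code is identical; only where the mount function is resolved — in the bundle or over the network — changes, which is exactly the deployment-independence trade-off described above.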
So, on the next slide: we use build-time integration, because for us it's easier to do integration testing, easier to handle the deployment, and easier to be sure the whole application scales consistently. In our case we selected build-time integration. And what is the biggest advantage? On the next slide — whenever someone asks me, in a five-minute introduction to micro-front-ends, what the biggest benefit is: autonomous teams. In my group there are multiple teams — between six and seven, and now even one team at IBM. They work in isolation yet work together; each is a smaller team focused just on its own problem, with no noise from a big monolith, and then at build time we pack everything together, so the teams can be more autonomous on the front-end with this architecture. There is still the con that some bugs only appear at the last deploy, because the teams work independently, but for us this has been hugely beneficial. So the next slide is about making the decision — sorry — about whether you do runtime integration or build-time integration: it comes down to how far you want to go with independence. If you want to go smaller, go with build time; if you want to go hardcore, go with client-side. And now Guilherme will talk about going deeper with the multiplying architecture and how we apply micro-front-ends in our technology.

Yeah — to follow Eder's presentation, I will talk more about our use cases now. Eder gave a nice introduction to everything, but now I'm going deeper into the architectural decisions we needed to make to adapt to these new challenges and pursue this goal of moving from the monolith to micro-front-ends.
So, as Eder mentioned, our editors were living in this huge monolith application, and we wanted to extract only the editors from it and distribute them across multiple mediums: a web application, a VS Code extension, or even github.dev — which is pretty much a VS Code environment — and why not the GitHub web page itself? GitHub offers a way to edit your files, but we wanted to put our editors there to provide a richer experience for our users. And it's not limited to that, because we could have our editors in desktop applications, or even IntelliJ, and things like that. Our goal was to put our editors inside all these mediums — or channels, as we call them — while keeping the editor code the same, reusable, and distributed across different channels. With all that in mind, we created the multiplying architecture. The multiplying architecture is basically a set of patterns, plumbing code, and APIs that we created to achieve this goal: to extract our editors from that monolith and distribute them across multiple channels.

Here are the core abstractions of the multiplying architecture. We basically have three core components. The first is the channel. The channel is, as I said, everything outside the editor: it could be VS Code, a web application, a desktop application, and so on. The channel wants to render the editor inside it, and we want to distribute the editors across these different channels. The second component is something we call the envelope. The envelope is nothing more than a communication layer between the channel and the editor, created to make them work together in a decoupled manner, so people working on the editor can focus on the editor and people working on the channel can focus on their channel, independently. The third component is the view itself — it's important to highlight here that we are not limited to editors. Of course we created a set of code for our editors, because that's what we work with, but this is not limited to editors: it could be any view, any application, that can be wrapped inside this envelope and distributed across different channels. An additional component, as I just said, is the editor itself, which is nothing more than a specialized type of view.

Now that we know the core components of the architecture, here is an image showing what each part is. Here I'm showing what we call the online channel, and you can see that around it we have many tools to interact with the editor — but everything is done using a common contract that the moving parts need to implement. Inside the envelope instance we have the editor itself. This case is even more complex because, if you take a look here, we have two editors — one for the text and one for the diagram — so we needed to wrap two envelopes inside an additional envelope, so that applications that need this combined editor, as we call it, can reuse the entire envelope, or use each editor independently. Here is a diagram showing everything I just said, to visualize our outer envelope involving two more envelopes inside. Here is another example, where we distribute our editors inside VS Code. Here we don't need the text editor, because VS Code already has its built-in Monaco editor and we can leverage it; for our case we just needed to put our diagram envelope there, and our extension was ready to be used. You can see here that the Serverless Workflow editor is the same as in the online channel; on github.dev it's the same, and on vscode.dev it's the same experience — with just small adjustments we could publish our extension there too. Here's another example, the Chrome extension I was talking about earlier: in this case we needed the text editor and the diagram editor, so in this case we reused the combined
editor here. So this pretty much sums up the architecture — it's a high-level view, of course, and it's not quite that simple — but in general, what you need to do to distribute your editor across different channels is to implement an Envelope API and a Channel API. The Envelope API is the set of functions the channel can call on the editor. For example, each channel has a different way of triggering the undo and redo operations in an editor, so once that happens in a channel, the channel just calls the Envelope API, and the editor reacts in its own way. On the other hand, the Channel API is the set of functions the envelope can call on the channel. The editor doesn't care where it lives — it doesn't care whether it's an online application or not — but through the envelope it can call these methods, and the Channel API in each channel reacts in its own way. For example, if I have a button inside my editor that opens a file at a particular path, I can call this openFile function on my Channel API, and each channel reacts in its own way: VS Code could open a new tab with this file, while the online channel could open a new browser tab, and things like that. So this is pretty much a general view of our architecture and how we grouped things.

Now I would like to show you more examples of editors that we have built. The first one is the DashBuilder editor. DashBuilder is a nice editor for creating rich visualizations and dashboards using YAML, and you can fetch data from external sources and things like that. The point is that the DashBuilder editor is something we distribute on VS Code and in online channels, and you can start using it at this URL. Another example is the BPMN editor — this is the business process editor we offer — and it's the same experience: this is VS Code, and this is the KIE Sandbox, the online channel, for the BPMN editor. You can see here that the editor is the same but living in different environments. Same thing for the DMN editor — DMN is for decisions — here is VS Code and here is the KIE Sandbox, the online channel. And again the PMML editor: here is VS Code and here is the online channel. Another great example of editors — and a great example of cross-team collaboration — is that we collaborated with the Kaoto team to create their VS Code extension. They already had the online version of their editors, and with our collaboration we were able to publish a VS Code extension for their tooling. And the last example — this is an example of using the combined editor, but in a different way, because in this editor, as you can see, the text editor is collapsed (though I can show it) and the editor is in read-only mode. We can also color the nodes, because in this case it's not for authoring — it's at runtime. When the serverless workflow runs, we can interact with the envelope and color the nodes the workflow passed through. This is an online channel, but it lives inside a Quarkus extension, which is pretty nice. If you want to know more about our code, we have a bunch of packages that make this all real, and this is our repository — a monorepo where we put everything — so if you are interested in learning more, you can reach us there. There are also the URLs for you to try the Serverless Workflow and DashBuilder editors, and the DMN, BPMN, and PMML editors.

So, to finish the presentation, I would like to show you some authoring highlights that we implemented around the editors. It's important to say that the editors live in the channels but don't know what's happening around them in the channel — and with that, we could create many cool features. The first one is multi-file support in the browser: we store all the files in the user's browser, you can move from one file to another, and you can see the
editors loading in the channel — in this case, the online channel. The next one is autocomplete: because we use the Monaco editor, we can leverage autocompletion to complete properties, structures, or entire code snippets. This is pretty nice because it's very useful for users who are trying this for the first time or need some help creating a code structure. And although I'm showing this in the online channel, it works on VS Code too, and in the Chrome extension — it works across all the different channels. Another example is validation. Since we deal with text files, we can validate them, and if there is a problem, each channel can react in its own way: for example, here in the online channel we show the validation problems in this panel, while in VS Code we use the built-in Problems tab that VS Code has. Another nice feature we built around the editors is the GitHub integration. Here I open a sample — the sample has just one file — and I create a new repository on GitHub. Another nice thing we implemented is the Quarkus accelerator: if the user chooses to use the Quarkus accelerator, we create a Quarkus project already set up for Serverless Workflow, place the serverless workflow file the user is modifying into the main resources folder, combine everything, and push it to the GitHub repository. Let me just speed this video up a little bit. So here we created a repository — you can see the Quarkus files, Dockerfiles, and things like that — and I'm going to show that people can work in one channel and, since we have this source of truth, which is GitHub, we can commit on the GitHub page and pull the changes back into the online tool, and the same works the other way around: now I'm editing in our online channel and pushing the changes back to GitHub. Another integration we did was with VS Code. Once we have a repository, we can import this repository into our online tools. Let me just speed up a little bit — sorry, I clicked outside — okay, so I'm importing my repository here, and once you import your repository, you'll be able to access it using vscode.dev or VS Code desktop. You can see all the files are here, and we offer the same editor experience. Now I'm going to do the same thing I did in the last video: update something, push it to the repository, and pull the change back into the online channel — I'm pulling the changes... there we go. This is interesting because each user can work in their preferred tool: users can work on vscode.dev, or desktop VS Code, or the online channel, or the Chrome extension, or any other channel where the editor is provided. Another thing we did is the integration with a samples repository, to help users explore our editors. These samples live in a different repository, which is cool because people can submit their own examples, and those examples become available for everyone to try. Here you can see me opening several DashBuilder examples and Serverless Workflow examples. Another cool thing we did is the integration with OpenShift. We built a set of code to interact with OpenShift from the front end, so once you connect to your OpenShift instance, you can quickly deploy your serverless workflow. In this video I already had one deployment ready and just showed how easy it is to deploy — just the click of a button. This deployment has the application inside it; we just put the workflow in, and users can use the deployment and share it with others. The cool thing in this example is that I'm using the Developer Sandbox for OpenShift, which is free — so our tool is free and the Developer Sandbox is free, and the user doesn't need to spend anything to try these cool features. Another thing we implemented — this is the DMN editor, and this is a simple decision that
shows, based on the inputs, whether the driver's license should be suspended. In this case we have local back-end services running on the user's machine, which gives the user quick feedback while authoring. You can see in the user interface that the diagram is being validated — the editor doesn't know about it, but it is connected to local, optional back-end services. Another feature we are experimenting with is DevMode. We created a DevMode image for users; this DevMode image is basically a Quarkus application already set up with all the dependencies and extensions, running in dev mode. Once the user connects to their OpenShift instance and chooses to use DevMode, a new deployment running in dev mode is created for them, and this lets them quickly try their serverless workflow. Let me speed up this video. Here I'm showing that I'm opening a serverless workflow and uploading it to the DevMode deployment — which is pretty fast, because it's Quarkus — and I can run my workflow and see the result: all the nodes are painted. I can also quickly test my changes: I did an edit here on the message, uploaded it to DevMode again, ran it again, and saw the edit there. And this is not limited to one file — I can move around files; for example, here I went to another sample, uploaded it to DevMode, accessed it, and triggered a CloudEvent to run my workflow. And last — this is my last example — is the envelope communication. This is about the editors themselves: here I show that through the envelope we can communicate between envelopes. When I click on a node, the text editor reacts and moves the cursor to that node; and the other way around works too — if I move the cursor, the diagram editor can react through the envelope and highlight the node in that particular place. That's all I wanted to show, and I'm going to pass you back, Eder. Thank you so much.
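A tiny TypeScript sketch of that envelope idea — all names here are illustrative, not the actual APIs from the KIE Tools monorepo. The channel implements a `ChannelApi`, each editor implements an `EnvelopeApi`, and because the channel sits between the envelopes, the same contract also gives you the envelope-to-envelope behavior from the demo: a node click in the diagram moving the cursor in the text editor. In a real app the boundary would be an iframe/postMessage hop rather than direct calls.

```typescript
// Illustrative sketch of a channel/envelope contract; not the actual
// KIE Tools APIs.

// Functions the channel can call on an editor living in an envelope.
interface EnvelopeApi {
  undo(): void;
  highlightNode(nodeId: string): void;
}

// Functions an editor can call on its channel, through the envelope.
interface ChannelApi {
  openFile(path: string): void;
  nodeSelected(nodeId: string): void;
}

// Text editor: tracks a cursor position keyed by node id.
class TextEditor implements EnvelopeApi {
  cursorAt = "";
  undoCount = 0;
  undo() { this.undoCount++; }
  highlightNode(nodeId: string) { this.cursorAt = nodeId; }
}

// Diagram editor: tracks which node is visually highlighted.
class DiagramEditor implements EnvelopeApi {
  highlighted = "";
  undo() {}
  highlightNode(nodeId: string) { this.highlighted = nodeId; }
}

// The channel owns both envelopes; when one editor reports a
// selection, it forwards it to the other side -- the node-click /
// cursor-sync behavior from the demo.
class OnlineChannel implements ChannelApi {
  openedFiles: string[] = [];
  constructor(public text: TextEditor, public diagram: DiagramEditor) {}
  openFile(path: string) { this.openedFiles.push(path); }
  nodeSelected(nodeId: string) {
    this.text.highlightNode(nodeId);    // move cursor in the text editor
    this.diagram.highlightNode(nodeId); // highlight node in the diagram
  }
}

const channel = new OnlineChannel(new TextEditor(), new DiagramEditor());
channel.nodeSelected("state-42"); // e.g. the user clicked a diagram node
channel.text.undo();              // channel-triggered undo (e.g. Ctrl+Z)
```

Note how neither editor knows which channel it lives in: swapping `OnlineChannel` for a VS Code channel only changes how `openFile` or `nodeSelected` are realized, not the editors.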
and the most important thing, from my point of view — I've worked in front-end development for more than 15 years — is that good front-end development is super hard. The decision we made two or three years ago to go full speed on micro-front-ends, breaking our application down into smaller pieces so things can move independently and be developed really fast, with small scope and a clear interface, proved to be a great architecture worth investing in. All the benefits that back-end people talk about when building micro-front — sorry, microservices — and reusing those microservices in different ways proved true for us as the benefits of adopting a micro-front-end architecture. Besides that, the decision to adopt micro-front-ends allowed our team, as Guilherme showed, to build a big ecosystem around it. Because the front-end is multiplied — in multiplying-architecture terms — each micro-front-end lives independently. As you saw with the editor, for instance, Guilherme's team was able to take the same editor micro-front-end and make it work inside VS Code without the editor team changing one line of code. That is a huge benefit, and if you are invested in front-end technology, I advise you to go take a look at these techniques. And my final words: if you are building a monolithic web application, remember that in the last 15 years I have seen a lot of shiny web frameworks become the best thing and then go away. So think a little about what the impact on your web application would be if your favorite framework were no longer the shiniest one, and then figure out what the impact would be if your team decided it needed to move to another framework. Micro-front-ends are one technique, one architectural background, that can help you avoid the big rewrite. So this is what we have for today, and I hope this session is really helpful for you, because with micro-front-ends you can choose between total team independence with runtime integration or a tighter combination with build-time tooling. Thank you so much
for your time, and we are here online to reply to any questions.

Thank you both for the presentation. Just a reminder, everyone: please put any questions in the chat, and we will give you a couple of moments to do so. Thank you Jeff, Louise and Wolfen for coming. Alright, with that, thank you all for joining us today, and we hope you all enjoyed this session. As a reminder, this session and the others will be made available soon on our Red Hat Developer YouTube channel. Thanks, all.