Hi everyone. Hello there. Ramon Roman here, product manager at Red Hat and maintainer on the Konveyor project. I'm Juan Manuel Leflete Estrada, senior software engineer at Red Hat and also a contributor to the Konveyor project. And yeah, you might have guessed it, but we are going to be talking about the Konveyor project. So, what is Konveyor? Konveyor is an open source project that helps organizations onboard their traditional workloads onto Kubernetes to leverage cloud native technologies. Our approach is to provide as much insight as possible so that migration and modernization leads can make informed decisions, and, on the other hand, to give the developers performing the actual changes some guidance and some degree of automation. The way our project is structured, we have a central piece, the Konveyor Hub, which manages the user experience, and then a series of other modules that provide additional functionality. We'll go through them. First of all, we have portfolio management, which gives organizations a holistic view of their application portfolio so they can categorize applications, make decisions, and find a suitable migration strategy for each of those applications. Then we have the assessment module, which builds a high-level understanding of the application landscape and detects risks that might affect the containerization process. Then we have the analysis module, which is able to find source code anti-patterns that might prevent an application from running on the target platform and provides hints on how to solve those problems. We have a planning module that lets you break the modernization initiative into different migration waves, a reporting module that gives you further insight into issues and dependencies, and finally the code transformation module, which provides a way to automate simple and tedious changes across the portfolio.
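To give a feel for what the analysis module works with, here is a sketch of an analysis rule, loosely following the YAML rule format used by Konveyor's analyzer. The rule ID, message, and pattern are made up for illustration, and details of the real schema may differ:

```yaml
# Hypothetical rule sketch: flag direct file system access, which is an
# anti-pattern for containerized workloads that should use external storage.
- ruleID: local-storage-00001
  category: mandatory
  effort: 3
  message: Direct file system access detected; consider externalizing storage
    before moving this application to Kubernetes.
  when:
    java.referenced:
      location: IMPORT
      pattern: java.io.File
```

When a rule like this matches, the module reports the incident with its location and the remediation hint from the `message` field.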
I'd say the most exciting one is the analysis module, so Juanma here will tell you a little bit more about that. So in our latest Konveyor release, 0.3, which we did a couple of months ago, we included a new analysis engine. Is everyone here aware of the Language Server Protocol and language servers? So, in our new engine we basically leverage the power of the Language Server Protocol that Microsoft developed for VS Code to be able to handle different languages, and we use it to abstract our analysis engine away from the complexities and peculiarities of each language. As you might be aware, there are language server implementations for almost every language out there, so our idea is to keep adding new providers to the analysis engine, to be able to analyze applications in more languages, and to make queries to the language servers that give us insight into the code of the applications. In our latest release we have support for Java and Go, and there are more to come. As I said, 0.3, which we did a couple of months ago, was a major leap forward. Apart from the new analysis engine that I just mentioned, it included archetypes, a new concept for organizing your applications; a re-implementation of the assessment module that Ramon just mentioned; and enhanced reporting. For 0.4 and 0.5, which are our next releases in Q2 and Q3, we basically plan on adding new languages, as I said: we're expecting .NET and Python support in Q2, alongside some quality of life improvements, and TypeScript support in Q3. In Q1 next year, we plan on releasing Konveyor 1.0, which will come with three big new features: platform awareness, assets generation, and integration with generative AI, which we'll speak about in a second.
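To make the Language Server Protocol idea concrete, here is a minimal Python sketch of how a client frames a query to a language server. LSP messages are JSON-RPC bodies preceded by a `Content-Length` header; the file URI and position below are made-up examples, and a real engine would send this over the server's stdin and parse the response:

```python
import json

def encode_lsp_request(request_id, method, params):
    """Frame a JSON-RPC request with the LSP Content-Length header."""
    body = json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": method, "params": params})
    header = f"Content-Length: {len(body.encode('utf-8'))}\r\n\r\n"
    return (header + body).encode("utf-8")

# Ask a language server for every reference to a symbol -- the kind of
# query an analysis engine can use to find usages of a problematic API,
# without knowing anything language-specific itself.
msg = encode_lsp_request(1, "textDocument/references", {
    "textDocument": {"uri": "file:///src/OrderService.java"},
    "position": {"line": 42, "character": 10},
    "context": {"includeDeclaration": True},
})
print(msg.decode("utf-8"))
```

Because every language server speaks this same protocol, adding support for a new language is a matter of plugging in a new provider rather than writing a new analyzer.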
So the first of these three major features is platform awareness, which will be able to inspect not just the application itself but also its deployment and runtime environment, to extract more information about not only the application but where it runs. And this leads us to the next feature for Konveyor 1.0: assets generation. We retrieve the configuration from the platforms the application is running on, and we use it to generate the assets needed to get the application deployed on the target platform, for example Kubernetes. So if we were talking about deploying an application on Kubernetes, we will have something that generates all the deployment manifests for the application. And if, for example, the application is running on an application server, we will also be able to generate all the configuration files the application needs to be deployed on the target platform. So, pretty interesting. And, as everyone is playing with GenAI, we are playing with it as well. The next module is Kai, which is short for Konveyor AI; we suck at naming. The idea is to automate source code transformation as much as possible by leveraging generative AI. Our approach with Kai is to take the structured migration data that we have in Konveyor and use it to enhance commercial LLMs. The idea is to use prompt engineering and a RAG (retrieval-augmented generation) approach to provide additional context that makes these LLMs more accurate in their responses, especially when we're talking about custom technologies and custom frameworks. And that's all we have for today. You have our coordinates up there: we'll be at the CNCF Pavilion Thursday afternoon and Friday morning. You have the QR code for our website, and if you want to get involved, we have a channel at the Konveyor, sorry, at the Kubernetes Slack instance, and a mailing list.
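The RAG idea behind this can be sketched in a few lines of Python: retrieve previously solved migration examples that match the reported issue, then splice them into the prompt so the LLM sees how similar code was changed before. All the function names and data shapes below are illustrative, not Kai's real API:

```python
# Sketch of retrieval-augmented prompting for code migration.
# "history" stands in for a store of previously solved incidents.

def retrieve_examples(issue, history, limit=2):
    """Toy retrieval: match past solved incidents by rule ID."""
    return [h for h in history if h["ruleID"] == issue["ruleID"]][:limit]

def build_prompt(issue, snippet, history):
    """Assemble an LLM prompt with retrieved before/after examples."""
    examples = retrieve_examples(issue, history)
    context = "\n\n".join(
        f"Before:\n{e['before']}\nAfter:\n{e['after']}" for e in examples
    )
    return (
        f"Issue: {issue['message']}\n"
        f"Here is how similar code was migrated previously:\n{context}\n\n"
        f"Now migrate this code:\n{snippet}\n"
    )

history = [{"ruleID": "javax-00001",
            "before": "import javax.inject.Inject;",
            "after": "import jakarta.inject.Inject;"}]
issue = {"ruleID": "javax-00001",
         "message": "javax.* packages were renamed to jakarta.*"}
prompt = build_prompt(issue, "import javax.inject.Inject;", history)
print(prompt)
```

The value of the retrieved context is exactly what the talk describes: for a custom in-house framework no commercial LLM has seen, a couple of real before/after examples from your own portfolio steer it toward the right transformation.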
Just come and join us and tell us what your pains are with modernization and migration. Thank you so much. Thank you. Thank you.