Good morning, good afternoon, good evening, wherever you're hailing from, welcome to another episode of The Level Up Hour. I am Chris Short, executive producer of OpenShift TV. I am joined by a bunch of Red Hatters this morning, people I have had on the channel before and new people, so it's great to have them all, but I will hand it over to the one and only, the illustrious Langdon White. I think I've been on this show before. Yes, you have. And he's been on the channel here and there. So just in case anybody wasn't sure who you might be referring to. So let's quickly introduce the show and then I'll ask our guests to kind of introduce themselves. As I often comment on the show, Red Hat is notorious for changing group names and changing role names and all that stuff. So keeping track of someone's current title and what part of the company they work for is often entertaining and challenging, but we do what we do. It's funny because so often our job doesn't change, just everything around it. And so we're kind of like, oh, all right, so we're now part of this group. So we'll do that in two seconds. But as always, I like to share my amazing artwork of my slides, no fault to the designers who actually design the graphics. This is just my cobbled-together version from their good sources. So this is The Level Up Hour, where we talk about containers and we talk about why you, who may or may not have any familiarity with containers, really should get familiar with them, because they're super handy. They're not just another load of hype, and they are really useful even when you're just using them for, well, we've had episodes about tools containers and about running your own personal services and all kinds of different things. They're great, they're really, really useful. They're so much easier than a VM. And so we hope to convince you. And then, oh, today I forgot, Chris Short is back. Look at that.
But last week we had, I guess two weeks ago now, we had Andrew Sullivan hosting the show with me because Chris went off and took a vacation. Can you believe it? I know, it's, well, no, last week was vacation. The week before that was chaotic. Oh, yeah, so that was gonna work, so right there. So yes, I forgot to update the slide, but Chris is back with the Twitter handle where you can always find us, Chris Short. My Twitter handle is Langdon with a one, but feel free to ask Andrew Sullivan whatever you like about the show because it will entertain the both of us to no end. Or you can also come to his show, Ask an OpenShift Admin, which I believe is on in- In two hours, yeah. And you can ask him about our show there, which would also be- Which would be hilarious. Right, right. But you can also join us on our Discord. And I assume you're doing the magic Discord link in the chat. And so you can find us there. We kind of have dribs and drabs of conversation. We also have been told by our corporate overlords that we need to say, click like and subscribe down below much more often. Yeah, wherever you're watching us from, please subscribe and follow, whatever it is. Right, whatever technology you're using, that way we can show how awesome the show is to everybody else who likes numbers about things. And feel free to ask us questions in the chat at any time. We try to keep track as much as we can. So from last time, which, yeah, was two weeks ago, last week was Summit, so we didn't have a show, but the week before that, we talked about the Container Health Index, which is a feature of Quay. And I thought that episode was quite interesting, not only from, wait, it just didn't update. Sorry, so this says the Container Health Index is today's episode, but it's not. Today's episode is about Tackle and Konveyor. And we'll talk about that in two seconds. The link to the show notes from the last episode is the exact same thing you see here, except it's E37, obviously.
And so yeah, it's weird how the, I wonder if the slides didn't update in general, because I just edited them, but they didn't seem to take. So I will make sure to stop and restart them before we start the next thing, so the points are right. So starting off, let's see. Ramon, do you wanna introduce yourself? Because you are at the top of my screen. Absolutely. So hello, everyone. My name is Ramon Roman Misen. I am the current product manager for the Migration Toolkit for Applications, which is the current downstream distribution for, well, the future downstream distribution for Tackle, the tool from the Konveyor community. And, sorry. And Phil, do you wanna introduce yourself? Yes, hello. My name is Phil Katanak. I'm an engineering manager who works in the middleware department of Red Hat. And I'm responsible for the development of the Windup tool, which gets productized as the Migration Toolkit for Applications. And my engineering team have also all been working on the Tackle project within the Konveyor community for the last six months. And we're very close to the first release. Cool. That's always a nice feature. We were talking about swag before the show started. Will there be cool t-shirts for launch day? I don't think we've talked about that, have we? I don't think we have anything planned around that. Sorry to say. That is not the Red Hat way. You really need to think about what swag you'll be doing. I still, well, when it's a little colder out, I still wear my RHEL 7 release jacket, which is super nice. So, you know, it's something to consider. We're always, I'm always about the swag. So why don't you tell us a little bit about, kind of, you know, either one of you, you know, what is the goal here?
So not necessarily, and we're trying to talk primarily about Tackle because I think we have some of your, you know, compatriots, I guess, on the Konveyor project, because Konveyor is actually a bunch of different projects, on future episodes. So what specifically is Tackle about? Okay, cool. So Tackle is a toolset that is aimed at application refactoring, migration and modernization towards Kubernetes. The idea is to help users out there leverage all the different features that Kubernetes provides for application development and lifecycle management in general. Okay. And why, like, and so what does it do to kind of try to solve that problem? Like what is it about, like, you know, why can't I just go flip a switch somewhere? What do I have to do? What's involved in bringing an application onto Kubernetes? Okay, yeah. Well, the idea of Tackle is to provide some sort of guidance for users. Basically, what we have right now is a set of automated analysis tools that go through the source code or the binaries of the different applications you want to migrate and provide some hints on how to actually perform this kind of migration. We are also adding some other tools. For example, we are about to release the application inventory, which is something aimed at, you know, helping users out there govern their own application landscape and application portfolio. So the idea is to have everything, you know, in one place and have some integration between the different analysis and assessment tools that we have within Tackle. So that's it for the analysis. We are also adding a lot of, you know, very cool, very sci-fi, I would say, analysis tools coming from IBM Research, which is our partner in crime in this tool. And also we are adding an assessment tool. You might be familiar with the name Pathfinder, which is an assessment tool that was originally developed in Red Hat Consulting.
And what we are doing right now is basically rebuilding it from scratch and sharing it with the Tackle community. And it's something that, if everything goes well, will be available in a couple of weeks out there. So I noticed you said IBM Research. Do you have to like pay 25 cents or something to say Watson? I'm very impressed you didn't use that term. No, for the moment, we don't have anything related to Watson, but we are definitely leveraging the knowledge that the IBM Research guys have about AI. So lots of AI, lots of very scientific, mathematical stuff applied to this problem space, which is really, really, really cool. So we're partnering here. They are providing the muscle and the brains to develop these, I would say, revolutionary approaches to the problem space. And we provide our knowledge from the kind of large-scale migration projects we have been doing for years. Yeah, so just responding to that a little bit more, Langdon, you were talking about artificial intelligence. So one of the things that's arriving really shortly as part of Tackle is the application inventory. And that's like a hub for the application modernization and migration process. So it's a repository of applications, you know, your portfolio of applications really, as a customer. And it allows you to assess them through a questionnaire-based tool, which is Pathfinder, or analyze them through Windup, so automated code analysis, to try and figure out which ones are good candidates for containerization and migration onto Kubernetes, and which ones potentially have sort of cloud-native anti-patterns that need to be remediated before you could consider migrating them across. And one of the tools that the IBM guys are working on is a thing called the application container advisor.
That's going to be using AI to take characteristics of the applications in terms of which software components they're built on top of and then sort of recommend which container base image could be used to host those applications. So that's a really solid example of AI. That's, yeah, that's interesting. Basically by using, or I would imagine, right, using some sort of machine learning around prior art, essentially. So where it's been successful in the past, you're likely to be successful in the future, kind of idea. For the moment, the AI is more focused on understanding natural language. So you basically describe the application using natural language, using a tag model that we have built within the application inventory. But yes, like you said, this machine learning piece is on the roadmap for them. So they want to be able to learn from previous decisions in order to recommend different container images to containerize your applications. Yes. Right, right. So I think, just for the audience to kind of wrap their heads around this, can maybe one of you share, like, where do I start? Like what's that first screen, to kind of wrap your head around how you would be approaching this problem? You know, excuse me. Do you start more from the portfolio side or do you start from one individual application side? Yeah, so I mean, I can share my screen at this point if you would like me to, Langdon. So essentially, yeah, the first thing to do, I think, is to import, or onboard, a collection of applications and some characteristics that describe those applications. So maybe I'll share my screen just to give you a flavor of what the application is. Yeah, I think that would be good. Yeah, okay then. Okay then. So hopefully you can see my screen, and I'll just make it a little more presentable. So what we have here is one of our test environments which has the application inventory installed.
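To make the container advisor idea a bit more concrete: a deliberately naive, keyword-based sketch of "describe the app, get a base-image suggestion" might look like the following. This is purely illustrative Python with made-up hint data; the real Tackle/IBM Research advisor uses natural language understanding and machine learning, not a lookup table.

```python
# Purely illustrative: the real container advisor applies NLP/ML to the
# application description; this keyword lookup only shows the shape of the idea.
IMAGE_HINTS = {
    "spring boot": "a Java/OpenJDK base image",
    "node": "a Node.js base image",
    "postgresql": "a PostgreSQL database image",
}

def recommend_images(description):
    """Return base-image suggestions whose keyword appears in the description."""
    text = description.lower()
    return [image for keyword, image in IMAGE_HINTS.items() if keyword in text]

print(recommend_images("Spring Boot service persisting quotes in PostgreSQL"))
# -> ['a Java/OpenJDK base image', 'a PostgreSQL database image']
```

The machine-learning piece Ramon mentions would effectively replace this static table with a model learned from past containerization decisions.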
And what you can see here is a list of applications that have already been either manually imported through the UI or imported through a file import process. And each application has a name, it has a description, and it's linked to a business service. So a business service is just a bit of metadata that helps describe which business service this application supports. And are those like arbitrary? So like I can kind of say, okay, I have these various groupings of applications within my portfolio? Okay. Yes, yeah. So if you were a, say, retail bank, you'd have things like retail lending to customers for mortgages, home insurance, personal loans, all these sorts of things. You could create a separate business service for each of those different sort of verticals within your overall organization. And then you would link the applications to whichever verticals those applications contribute towards. That's the idea behind them. Yeah. And once you expand on these applications, you can see a little bit more detail about them. I'll just click on one that's been assessed already. So the idea is that we have this extensible tagging model that allows users to categorize the applications in as many dimensions as they want. And we can group tags together into groups we call tag types. So we could say, and I'll show you how those tag types are presented. Actually, I'll just click over to the controls. So the controls are a set of control data that add value to the application inventory itself or to the assessment process. There are five controls in total, and the most interesting one is the tags. So basically tags are grouped into tag types, and then within each tag type you have several different tags. We do supply a pre-shipped set of tag types and tags, but obviously the customer is totally at liberty to replace them and add their own, that kind of thing. So they can add as many dimensions to their applications as they want. This is a good example.
You can kind of go into, okay, so here's where, like, so how do I use those tags? Like how do I kind of go see an application view based on the tags? Yeah, so essentially, if I just demonstrate the tags, then I'll go back to the application inventory and you can see how we leverage them. So essentially I've created five different tags on the database tag type for the different flavors of databases out there. Obviously, typically an application will be using one or other of these databases for persisting data. If I go back to the application inventory and I want to look for applications that use a database type, I would search by tag and then I would just flip through the list of available tags on the drop-down list. You can see the tag type is qualified underneath. So we've got operating systems and all these sorts of things, runtimes, databases. So if I wanted to look at my DB2-based applications, this would give me the subset of applications that contain DB2, yeah? And if I wanted to further distill that down and find those applications that run on the z/OS operating system, again, I would just look for z/OS as well, as the operating system, and then I get an even smaller list, yeah, once you've expanded these, they're expanded. Yeah, so that's the subset of applications that run on z/OS and use DB2 as the database management system. So this is kind of at the portfolio planning level, right? So you would imagine that this is the purview, for lack of a better word, of like an enterprise architect, right? Or someone like that, right? Like would you expect a developer to be in this application? Or is it more like the enterprise architect is gonna lay out, I mean, so I used to work for a consulting company, we had this product, right?
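For readers following along without the demo, the tag filtering Phil walks through (tag types qualifying tags, and intersecting filters like DB2 plus z/OS to narrow the inventory) can be sketched in a few lines. The application names and tags below are invented test data, not Tackle's actual data model or API.

```python
# Illustrative sketch only: Tackle keeps this in its inventory service;
# each tag here is a (tag_type, tag) pair, mirroring how the UI qualifies tags.
apps = [
    {"name": "claims-api", "tags": {("Database", "DB2"), ("Operating System", "z/OS")}},
    {"name": "quote-ui", "tags": {("Database", "PostgreSQL"), ("Operating System", "RHEL")}},
    {"name": "billing-batch", "tags": {("Database", "DB2"), ("Operating System", "RHEL")}},
]

def filter_by_tags(apps, wanted):
    """Return names of apps carrying every (tag_type, tag) pair in `wanted`."""
    return [a["name"] for a in apps if wanted <= a["tags"]]

# One tag: every DB2 application.
print(filter_by_tags(apps, {("Database", "DB2")}))
# -> ['claims-api', 'billing-batch']
# Two tags: the DB2 applications that also run on z/OS -- a smaller list.
print(filter_by_tags(apps, {("Database", "DB2"), ("Operating System", "z/OS")}))
# -> ['claims-api']
```

Each extra tag in the filter is an intersection, which is why the list shrinks as Phil adds z/OS on top of DB2.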
Consulting companies have weird products sometimes, called Surveyor, and where we would come in after, say, a management consultant came in and told you some crazy plan, we would come in and tell you how to actually implement that plan, you know, with your technology choices. And so we would lay out, you know, multi-year project plans of portfolios or whatever. And so is that kind of what you're, that's what you're kind of envisioning here, right? It's like to be able to keep track of everything that's in place so that you can kind of say, okay, we're gonna do some modernization here, we're gonna retire this thing there. And then if it was modernization, then you would hand that thing off to some sort of development team to go and actually do that work, right? Exactly, exactly. The idea for the inventory, actually, it's something that has been requested from the field for years. I myself have requested having something like that. I come from Red Hat Consulting. I've been doing these kinds of migration projects for almost five years out there, you know, large-scale migration projects towards different application servers or OpenShift adoption projects. So one of the key things here is to be able to govern the landscape and the whole application portfolio. So having this, let's say, central hub that is integrated with the analysis and the assessment tools allows the consultants in the field and, you know, any architect that is leading this kind of transformation process to have a holistic view of the whole application portfolio that needs to be analyzed and migrated. So with this holistic view, you're able to identify risks as early as possible and to get details that span across the whole portfolio, in order to design a migration process that is as realistic as possible and to make sure that it can be reliable and predictable enough for the whole migration project.
So that's it for the, let's say, enterprise architect role. For the actual migrators or developers that perform the migration once everything has been assessed and analyzed, well, we have some IDE plugins coming from the Windup code base that integrate with the IDE and basically tell you exactly what needs to be changed at the code level. So that's one thing. But also in the future, we're thinking about providing some integration from this application inventory, maybe with some tools like Jira or something like that, these kinds of tools that developers are more used to interacting with. In order to, let's say, you create some sort of migration wave or stream within the application inventory, and then you are able to export it as issues on Jira for developers to take them and perform the actual changes. So that's something we want to do. So one thing at least that occurred to me, from kind of a developer or even kind of not the enterprise-level architects but the systems architects that are working with these migrations, this might also be a good discovery tool, and in some ways also an anti-discovery tool, in that, kind of using that DB2 example, DB2 is set to be sunsetted, right? We want to get off that. We want to migrate to something else, I don't know. And so what might be useful also to be able to say is, like, as a system architect, when I'm trying to plan that new migration of the new application, is to know, okay, this service doesn't exist yet, but it's planned. So go make sure you work with that team so you don't develop the same thing. Or this service is available and is meant for long-term usage. Oh, and this service, even though you discovered it and it looks like it's perfect for what you want, it's set to be sunsetted. So don't go building it into your new application.
So that was kind of one thought I've had, having seen this application once or twice, but it might be nice if you could use it for some level of discovery around how you're going to do the application migration. Because it's always a fine line of whether you're going to do what is often referred to as a lift-and-shift migration, or you're actually going to do some level of application modernization, or whatever you want to call that. But basically, getting your environment more holistic over time, or simplified really, always helps, and so using a tool like that might be an interesting way to be able to see what services are available that I could take advantage of, so that I could do more of a modernization activity on this application rather than the planned lift and shift, with roughly the same amount of effort. So that was one thing that occurred to me. Yeah, well, we're not planning to, you know, we're not thinking about making this application inventory become some sort of CMDB. You know, the idea is to keep it focused on the migration process. For the moment, it is not only applications that we load up into the application inventory, we can also load up some dependencies, and yeah, the concept of dependencies between applications, or applications with some third-party kind of middleware or database or something like that, that concept already exists in the tool. And what we intend to do with Pathfinder is, you know, have some sort of guided process to help users decide what they want to do regarding the well-known six Rs application migration strategies. Yeah, do you want to elaborate a little bit more on that? Like, so, you know, kind of maybe. Well, just demonstrating what the questionnaire looks like, Ramon, and the... Yeah, I think so, yes. Yeah, yeah, so I'll just share my screen again. Share my screen. Right, okay then. So I'll just change the selection criteria. I'll just clear the filters.
And we've got a business service that's been set up that's got a number of applications related to motor insurance; this is just test data. Yeah, essentially, with each application, what we expect users to do is group applications together, and then for those that share common characteristics, complete a questionnaire that is representative of them all, and then that informs the risks associated with trying to migrate these applications across to enterprise Kubernetes. So to go into the questionnaire process, you just need to select any individual application and click the assess button. And it takes you into this assessment process, and basically there's a first page which allows you to define the group of people who have been involved in the assessment. So it could be individual stakeholders that you want to add to the actual assessment process, or it could be stakeholder groups that you've predefined. So I've got this predefined stakeholder group that's a collection of stakeholders that represents the audit, security and project office, for example. Yeah, I've got another group of stakeholders that are linked to the motor insurance business function. Yeah, and then once you've selected who's involved in the assessment process, and this just gives you an audit trail of who was leading the engagement and who's been contributing to the engagement. So retrospectively you can be sure that the right people have attended and made their contribution to the assessment process. Then you go through a series of questions that are grouped into categories that allow you to drill into the characteristics of these applications and how they're supported and how frequently they're deployed, all these sorts of things. And most of these questions are multiple-choice answers, and the way the question data is configured is that, depending upon the answers, risks will be raised.
There'll be some answers that are sort of green category, some that are amber category, some that are red category. And these sort of raise flags about the suitability of these applications for containerization. As a consequence of driving one of these engagements, what will happen is the person leading the engagement will naturally solicit information out of the group of attendees who are contributing to the process, and they're apt to get information that can't just be encapsulated by a single radio button on a single question. So for every single section of the questionnaire, there's an additional notes or comments section, and you'd expect this to be populated with information that is solicited from the conversation. So ultimately, once all of the questions have been answered, the assessment questionnaire is essentially complete, and the user must go into the sort of next step of the process, which is the review process. So if I just go back to the application inventory and I choose the earlier one again, sorry, and just click on the review button. What you will see here is essentially the outcome of the assessment. This pie chart represents the responses in terms of the amount of risk and the types of risks that have been raised as a result of the questionnaire. As you can see, this backend application seems to be a very good candidate for migrating onto Kubernetes. And then what the users are expected to do, based upon the risks that have been surfaced, and these are the sorts of risks that get surfaced from the question-answer combinations, is make a decision about this application in terms of what the proposed action is, how much work's involved, and also categorize the application in terms of its business criticality and the priority of migrating this particular application compared to other applications in the portfolio.
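The green/amber/red mechanics Phil describes can be illustrated with a tiny scoring sketch. This is hypothetical Python; Pathfinder's real risk rules live in its questionnaire configuration, and the bands and rollup rule here are assumptions for demonstration only.

```python
# Hypothetical sketch: each multiple-choice answer maps to a risk band, and
# the assessment rolls up to per-band counts plus the most severe band seen.
RISK_ORDER = {"green": 0, "amber": 1, "red": 2, "unknown": 3}

def summarize_assessment(answer_bands):
    """Count answers per risk band and report the most severe band."""
    counts = {band: 0 for band in RISK_ORDER}
    for band in answer_bands:
        counts[band] += 1
    worst = max(answer_bands, key=lambda band: RISK_ORDER[band])
    return counts, worst

counts, worst = summarize_assessment(["green", "green", "amber", "green", "red"])
print(counts)  # {'green': 3, 'amber': 1, 'red': 1, 'unknown': 0}
print(worst)   # red
```

The per-band counts are what the review screen's pie chart visualizes; an application that is mostly green, like the one in the demo, reads as a good containerization candidate.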
Yeah, it also gives the users the ability to add some free-format comments, which sort of helps provide some qualification as to why some of these particular values have been chosen from the dropdown lists. So that all makes sense. And then once the review's been completed, again, I'll just choose a subset of the applications. So I'll go back to those motor insurance ones. Yeah, you can click on the reports function, and the reports function gives you a nice overview of those applications that satisfy this filter criteria in terms of their suitability, which ones are low, medium and high risk. It gives you a nice view of those applications in terms of their criticality, their priority, and the degree of confidence that you have that they're suitable for rehosting or refactoring or whatever. And also a nice adoption plan, which takes into consideration the application dependencies. So for example, this particular application here, the front end for the quote generator, depends upon the authentication application. So it's like a Gantt chart, essentially. And the lengths of the bars are reflective of how much effort was allocated to each application, which is one of those fields that you populate during the review process. And then these things are ordered in business priority with dependencies taken into consideration. So it starts to give the user a high-level plan of which applications to migrate and in which order, and realistically how much time they need to ring-fence for each particular application. Does that make sense? Yeah, and so specifically around the dependencies part of it, what I'm curious about is, like, does the risk factor or effort factor take into account as part of its input the number of dependencies? Does that kind of increase or lower it? Because I would think that that would increase its score, as in make it harder to migrate than... Yeah, there are some specific questions around the dependencies.
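The adoption-plan ordering Phil describes, business priority constrained by dependencies, is essentially a topological sort with priority-based tie-breaking. Here is a small sketch using Python's standard `graphlib`; the application names, dependencies and priorities are invented for illustration and are not Tackle's actual planning logic.

```python
from graphlib import TopologicalSorter

# Invented example data: app -> apps it depends on (those must migrate first),
# plus a business priority (lower number = migrate sooner when unconstrained).
deps = {
    "quote-frontend": {"auth-service"},
    "auth-service": set(),
    "claims-backend": set(),
}
priority = {"quote-frontend": 1, "claims-backend": 2, "auth-service": 3}

def adoption_order(deps, priority):
    """Order apps by business priority, but never before their dependencies."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    order = []
    while ts.is_active():
        # Among apps whose dependencies are all done, take highest priority first.
        for app in sorted(ts.get_ready(), key=lambda a: priority[a]):
            order.append(app)
            ts.done(app)
    return order

print(adoption_order(deps, priority))
# -> ['claims-backend', 'auth-service', 'quote-frontend']
```

Note how the top-priority quote-frontend still waits for auth-service, mirroring how the demo's plan schedules the authentication application before the front end that depends on it.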
And obviously they will influence whether it's high risk, low risk or medium risk. There isn't any logic in the application at the moment that is specific about counting the number of dependencies and then elevating or decreasing the risk based upon that count. But it is a good idea, Langdon. So I think I'll put that in the backlog. Yeah, like I said, I've done a bunch of these scary activities, but I have kind of a related question. So you were mentioning kind of AI, like around natural language or whatever; is that applied here? Is this where that's proposed, not yet, obviously, but... Not yet. Not yet. Yeah, that's it. We were starting to entertain the idea with our IBM Research colleagues. So this is something that came from a meeting a couple of days ago. So since they have this machine learning... I want all the answers now already. It's been like 24 hours. So yeah, no, the thing is, since they are building this machine learning engine for the container recommendations, we are thinking about using this same kind of engine to provide recommendations on whether to rehost, replatform or refactor your application. So we want to have some AI support here for recommendations about which path to follow. For the moment, it is the user, after gathering all the different risks that have been raised after doing the questionnaire, it is the user who decides which path to follow. In the future, we want the system to be able to provide recommendations based on both the output of the questionnaire and previous outputs and decisions made by the user or other users within the same migration project. I would actually, yeah. So basically the reason I was thinking about it was two things. One was, as Phil, as you were kind of explaining, it's like, as you're... And I laughed when you said this, it's like when you're especially coming in as a consultant. So you come in, right? And you're having this portfolio conversation, right?
And like I have literally sat outside someone's office door for multiple days, just sitting there, waiting for them to have time to meet with me. You know, it's very clear how many hundreds of dollars an hour they're paying for me to just sit there outside the office doing nothing. But so sometimes it's like pulling teeth. But the comments section that you were kind of describing, I think that's a really important part, right? And maybe even worth building upon, because you often capture a lot of information there that isn't strictly relevant immediately but might be later, or whatever. What I was just thinking is that applying some natural language, maybe not real soon because you need a set of data to kind of work from, but applying some natural language understanding to the data in the comments section may also be able to help inform the application migration. I mean, this is an extreme example, but in the comments section you're putting in: the team is wildly geographically diverse, or the team all just started on the project two weeks ago. Things like that, things that you may not be capturing or don't know to capture yet, right, in the questionnaire, might be interesting to kind of start to look at from a natural language perspective. I like to joke around about using AI to do flame war detection, right, in various open source communities. Same kind of idea, right? You can get sentiment, those kinds of things, out of it. The other question I had too was, oh, no, that's, Phil, Phil, we should bring Langdon onto the team. Do you have some? I think we should, yeah. Yeah. My team. We'll have him. We'll have him. Come on. Yeah. We need his illustriousness here. Yeah, that's right. What are you doing next week? Do you have some free slots next week? I think we need to talk.
Yeah, so yeah, like I said, I used to do a lot of this stuff for a long time, and you can see why I joined Red Hat: to get out of, you know, get away from that nightmare. So yeah, so I just, I think there's a lot of opportunity there. So one of the things that I find a little bit confusing is, okay, so I know the questionnaire part is Pathfinder; what's the overview app called? Is that Tackle or is that also Pathfinder? Well, the glue that holds it all together is the application inventory. You know, ultimately what we need to do is, so we've got Pathfinder integrated in the application inventory, and essentially the controls, Pathfinder and the application inventory are three different microservices, yeah. And what we need to do, we've got the Migration Toolkit for Applications, which is upstream as the Windup project, which does the automated code analysis, and we need to bring that to the party so that it's integrated into the application inventory. The idea being that, within the application inventory, you could provide the path to your enterprise archive files or the GitHub repository from which the application is built, and then you click an analyze button, and Windup would run behind the scenes, analyze your source code and generate a collection of reports that raise awareness of any migration issues for whatever target you're working towards, yeah. So sorry, just because I gloss over it because I know what it is, but can you explain for the audience what Windup is? Yeah, yeah. So, okay, I can do a brief demonstration if that's gonna be helpful. Oh, sure, yeah. I mean, I think pictures definitely help, yeah. Yeah, yeah, okay. So let me share my screen. So while you're setting everything up, Phil, I would say Windup, or MTA, which is the downstream distribution for Windup, is an automated analysis tool that can use both source code and binaries.
And the idea behind this tool is to measure the gap that you have from a certain runtime towards a certain target runtime. So for example, it was built back in the day to help on migrations from different application servers, WebLogic, WebSphere, towards EAP. That was the original purpose for the tool, but now we have added a lot of rules that help with containerization, modernization, everything related to 12-factor applications, things like Spring Boot to Quarkus. So we are basically reusing this rules engine and this analysis engine for the purpose of application modernization as well. Okay. And so at least, you know, this is primarily a Java-focused application? For the moment, it is, okay. Yeah, yeah. I mean, it can analyze any text file, to be truthful. So config files and things of that nature, XML files that provide configuration; it doesn't necessarily just have to be Java class files. But as Ramon mentioned, its origins were all about getting workloads onto EAP from the other middleware providers, and what it still does very well today is upgrades from one version of EAP to another version of EAP, yeah. Right, right, okay, right. Yeah, okay. So essentially, hopefully you can see my screen again. This is the web UI for the Migration Toolkit for Applications. We have a web console. We've got a Maven plugin. It's available as an operator on OpenShift. And we've got a CLI, obviously, and we've got numerous IDE plugins that support CodeReady Studio, Eclipse, Eclipse Che, VS Code, and in July we'll also have a plugin released for the IntelliJ IDE. So it really can be consumed in many different ways, but the web UI is probably the easiest way to demonstrate it and for people to understand the concepts. So we start by creating a thing called a project. And a project is just a way of grouping applications together that you're going to analyze or migrate to the same target, yeah. So I'll call this one demo two, yeah.
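For readers who want a concrete mental model of the rules-engine idea described here, the following is a minimal sketch of a pattern-matching analyzer. To be clear, this is purely illustrative: the rule patterns, messages, and target names are invented, and real Windup rules are much richer XML documents, not Python regexes.

```python
import re

# Invented rules for the sketch: each target maps regex patterns to a
# migration issue message. Real Windup rules are XML with richer conditions.
RULES = {
    "eap7": [
        (re.compile(r"import\s+weblogic\."),
         "WebLogic-specific API; replace with a standard Java EE equivalent"),
    ],
    "cloud-readiness": [
        (re.compile(r"[A-Za-z]:\\"),
         "Windows-style file path; use a portable path or a mounted volume"),
    ],
}

def analyze(source, targets):
    """Scan each line of the source text and emit one issue per rule hit."""
    issues = []
    for target in targets:
        for pattern, message in RULES[target]:
            for lineno, line in enumerate(source.splitlines(), start=1):
                if pattern.search(line):
                    issues.append({"target": target, "line": lineno,
                                   "message": message})
    return issues

# A two-line "legacy application" with one hit per target.
legacy_source = 'import weblogic.jndi.Environment;\nString log = "C:\\\\logs\\\\app.log";\n'
found = analyze(legacy_source, ["eap7", "cloud-readiness"])
```

The point of the sketch is just the shape of the workflow: you pick targets, each target brings a collection of rules, and every match becomes an issue tied to a file location.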
You can give it a description if you want to, but you don't necessarily have to. The next thing you have to do is choose which applications, or directories containing applications, you want to analyze. I've obviously got some test data, and I'll pick a couple of sample applications that we can use for demonstrating. Click those three there, yeah. So that's uploaded three applications that are going to be decompiled and then analyzed. And then the next thing, which is probably the most appealing of the screens, is all about choosing the target, yeah. So what you choose is which collections of rules you want to consider for execution when running against these applications. The one that's chosen by default is EAP7. And another collection of rules is for containerization, which contains the cloud and 12-factor app considerations. You click on this one, which looks for sort of Windows-type file paths in the application. Line endings? True to the point. Pardon me? Wrong line endings. Like hard returns, hard returns. Yeah, yeah. And there's other ones as well. So OpenJDK, rules for Spring Boot to Quarkus, rules for making sure you've got the right version of Spring Boot if you're running Spring Boot applications, and Red Hat Runtimes. And there are other targets as well, but these are just the ones that have probably got the highest profile and that we want to really put in front of the users, ones that they're generally going to be selecting, yeah. When you click the next button, it runs behind the scenes and identifies all of the packages within the applications. There are many standard libraries provided that we are aware of. And you wouldn't want to analyze those, because really what you're interested in is just the business logic, the application logic. You're not interested in analyzing the libraries that it's being built upon.
So you can interact and see which of these packages are not business logic, but by default the Windup application generally gets it right and you don't have to mess with this interaction at all. Then there are some advanced options for if you want to write some rules yourself, because the application is a rules-based engine looking for characteristics within code. And when those characteristics are discovered, it generates issues that provide content for the report. There are lots and lots of rules delivered as part of the application, but customers can add their own rules. So it's entirely extensible. Likewise, there's a set of labels, which I'll demonstrate in a second, which are evaluated in the reports, and you can add additional labels yourself as well. There's a collection of advanced options, which will load hopefully in a second, which allow you to export the results to CSV, and some edge-case-type features such as mavenizing your project or keeping your work directory, and all that kind of stuff. But all this... I have to say, for us consultants in the field and architects in the field, this mavenize option has saved our lives many times, because you still find lots of customers out there, you know, with no dependency management. You almost want a Maven target, like Quarkus, right? Like, you know, or like the opening blocks, like having Maven as just a straight-up target might not be a bad... Well, yeah, it makes sense. So what did you say you were doing next week? Well, no. Yeah, exactly. So actually, I had a couple of questions here that I was kind of wondering about. Like, where do I find the rules that are triggered, or the collection of rules, I don't know quite the right word, for the Quarkus migration, let's say? Okay, yeah. So we've got a... Yeah, within the GitHub repository for Windup, there's one for the rulesets, and it's structured basically by target and then source. So let me just show what's on my...
So this is my clone of the GitHub repository for the Windup rulesets. And the way it's structured is basically windup-rulesets, rules-reviewed, target, so eap7, and then the subfolder is the source, yeah? So any rules that migrate from EAP 7.2 to the latest version of EAP will be found within this folder, yeah? And these are the list of rules, yeah? So once you understand how the rules are structured, they're quite easy to navigate to. But when you generate the reports, from the issues that are fired you can also trace back to the rule that fired them. So I'll demonstrate that in a second, actually. So click on next, and you get a confirmation screen that will just give you a summarization of what you're going to be doing. So these are the applications you've set to analyze, these are the targets you're going to analyze against. These are the packages that will get analyzed, yeah? And you haven't chosen any advanced options. And then if you click save and run, that will kick off the analysis. So it'll take me to another screen that gives me a progress list that you can follow to understand where we are with the analysis. But just like any good demonstration, what I will do is I will click over to a pre-prepared one. And this icon here is the set of reports. Because I don't have enough jokes to fill the time for a full analysis. So, yeah, that was great. And then you start actually seeing the reports themselves. And these labels here allow you to see which technologies are supported or unsupported or embeddable within each particular runtime. So whether it's JDA, VSG or CEP, as I say, this gives you a key to give you an understanding of which of those technology tags are supported by those runtimes. And you can create your own versions of these target labels, which was one of the options in the configuration. The thing that's really important is the issues list.
If you have a look at some of these issues, you can click on them, and it'll tell you which file the issue is located in. You get a nice lengthy description of what needs to change, with some reference materials. And if you want to drill into the detail of why this particular issue has been generated, what's the rule behind it, click on show rule. And okay, that's not the most readable of rules, but basically there's a condition that says, when you see a reference to this particular method within a method call, then what you need to do is basically generate an issue. And the issue has a category which indicates severity, how much work is involved, and gives a title and some related information that you saw on the last report. Yeah. Right. I would add that having these kinds of tools that provide so much low-level detail about the application portfolio is what has enabled us in the past to successfully deliver these kinds of large-scale migration projects. With this level of detail, gathered in an automated fashion, you are enabled to have a holistic view of the whole portfolio. And you are able to identify all the technical risks at a very early stage, in order to design a migration methodology around, you know, how to solve those risks during the migration process. Yeah. Well, and I particularly like, one of the ones that I think is, you know, even though I'm somewhat biased here, is the Quarkus one. Because in so many ways, Quarkus is not like running a normal Java application, except that it is.
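The category-plus-effort structure Phil describes is what lets a report roll individual issues up into a portfolio-level view. Here's a tiny sketch of that rollup; the applications, category names, and point values are all invented for the example and don't reflect Windup's actual metadata.

```python
# Each analysis hit carries a category and an effort estimate; a report
# rolls these up per application. All values below are invented.
issues = [
    {"app": "inventory.ear", "category": "mandatory", "effort": 3},
    {"app": "inventory.ear", "category": "optional",  "effort": 1},
    {"app": "orders.war",    "category": "mandatory", "effort": 5},
]

def summarize(issues):
    """Total the effort points per application, counting only mandatory work."""
    totals = {}
    for issue in issues:
        if issue["category"] == "mandatory":
            totals[issue["app"]] = totals.get(issue["app"], 0) + issue["effort"]
    return totals

summary = summarize(issues)
```

A rollup like this is exactly the kind of early, automated signal that makes it possible to plan a large portfolio migration before anyone has opened a single source file.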
And I don't think it's obvious, particularly when you're at the outset, if you're thinking about, hey, I wanna migrate this thing to Quarkus. Because, you know, if you're a senior software developer, I think if you really think about it, you're like, oh, of course that's where the problem lies in trying to run this particular Java application on Quarkus. But you don't see it at all, or at least I didn't, right? When you're looking at the Java application and then you're thinking about, oh, I wanna move this over to Quarkus. You know, so the one that caught me really off guard was, there's an in-memory database called H2. And if you're running that as a single container, Quarkus-enabled element, it won't work, because that H2 database doesn't have any place to write to disk. So there are these weird things where it's like, oh, if I really dug through it, you know, I could probably figure it out, but that one's really interesting. I think containerization is probably in the same vein, but I have a lot more experience with moving things to containers, so maybe that's why it's less obvious. We have a question from the chat about the rules configuration. Carlos says, if I'm not wrong, the Quarkus rules can be viewed through the UI also, using the menu rules configuration. Is that an accurate statement? The menu rules? No, I'm seeing, the rules for a target platform, is there a UI element? Phil, does that ring a bell? Is he still here? Yes, he's entirely right. You can view the rules. Well then Carlos, thank you for picking us up on that one. There's a way of viewing all the rulesets that are going to be used in the analysis prior to actually kicking off the analysis. So you don't have to have the reports to see the rules via the UI. That is very true. That's good.
And I also wanted to say, we've been talking about what Tackle is today, but there are a lot of exciting things coming up, mainly from our IBM Research colleagues, but also on our side in terms of integration. So right now what we are aiming at is to have this fully integrated experience, from the application inventory to being able to assess the portfolio, which is something that we already have with Pathfinder. And now we're working on the actual integration between Windup and the application inventory. So we plan to, let's say, migrate Windup under the Tackle umbrella and have everything fully integrated. And since Windup is such a great analysis tool, the IBM team has developed a couple of analysis-related tools. One of them, for example, is Diva, which is able to analyze the data layer of an application and basically detect all the storage dependencies an application might have, and detect things like distributed transactions, and map them out and create graphs based on the dependencies that could exist: applications sharing the same database, transactions spanning across several databases or across several different applications or middleware elements. That's what Diva is. It is a standalone tool now. It has already been published under Tackle. What we're trying to do is bring Diva in as a first-class citizen within the analysis flow for Windup. The same goes for Tackle Configuration Discovery. Configuration files are usually something that we forget about in these kinds of migration projects. So we change the technology and then we struggle to change the configuration model between the two technologies. What this kind of tool tries to do is translate configuration files from one format to the other, to make them compatible with the target technology. That's something that we also want to integrate into the whole Windup analysis and transformation flow.
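To make the data-layer-analysis idea concrete: the output of a tool like Diva is essentially a graph of which applications touch which data stores. A minimal sketch of the shared-store detection follows; the application and database names are invented, and real Diva of course derives these edges from actual source and SQL analysis rather than a hardcoded list.

```python
# Invented application -> data store edges, standing in for what a
# data-layer analysis tool would discover from the code.
edges = [
    ("billing", "customers_db"),
    ("billing", "ledger_db"),
    ("crm",     "customers_db"),
    ("reports", "ledger_db"),
]

def shared_stores(edges):
    """Return data stores used by more than one application, i.e. the ones
    that are risky to migrate one application at a time."""
    users = {}
    for app, store in edges:
        users.setdefault(store, set()).add(app)
    return {store: sorted(apps) for store, apps in users.items() if len(apps) > 1}

shared = shared_stores(edges)
```

Even this toy version shows why the graph view matters: a store with two or more consumers means those applications can't be migrated in isolation without coordinating their data access.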
And I think the first MVP for TCD has been made available today on the Tackle repositories. And we also talked about the application containerization advisor, the one that uses natural language to suggest the container image you should be using based on the technology stack. And the last one, which for me is one of the most exciting ones, is the one related to test-driven migration. So basically what this tool does is analyze the source code of your application, and it is able to generate a test suite that creates some sort of functional profile of how your application behaves. One of the key problems that we have when we are migrating applications and performing source code changes is somehow being able to guarantee that the application behaves the same. So this is an automated way of creating a functional profile of how the application behaved before the migration, and then using it against the migrated version to see if it's still doing the same thing. That I think would be ridiculous. I mean, one of the things I think people don't capture in a lot of this stuff, right, is that in application creation, much less modernization, and modernization in particular, one of the biggest gating factors is usually testing, like QE, QA, and that resource, whether it be automated, whether it be people, whether it be whatever, that resource being so constrained that they can't take in more applications. And so as a result, even though you might have the developers to do the work, you might even have the code all done, you don't have anybody to actually validate that this thing does what it says it's gonna do. I saw countless applications that could not move off of the mainframes they were on because no one knew how to replicate the business logic that was encoded in the COBOL. Which is a common problem, yeah. And what I particularly like about that is that it's often banks and insurance companies, but that's a different problem.
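The "functional profile" idea Ramon describes can be sketched in miniature: record how the legacy implementation responds to a set of inputs, then replay that recording against the migrated implementation and flag divergences. Both functions below are invented stand-ins for real application endpoints, and the real tool generates the inputs from source analysis rather than taking a hand-picked list.

```python
def legacy_discount(total):
    # Legacy behavior: 10% off (integer units) for orders of 100 or more.
    return total - total // 10 if total >= 100 else total

def migrated_discount(total):
    # Migrated version with a subtle bug: the threshold became exclusive.
    return total - total // 10 if total > 100 else total

def capture_profile(fn, inputs):
    """Record (input, output) pairs: the functional profile of fn."""
    return [(x, fn(x)) for x in inputs]

def replay(fn, profile):
    """Return the inputs where fn diverges from the recorded profile."""
    return [x for x, expected in profile if fn(x) != expected]

profile = capture_profile(legacy_discount, [50, 100, 200])
diffs = replay(migrated_discount, profile)
```

The replay catches exactly the boundary case a human reviewer would likely miss, which is the point: the profile stands in for the QE capacity that is usually the migration bottleneck.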
So one of the things I was thinking when you were giving that list of these application migration and assessment tools is that it might be really useful if Pathfinder had a standard API or a consumption model or whatever. Because what I have also seen over the years is that almost everybody has written one of these as well, in their specific little world, in their specific little company, or even really large company. And they have this thing that they trust, and it might be super useful if not only could I plug Windup into it, but I could also plug in my custom homegrown thing if I did a little ETL on its output to get it into the right format. And setting a standard for that format means that then you can have lots and lots of plugins to it. I don't know, actually, this is kind of reaching out of my space, but like, I just blanked on the name of that company, but we use a product in the Linux world that's a paid product to do security assessment as well as code assessment. And I kind of wonder if there's some sort of standard out there that all of these analysis tools could filter down to, from an output perspective, so that you can surface all of them in something like Pathfinder. Which you can then extrapolate even more, to say, okay, I've got Pathfinder, which is telling me this application wants to do this thing, but then you can also slot in different pieces of the application and get a better understanding of what the effort level is, because now you've actually done analysis on the actual individual application. And then when the C# version of Windup comes along, you have a really easy way to plug it in without having to make Windup understand C#. Yeah, absolutely. We're still trying to figure out the integration patterns to make it easier for new projects to come on board to the whole Tackle umbrella.
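The "standard output format" idea amounts to a pair of thin adapters: each tool keeps its native output, and an ETL step maps it into one shared issue schema that an inventory UI could consume. Here's a sketch of what that might look like; the field names, the record shapes, and the homegrown line format are all invented for illustration, not any actual Tackle or Windup schema.

```python
import json

def from_windup(record):
    """Adapter for a hypothetical Windup-style issue record."""
    return {"tool": "windup", "app": record["application"],
            "severity": record["category"], "summary": record["title"]}

def from_homegrown(line):
    """Adapter for a hypothetical homegrown scanner emitting 'app|LEVEL|message'."""
    app, level, message = line.split("|")
    return {"tool": "homegrown", "app": app,
            "severity": level.lower(), "summary": message}

# Two different tools, one normalized stream an inventory UI could ingest.
normalized = [
    from_windup({"application": "orders.war", "category": "mandatory",
                 "title": "JNDI lookup of remote EJB"}),
    from_homegrown("orders.war|WARN|hard-coded IP address"),
]
report = json.dumps(normalized)
```

Once every tool funnels into the same shape, adding a new analyzer, homegrown or otherwise, is just one more adapter rather than a change to the inventory itself.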
I would say we are trying to bring in as many vendors, GSIs, and customers out there to share their knowledge and to contribute to this. And we are very open to integrating any tools that any GSIs out there might have developed in the past to address these kinds of problems. So yeah, we are definitely trying to figure out how to create some sort of generic-enough integration layer for all the tools, to ease the integration of any external tool in the future. Right, right, yeah. No, that makes a lot of sense. And so I was gonna ask a few more questions, but I wanna take a pause for a second, because it's about that time of the show; we are running up on 10 o'clock. So I would like to pause for a moment and share some sweet, sweet internet points, because everybody loves the internet points. All right, so for the sake of our guests: we have a concept on the show that we call sweet internet points. And every time we do an episode, we like to share who our leaders are in submitting for internet points. You can get them a bunch of different ways: by watching episodes and submitting the code that you see during the show; you can re-watch old episodes and look for the codes there and submit those, like, the only word I can think of is post-mortem, which is definitely not the word I mean. Not the right word there, yeah. Not the right word. But, you know, any episode from the past. You can also submit issues about shows you might like to see, or pull requests on the super bugs I have when we do shows that are more about coding. So without further ado, we have today's leaderboard, with Norenda at 5,800 points and Netherland-Tacom at 5,500 points. I don't know how much of the entertainment here is actually me destroying people's nicks on the internet. But I just dropped the codes that are on the screen right now, I also just dropped them into the chat. Hopefully they're the same. Yes, they are.
And then Noah Frickson and Joe Fuzz were both holding stable at 4,000 and 2,300 respectively. We need to get them back. We haven't seen them in a while. So they're not collecting codes. And then Detective Conan Kudo, I actually talked to him in another channel, and he missed the last episode, but hopefully he's back today. We'll see. He was back today. He had to run, but he'll get points later, he said. Stupid work. And then BaconFork is definitely catching up on the detective. So we need the detective to keep ahead of him on those points; maybe go back and watch the episodes they missed so they can... Oh yeah, like, in case you missed an episode, they're all on YouTube. Yes, yes, they are all on YouTube. And so we're gonna do the shout out, click like and subscribe, because, for the corporate overlords, not really; it's because we also like the props of y'all watching the show. So thanks so much, everybody, for always being here. And to our guests, I just wanted to ask one more question, which is that it sounds like you have a ton of plans. Is there a roadmap somewhere? Is there a place where I can go and see where you're hoping to go, or a way I can say, oh, I want that versus this? Is there a way I can try to help participate in where you're going with the project? Absolutely. We're trying to embrace the open source spirit as much as possible in the project. So basically what we're doing is doing our planning sessions in the open. And that goes for the whole Konveyor community and for all the tools inside the community. So a couple of weeks ago we did the June planning session, and you can find the output from that session and the roadmap that we presented on the Konveyor YouTube channel. So the whole session is there. You can see all the new stuff that we presented and the whole roadmap for the upcoming months. And we will be having another roadmap presentation session on Friday.
And I think this is going to be streamed by OpenShift TV as well. So, you know. Cool, awesome. I like that channel. I've heard of it before. Yeah, it's a good channel out here. Cool. So, and then I assume as part of the roadmap videos or whatever, where do I actually go to participate in the community itself? So I think you mentioned that there's a Slack channel. Yeah, well, to get all the news, we have a website, which is konveyor.io, and we also have a Slack channel in the Kubernetes Slack, which is #konveyor. And there we are. Okay, cool. I mean, so, the reason I like to ask that question, even if it's findable, is because I may discover a #konveyor on IRC that people occasionally take a look at, right? But what I wanna know, what I want our audience to know, right, is: where should I go where people actually are? Where does your team generally communicate with, you know, yourselves or people who are tangential to the team? Because, you know, now that we have six billion different chat server types, it can be a little challenging. That would be the Slack, for sure. You can find Phil, you can find myself, the whole engineering team; not only the Red Hat engineering team but also the people from IBM Research are there as well. So if you have any kind of doubts or problems with the technology, we're all there and we're happy to help everyone. Right. And also, if you have any kind of idea you would like to share with the community, maybe that's the place to engage with us, or going to the konveyor.io website and signing up for the mailing list. Those are the usual communication channels with the community and also with the Tackle team. Awesome, awesome. Chris, were there any other questions from the chat? There's a question about OKD. I'm gonna ask, Ahmed, please check in with the, there's an OKD Slack. I can find it for you, hopefully, maybe. I thought I was in it, but I'm not.
But yeah, there's an OKD Slack that you can go join and ask your questions about .NET Core and OKD, but I think it would be supported. I don't know why it wouldn't be, because we can do it with .NET Core and OpenShift, right? Like, yeah. Yeah, no, actually, I think I talked to somebody about it. So yeah, I mean, as far as what's running in the container, it doesn't really matter, you know; for the most part, you can just kind of run it. So as long as you're using a base image that runs .NET Core, then it should work fine. But yeah, I was looking, were you looking for the OpenShift Commons Slack, or were you looking for the Kubernetes or the CNCF one? The Commons Slack. Let me. Oh, I think I got it, too. Hang on. It's in here somewhere if I can find it. Oh, boy. OK, well, that might be a bridge too far, maybe, right? Yeah. I jokingly refer to the fact that I did .NET for a long time. And actually, there are two Red Hat employees, one is Gunnar Hellekson and one is David Egts, and they're pretty well known within Red Hat, but they also, they've done a podcast together for years. And it's really good a lot of the time. But I was on it a couple of times. And one of the times I was talking about .NET and how I really like C# as a language. And they basically played me out. And it was really quite amusing, because this was before .NET was available on Linux. But I was basically at the time lamenting the fact that I can't do C# on Linux anymore. But now I can. So, all right. And also regarding .NET, I have to say, it is not in our upcoming roadmap, I would say. But I know the Move2Kube folks are working on that. So maybe we could leverage the kind of technology they're using within Tackle to help on those migrations as well. So that's something we're thinking. We're definitely thinking about. Cool. Yeah. I mean, the major reason I'm bringing it up is, right.
It's like, you know, the two languages, C# and Java, are neck and neck in most enterprises, right? So you're going to see a lot of C#. You're also, in many enterprises, going to have both, right? And being able to have a tool chain that migrates either is probably going to be helpful. But I think that's, should we wrap the show? I mean, is there anything else you want to bring up? Yeah. Anybody else want to say anything here? Make sure we got all our i's dotted and t's crossed before we wrap up. Ducks. Ducks in a row. That kind of thing. No, good. All right. Well, thank you everyone for joining us. This has been very informative. I've learned some things, so I'm always happy about that. Up next on the channel is the Ask an OpenShift Admin office hours. Today we're discussing SSL certificates on OpenShift. We kind of had some issues with that topic getting streamed the other day, so we're going to tackle it, no pun intended, again. So yeah, stick around another hour or so; it will be live. And yeah, your .NET container should work, on OpenShift or OKD. So yeah. All right, folks. Thank you very much for joining. Thank you, Phil. Thank you, Ramon. Thank you, Langdon, as always. Stay tuned. Join us on our Discord. You can find us all there. And when in doubt, check out the calendar to see what's coming up next. And thanks, everybody. Yeah, without further ado, thank you all. Thanks for having us. Thank you.