Well, hello everybody. Sorry for the slight delay; as I said on Twitter, this was actually a Monday masquerading as a Tuesday, at least in the US, with Memorial Day yesterday. This is the Kube by Example insider show. I'm Langdon White (it's been a while); I'm a faculty member over at Boston University and I host this show, where we talk to insiders in the Kubernetes community (that was a lot to say fast) and try to give you some insights about what's going on in that community, so you can get a sense of what will be landing soon in the Kubernetes world. We find that with open source projects in particular, it's very helpful to get a sense of what people are actually doing versus what they think they're going to do; that gives a better sense of what might be happening soon.

I'd like to welcome my co-host today, Josh Berkus. If you want to quickly introduce yourself, that would be great.

Howdy, everybody. I'm Josh Berkus. I work in the Red Hat Open Source Program Office, where I support Kubernetes and other cloud-native projects, so I'm a little familiar with today's guests.

So let's actually have them introduce themselves. Ramon, you want to go first?

Hey, everyone. I'm Ramon, product manager for the Migration Toolkit for Applications at Red Hat, and also a maintainer within the Konveyor community.

John?

Hey. John Matthews, based out of Raleigh, North Carolina. Red Hatter.
I handle a lot of the engineering management for Konveyor.

Okay. And Savita?

Hey everyone, my name is Savita Raghunathan. I'm a senior software engineer at Red Hat and one of the Konveyor maintainers. I also contribute to Kubernetes, and I'm happy to be here as a guest again.

Yes, Savita goes back and forth; sometimes she's a guest, sometimes she's a host. The other thing that keeps throwing me, especially in the lead-up to this show, is that John Matthews is also the name of the department head of my computer science program when I was in college, so I keep seeing it and saying, "wait, what?" Kind of random.

But can you tell us a little bit, just for some context: what is Konveyor, exactly?

I guess I can take this one. Konveyor is a toolkit; it got started as an umbrella community focused on helping organizations with onboarding applications to Kubernetes. So: helping them not only bring their legacy, traditional workloads into Kubernetes, but also helping them leverage the capabilities of Kubernetes as a platform. That's the elevator pitch for Konveyor.

Right. Konveyor is actually pretty well covered by Kube by Example; there's a learning path, and we'll probably throw the link in the chat at some point soon, so there's a lot of material there. But one of the things I'd like to try to understand is: okay, what does that mean in practice? Say I've recently installed Kubernetes, and I have this roadmap that some consulting organization put together for me (speaking as a former person who used to do that), with a product roadmap where everything magically moves over to Kubernetes and everything will be perfect and hunky-dory. So what does it mean to try to migrate those applications? I presume it's not a five-second deal?

Definitely not. We're not trying to do magic here; magic doesn't exist, at least not just yet. We have AI on the horizon, we'll see in some years, but right now I don't think there is any magic. What we're trying to do with Konveyor is provide as many insights as possible for the architects leading these migration engagements, these migration initiatives: give them the insights to make informed decisions, so they have something tangible on which to base those decisions. And on the other hand, for the developers performing the changes in the source code, adapting those applications to run on the new platform, we provide some guidance and some degree of automation as well. We're not currently targeting full automation, because I don't think something like that exists just yet. It's about providing insights for the lead practitioners, the lead architects steering the migration initiative, and then guidance and automation for the ones performing the changes.

And how does one decide? I think one of the fundamental challenges with migrating into Kubernetes is that the architecture of an application that lives in a containerized world is different from the architecture of an application that lives on a big-iron VM. How do we know what to do? Do we need to re-architect the application? Do we need to move it? Do we just port it somehow? What do we know to do there?
Yeah, the value we provide with the tooling we have is that we're able to find anti-patterns for containerization in the application. We provide very precise data on what will be required for the application to run in containers, and then it's up to the organization, these architects, to decide whether they want to re-architect for whatever business reason, or just port the application to the cloud with the least amount of changes required. What we're trying to do is empower these individuals to make informed decisions from actual data coming from the applications: we find the anti-patterns, and it's up to you to decide whether it's worth re-architecting the whole thing, or whether you'd prefer to keep it as it is and focus your resources and your budget on something else.

So what are some of these anti-patterns? What are some reasons why somebody...?

For example, one big thing is configuration management; application lifecycle management in general is a very interesting topic to take into account when you're containerizing and bringing applications into Kubernetes. So we try to find anti-patterns in configuration management, like loading configuration files from the file system: all the stuff that needs to be taken into account when you're containerizing the application, given the de facto default configuration model applications have in Kubernetes, with ConfigMaps and Secrets. Maybe there's a practical way to fix it without that many changes, or maybe changing something like that will have a huge impact on the application architecture. It's about surfacing all these things so you can make those decisions.

So you're saying one of the anti-patterns is people have lifecycle management and configuration set up with, say, Puppet, and Puppet is deploying applications and changing them in place. What's the problem there?

The problem would be, for example, old-school applications that have configuration embedded within the application itself, or an old pattern for enterprise applications of having application servers deployed on multiple hosts, all of them accessing a common NFS share with all the configuration files. Those things will need to be tweaked somehow for the application to run properly within containers. Other things would be clustering, for example: maybe your application has some custom clustering mechanism, maybe at the cache level, using multicast or anything like that that isn't compatible with Kubernetes. Those are things you need to take into account when bringing those applications into containers. Also things like: my application is writing to the file system, maybe using that file system as a means to integrate with other applications across some business pipeline. That type of stuff is what we try to detect, alongside some other modernization concerns, like upgrading from a certain app server to another one that is more suitable for the cloud. We cover that too, because the analysis engine we use has some tradition in that space, and we've inherited that in the Konveyor project as well.

Okay, so stuff like, for that matter, using a Unix socket, for example.

For example, yeah.

It's funny: we have all these relatively old-school workarounds that you can manage inside a virtual machine, and then you move it into any kind of containerized environment and all sorts of stuff starts falling apart. I find it really similar to the old days, when there were a lot of migrations to the cloud and a lot of security concerns: "how do we migrate from our data center to the cloud?", worrying about open network access. Most of the time, what I found as a consultant was: hey, once you moved it to the cloud, lots of stuff broke, because the cloud was so much more secure than your data center, which had 5,000 open ports because people had punched ports through for some communication protocol over the years, and it had gotten lost and just kept happening. So when you're looking at these migrations, a lot of the time people don't even realize they're using features that aren't actually all that secure even on a virtual machine, and you find these things when you're trying to migrate.

Well, also, some practices change, right? Because, true, if you're assuming all of your applications are going to live on the same physical machine, then sharing information via configuration files and Unix sockets is actually a more secure approach, right?
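To make the configuration discussion above concrete, here is a minimal sketch; this is illustrative only, not Konveyor code, and the class, method, and property names are made up. The first method shows the legacy pattern of loading properties from a hard-coded shared mount (a path that usually doesn't exist inside a container, and whose in-place edits are lost on pod restart), and the second shows the Kubernetes-friendly alternative of reading from the environment, which a ConfigMap or Secret can populate via the pod spec.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.Properties;

// Hypothetical names for illustration; not part of Konveyor.
class ConfigStyles {

    // Anti-pattern: configuration loaded from a hard-coded path on a shared
    // filesystem (e.g. an NFS mount that every app-server host sees).
    static Properties legacyLoad(Path sharedMount) throws Exception {
        Properties props = new Properties();
        props.load(Files.newInputStream(sharedMount.resolve("app.properties")));
        return props;
    }

    // Kubernetes-friendly alternative: read from the environment, which a
    // ConfigMap or Secret can populate via the pod spec.
    static String databaseUrl(Map<String, String> env) {
        return env.getOrDefault("DATABASE_URL", "jdbc:h2:mem:dev");
    }

    public static void main(String[] args) {
        // Falls back to the in-memory default when DATABASE_URL is unset.
        System.out.println(databaseUrl(System.getenv()));
    }
}
```

Passing the environment map as a parameter rather than calling `System.getenv()` inside the method also keeps the code testable.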
Only if it's a single physical machine, right?

Right. Once it's VMs, though, it starts getting messy.

Yeah, and as soon as you move into the cloud, period, you suddenly can't do that anymore; you have to find a new way to secure everything.

Right. So I want to change tack a little bit. Y'all recently announced version 0.1 of Konveyor, and I wanted to ask: what's in version 0.1? You seem to be starting with an unambitiously low number for this release, so I wanted to see if you could talk a little bit about that.

Mm-hmm, I can talk a little bit about the context. Some of it was that we're trying to set the right precedent. We didn't want to come out too aggressive, have a 1.0, and then all of a sudden realize in six months that the APIs have drastically changed. A lot of this goes back to what Ramon was mentioning: we inherited some technologies that have been around for several years, a project called Windup that does Java analysis and other pieces, so we've had a lot of things baked for a while. But when it comes down to the platform and the APIs, we didn't feel confident that they weren't going to change at all. So we really wanted to come out with a 0.1 so we had some flexibility on the API surface. That's essentially where we are now, especially as we start looking into future integrations: there's work going on where IBM Research has a lot of cool things we're looking to leverage and bring in, and we're not sure what the right API surface is to do that with. So the 0.1 signals to the community that there's stuff here, but with the APIs we're still trying to figure out the right surface.

So do we have new folks from IBM involved in the project now?
There are a few efforts that have been going on for probably about two years with IBM Research. Some of it is on the AI side; some is work on moving to Kube; some is test generation; some is taking a monolith, trying to break it out, and using AI to find the different pieces. A lot of experiments are going into what we now call the Konveyor ecosystem. So there's Konveyor itself, which is the platform, and then there's the Konveyor ecosystem, which is a lot of experiments; as they mature, they'll get plugged back into the platform.

Yeah, I noticed today, and I don't think I'd seen it before, that IBM Research is actually a direct sponsor of the project, which was interesting: trying to use those AI techniques to approach this. I think that leads us a little bit into the structure of Konveyor. I'm going to put this loosely, but it's sort of like an overall application which then has a bunch of plug-ins that do the various types of work. Is that a fair assessment, or do you refer to it that way? I know it's a bunch of different tools, and then one to rule them all, in case you had a one ring, for example. Does that make sense, or can you elaborate on it?

I think I can do this one. I would say the core of the user experience is what we call the hub, the Konveyor hub, and the hub is built around an application-portfolio-centric approach. Basically, everything emanates from the application portfolio, what we call the application inventory, and then we have a series of add-ons that provide additional value, surfacing information about the application portfolio or enhancing it in some way. Right now, for example, we have the analysis add-on, which can analyze your application source code and binaries, provide reports on what needs to be done, and also surface information about the technology stack of each application. We have the assessment module, which gives you a high-level overview of all the different concerns that need to be taken into account when you're trying to containerize and modernize your application; it's a questionnaire-driven assessment. And then we're working on new add-ons for the future: things like migration-wave management, things like test generation; I think John already mentioned that. It's about keeping everything focused on the application inventory, because that's what we're targeting: we're working with an application inventory, and then we have different pluggable pieces that either surface information about it or perform a certain action against it.

Yeah, that makes sense. So we actually have a question from the audience: who are you seeing contributing to the project or getting involved, other than IBM and Red Hat? What other communities are you seeing get involved? Savita, do you want to take this one?

Oh, sure.
Yeah, we're seeing a lot of engagement from people pulling down the project, and we've recently had a few new participants join our community meetings. It's not so much company by company; we're seeing individuals from various companies and consultancies, and we also have someone creating a customization of the existing assessment module right now; I think they'll be presenting in one of the community meetings soon. So we are seeing engagement, and there are some collaborations happening with other companies, too, around contributions to the language analyzer. The new analyzer we're developing is plug-in based, so you can bring your own languages and plug into the analyzer, and that provides support for that language. For example, if you have .NET or Python, and it has a language server, it can be plugged into our analyzer engine, and that gives you all the details you'd expect. It's also powered by rules, so you need rules for that engine to work really well. We're seeing engagement on all of those things from other places, other companies. For example, I'm just going to call out Ramon and say that I know he was talking with some folks at Microsoft about that; sorry, I'm throwing you under the bus right now.

Well, yes, Microsoft has been contributing on the rules side of things in the analyzer. They've been contributing rules to the engine (the former engine, the one we inherited) for several Azure flavors. We would love to have more involvement on their side, but we're working on that; we're reaching out to everyone. One thing to clarify: Konveyor has become a CNCF sandbox project.
It's definitely not a marketing gimmick from Red Hat and IBM. We want this thing to become a true open source community, and what we're trying to do is engage all the vendors, GSIs, and organizations out there. We're still a young project; we became a sandbox project, I think we announced it in October or November, and maybe we got the certification before that. So it's taking time. There were some vendors saying, "hey, this thing you're doing is very interesting, but since it's still your thing, we're kind of frenemies here, so we'd prefer to wait until it's owned by the CNCF," so that it becomes community-driven and we can ensure it's not a Red Hat thing. I can assure you it isn't anymore. What we're trying to do is basically define the standards for Kubernetes adoption.

I want to add a little more to what Ramon said. When I was a platform engineer, there was no one project that actually helped. I did a lot of Java modernization; I helped people do Java modernization, and there was no easy way: either you rewrite the application, or... and the developers are always busy. So this time at KubeCon, KubeCon EU in Amsterdam, we had a kiosk for Konveyor; thanks to CNCF for having us there. There's a project pavilion where all the CNCF projects get to have a kiosk and talk about their project, and this time we had one and were able to spread awareness about the project. There were so many folks who were genuinely interested in what we're doing right now; they're in need of it. They were like, "oh, this is Konveyor, this is a new project," and every project around that area has something to do with the infrastructure, like app definition, application building, infrastructure management, something of that kind, and they were like, "oh, tell me about your project; we saw your demo going through the loop, tell us more." It was really cool to see so many people going through this moment where they have these applications and they want to modernize them. It was really nice to see that engagement from folks at KubeCon, and we're hoping next time we'll have even more engagement, create more awareness, and get more people contributing to the project.

I actually remember, when you all were applying to the CNCF, there was some talk about this being a project that would be attractive to independent consultants, since those are often involved in migration efforts.

And I guess we put that to the test at KubeCon. There were many consulting organizations that approached us; some of them were saying, "hey, we're already using this thing, this is so cool," and some were like, "we weren't aware of something like this existing out there; we're about to get started as soon as we can." So it feels like we're in a space where there wasn't much before. I mean, there are other migration tools out there, but they create some degree of lock-in to a certain vendor: each vendor has its own migration tools, and the resulting scenario creates some degree of lock-in with proprietary technologies. What we're trying to do with Konveyor is keep it as vendor-free as possible, as upstream as possible. If you want to use the capabilities of a particular distribution of Kubernetes, so be it, but the tool itself will remain generic in that space and won't get into that.
Yeah, we got a very good reception from these consulting organizations, GSIs, and so on.

I just want to interrupt; while you had a question, I've got something I was wondering about, which is: right now the application assessment is questionnaire-driven. Are there plans to eventually add plug-ins so that some of this assessment can be code-scan driven for various application platforms? That is, if somebody's using Spring, or Django, or something else, could there be a plug-in that scans the application and tells you everything you're going to need to change?

Oh yeah, that's what we do in the analysis module; that's exactly what the analysis module is. So: the assessment is questionnaire-driven and high level. It's like the intro discussion with certain stakeholders within the organization to try to understand the application landscape, and by landscape we mean the application architecture, lifecycle management, technology stack, processes, licensing, everything that needs to be taken into account when you're containerizing and modernizing the application. That's the assessment: a rough idea of what will be required. Then, once you have that idea and want to dig deeper, you run the analysis. What the analysis does is find anti-patterns in the source code that might prevent the application from running on the target platform you've selected. The analysis engine is, in essence, a rules engine, and the rules contain those anti-patterns, the issues we're looking for. We run the analysis against the application and find issues, and those issues also come with what we call story points, which give you an idea of the effort required to perform the changes depicted in those issues. So we go down to the source code level, but right now our engine only supports Java applications, so we're working on a next-generation analysis engine that will have support for any language out there that has a language server.

I was going to comment on that, quickly. It's interesting that you're using language servers; I think Microsoft invented them, and they've been such a huge benefit as a generic way of better understanding various languages. Visual Studio was a driving force, but there have been all these different, interesting methods for integrating with them. Where does the language server come into play for Konveyor? I understand loosely, but in a little more detail: if I decided I wanted to sit down and write a Haskell plug-in for Konveyor, and there's a language server for it (I actually have no idea if there is), where does it fit in? Is it just one aspect, or can I feed a large amount of the analysis into a tool like that?

Yes, that's new work that's going on right now.
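As a rough illustration of the rules-engine idea described above, the toy sketch below pairs each anti-pattern (reduced to a simple regex for brevity) with a story-point estimate and sums the estimated effort for the issues it finds. Every name here is hypothetical; the real Konveyor engine matches rules via language servers and far richer conditions, not plain regexes.

```java
import java.util.List;
import java.util.regex.Pattern;

// Toy stand-in for a migration rules engine; names are made up.
class ToyAnalyzer {

    // A "rule" couples an anti-pattern with an effort estimate in story points.
    record Rule(String id, Pattern antiPattern, int storyPoints) {}

    static final List<Rule> RULES = List.of(
        new Rule("fs-config", Pattern.compile("new FileInputStream"), 3),
        new Rule("unix-socket", Pattern.compile("AF_UNIX|UnixDomainSocket"), 5)
    );

    // Scan one source file and total the estimated effort of matched rules.
    static int estimateEffort(String source) {
        int total = 0;
        for (Rule rule : RULES) {
            if (rule.antiPattern().matcher(source).find()) {
                total += rule.storyPoints();
            }
        }
        return total;
    }

    public static void main(String[] args) {
        String src = "InputStream in = new FileInputStream(\"/mnt/nfs/app.properties\");";
        System.out.println("estimated story points: " + estimateEffort(src));
    }
}
```

The point of the sketch is the shape of the data, not the matching: issues come out annotated with an effort estimate, which is what lets architects budget a migration from the report.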
We're trying to get an early release out, probably in about a month or two. We have a happy path that uses the Language Server Protocol, and it's work that's been going on for, call it, about six months. The way we're doing it is we're creating a new component called analyzer-lsp. Think of analyzer-lsp, as Ramon was mentioning, as a rules engine: you have the rules engine, we create the syntax (this is going to be in the YAML that describes the different things it's trying to find), and then there's a language provider component. The language provider is the thing that can leverage a language server implementation, or something else. We have one for Java, which I believe is based on Eclipse, and we're looking into one right now for .NET that would use OmniSharp. But if you wanted one for Haskell, you would take the existing language server and wrap it in a small provider, and that integrates into our engine. And from there, the nice thing is you get all the other integrations with it: we have the application inventory, which is essentially the central thing you'd be interacting with, and it would hold the violations, the issues, all that kind of information. So essentially you take your small language server component, stitch it into analyzer-lsp, and then what you're responsible for is the translation between what we express as a rule and what an LSP query is; that's essentially the piece you would write. We chose that approach because we didn't want to have to write our own AST parsers and get into things that would be a huge investment. The theory was: we invest most of our time in creating the rules engine, we have a small shim layer that does the translation from the rules engine to LSP syntax, and with that we were hoping we could leverage a lot of things in the community.

Yeah, I think that's really interesting, and I like the plug-in model for thinking about some of these problems. One thing, and maybe this is part of what you're thinking about here, is that in a sense a language has more differences than just the language itself: the way you approach developing Java software is quite different from the way you approach developing Python software, even if all other aspects are equal and the goal is exactly the same. Even across compiled languages, C# versus Java actually have some pretty significant architectural differences if you're deeply into those languages. So that's a really good thing to be careful of. I did want to ask a little more, because I'm a little biased: the AI tooling you've been experimenting with to try to solve some of these problems, are you looking at AI tooling around identifying how to do a migration, or something else? Or, I'm not sure if you can talk about that.

Yeah, so this is very early stages. To the point that I've spent probably the past six weeks, almost all my free time, going deep into this space, going through courses and all kinds of things, getting into deep learning. The pattern I think we have available to us is this: set AI aside and just look at what Konveyor is doing right now. Konveyor is going through and helping people understand issues, and it works on a pattern where, once you find issues, you see the same things repeated across multiple flavors of applications; it's essentially the same problem that people have to scale out. When we look at all of that, it lends itself to the fact that we already have a learning loop built into the system. We have folks doing the hard piece initially, finding the anti-patterns for the rules engine and other things, and then you get to the point where you scale out to actually making the source code changes. The magic of how this works is something people are already doing manually: when you go through an engagement, an architect figures out the first one, and by the time it scales out to the developers making changes, it's almost repeatable changes that they know about. That's the first phase where I think we could tap into AI. I don't think AI is going to do something magic where it looks at two different kinds of applications and immediately knows all the right changes, unless someone trained it and showed it what's happening. And I think we have an ability to play with this, because built into the system, someone's already doing that. We're already going through and looking at the source code; we know what the problems are, we're flagging them through the rules engine, and then a developer takes that knowledge, makes their source code changes, commits them back to the repo, and does that repetitive thing on very similar problems as they scale out to hundreds of applications. That's where I'm looking to see what we can do to leverage that against some kind of model.

The next piece that goes into this: I'm fascinated with Hugging Face. The range of things you can do with open source models is amazing, and seeing the speed at which the innovations are coming through, it feels like there's a piece there. If we can leverage the open source side, leveraging models from Hugging Face, I think we can do something on the NLP side to bring it back in, so we can find the changes that are going through: can we mine them, extract them, and then be able to apply them automatically? This is all very aggressive and very new, but it's something we're starting to look into and see what we can play with.

One thing you were saying, and I'm not sure if this is quite where you were going, but I was thinking about it: it's almost like, if you can keep Konveyor involved through the full migration process, you can capture what people did when they migrated, as a feedback loop, right?

A hundred percent, exactly. And we have that now, because we have the repository; we have access to all the source code, so we're constantly monitoring it, we run our analysis scans, we have the rules engine built in, so we have all those pieces. The thing we're lacking is: how can we make it even easier for developers to go ahead and apply the changes?
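The "learn from repeated fixes" idea John describes can be sketched as a simple tally: given (rule, fix) pairs observed in past commits, count how often the same fix recurs for the same rule; highly recurrent pairs would be candidates for suggested remediation. This is purely illustrative with made-up names, and, as the team stresses elsewhere in this conversation, Konveyor does not currently collect any such data.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of mining repeated remediations; not Konveyor code.
class FixMiner {

    // One observation: a rule violation and a summary of the fix applied for it.
    record ObservedFix(String ruleId, String fixSummary) {}

    // Count how often each (rule, fix) pair recurs across a commit history.
    static Map<ObservedFix, Integer> countRecurringFixes(List<ObservedFix> history) {
        Map<ObservedFix, Integer> counts = new LinkedHashMap<>();
        for (ObservedFix fix : history) {
            counts.merge(fix, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        var history = List.of(
            new ObservedFix("fs-config", "replace file read with env var"),
            new ObservedFix("fs-config", "replace file read with env var"),
            new ObservedFix("unix-socket", "switch to TCP service")
        );
        countRecurringFixes(history).forEach((fix, n) ->
            System.out.println(fix.ruleId() + " -> " + fix.fixSummary() + " (" + n + "x)"));
    }
}
```

In this framing, the "model" can start as nothing more than frequency counts; anything learned, statistical or otherwise, would layer on top of the same observation stream.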
Yeah, so sorry quick somebody commented another shout-out to hugging face. We actually use hugging face in as like a recommended tool in a bunch of our ML classes because as well each says, you know, it's kind of democratizing generative AI, you know, so definitely check it out. I threw the link in In the chat in case people are interested, but yeah, I think I Sorry Okay, now, sorry. I was gonna get off the left field because I'm like wait is generative AI actually the right approach for this Um The maybe not right. I mean, but it's kind of it's Although hugging face actually does a lot. It does other kinds of AI besides generate right? Yeah, I think the basic concept what we're doing here is that we're already scan the source code and we have rules engine right now That's essentially static code analysis the second phase of this would be that After somebody has already done the work We have the commits of what they already did their source code changes in the repo Can we take that out and mine it and then start using that as the next phase when somebody goes to do a similar kind of problem that? So it's not going to be something that would be like groundbreaking where it can learn on its own and start recommending things It's never seen it's more of a case if we've already seen these commits that come in Can we learn from that and then apply that towards future ones? There's something I would like to add just before anyone gets nervous Right now. 
We are not reporting back those insights. It's not like we are stealing insights from your migrations or anything like that. Any organization out there can rest assured that we are not gathering or mining any data from their application source code. We're just figuring things out for the long-term future, but we're definitely not there just yet, and if something like that were to happen, of course, we would need someone to agree to that information being fed back to the main model. So far we're not doing anything like that, because I know, with security, how things are, and how nervous organizations might be about details of their applications being revealed to the larger open source community. So right now everything is fully secure: there is no telemetry going back telling anyone anything about your application portfolio. Yeah, I was actually going to ask a question that would lead to that answer if you hadn't brought it up, but yeah, that is a common concern. Yeah, one of the things: it actually is a security problem to even understand the layout of the servers in your environment, right? So we want to be really cautious about how things like migrations happen and how applications are deployed in general. Even though it is a good piece of shared information, you've got to be really cautious about how much detail you get into about that particular instance, from a security perspective. I was gonna go a little bit left field here, which you haven't really mentioned, but have you, or are you...
You have a lot of stuff in this space that you need to do, right, but have you started to think about some of the more unusual or new aspects of Kubernetes computing in general, but also cloud native computing, or event-driven architectures, etc., and thought about, when you're doing these migrations, identifying opportunities for things like Wasm or serverless functions? So you're not just moving a website over; you might also be saying, hey, these back-end components or this API layer may be better suited as a set of serverless functions, or this whole UI might perform better if you did it as a Wasm design or something. Have you started looking at that stuff yet? There are some things in the future, like service mesh, things like that, distributed tracing; there are some discussions in that space. But I gotta tell you, the main appetite out there is just: what do I do with my legacy portfolio? That might not be the most exciting for all the techno geeks that like this kind of stuff, but for large organizations, their main concern is: hey, I have a gazillion applications in here that are running on legacy infrastructure, on legacy servers, on legacy technologies, that I don't know what to do with anymore. I have like five different operating models across my portfolio, and I would like to unify things. Maybe I have my greenfield applications already running on Kubernetes, and I'm experiencing the advantages of that first hand. How could I get the same stuff, or at least something similar, for these other 800, 1,000, 5,000 applications that I have in my portfolio? So right now what we're doing is basically satisfying the demand from the field: figuring out what to do with all of these.
Traditional applications still running on traditional infrastructure, right. That actually leads to a follow-up on Langdon's question right here, which is: obviously there's gonna be a lot of "hey, we want to know the minimal changes required to make this run on Kubernetes," right? Are you getting anybody among the users saying, hey, I also want something to recommend changes to take advantage of the new platform? Because obviously if you're doing minimal changes, you're not actually gaining anything by moving to Kubernetes, except not needing to maintain the old infrastructure. The gain all comes from, hey, can I take this thing that, say, used to be a code loop polling a socket, and turn it into a listener on an event stream instead, right? Or take this thing that used to be a custom SFTP call and turn it into an OpenAPI interface. Is anybody asking about that at this stage, or are they all too wrapped up in just, you know, how do we get off of our old homebrew Cloud Foundry, whatever-it-is platform that they don't want to maintain anymore? Yeah, we get requests on those topics from time to time, but right now I gotta tell you that the main focus, and the main request that we get, is: what on earth do I do with this thing that I have, with these
Gazillion applications that I have? I want to have this unified operating model. As we move forward, as we progress, we're getting some of these requests, but we need to bear in mind that for the Konveyor project itself, the engineering team currently contributing to it are experts in building enterprise-class tools, but they're not that much of an expert in migrating applications. That's why we want to set up this migration experience user group: for people from consulting organizations and from vendors that have been involved in this type of modernization engagement to share raw knowledge that can then be translated into rules that our rules engine can use, and also to set the priorities on what is the next best thing we need to take care of in this migration and modernization space. So far we've discussed things like service mesh, for example, and that kind of relates to what you were saying, Josh. If you're talking about service mesh, then you have to get rid of things like circuit-breaking source code in your application, anything related to routing, logging, distributed tracing; all that stuff needs to go away. So this will be the next step: once my application is in here, what should I do next?
So with this kind of scenario, this sort of migration path, we are trying to tackle the more advanced users: those who are maybe already doing microservices or modern distributed applications, maybe on other platforms or clouds, Cloud Foundry maybe, and who want to take advantage of Kubernetes. They're thinking about, again, service mesh, or enhancing their distributed tracing: I want to go from OpenTracing to OpenTelemetry, all that stuff. So we're starting to think about these more advanced users, to have something that provides value for everyone out there in the field. But we need to get priorities from the field, and that's why we are trying to set up this user group and trying to involve as many consulting organizations, vendors, and even, you know, end-user organizations to tell us what their pains are in the end. Well, and I was gonna say, you know, one of the good responses in an open source community, right, is "hey, we're taking pull requests." What I like about trying to do this experience group, right, is, you know, at least have that group start to focus on how do I make a contribution if I want to do that thing, right?
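For a concrete feel of one of those advanced-user migrations, an OpenTracing-to-OpenTelemetry move is partly mechanical: some call sites map roughly one-to-one and could be rewritten by a rule, while everything else gets flagged for a human. A hedged sketch, with a deliberately tiny, illustrative mapping table (a real migration would lean on the official OpenTelemetry shim and a far larger mapping):

```python
# Tiny illustrative mapping of OpenTracing idioms to rough OpenTelemetry
# equivalents. Deliberately incomplete: anything the table cannot rewrite
# is reported for a human to handle instead of being guessed at.
MAPPING = {
    "opentracing.global_tracer()": "trace.get_tracer(__name__)",
    "tracer.start_active_span(": "tracer.start_as_current_span(",
}

def rewrite(source: str):
    """Apply known substitutions; return (new_source, lines_needing_review)."""
    out, manual = [], []
    for lineno, line in enumerate(source.splitlines(), start=1):
        new_line = line
        for old, new in MAPPING.items():
            new_line = new_line.replace(old, new)
        if "opentracing" in new_line:  # something we didn't know how to map
            manual.append(lineno)
        out.append(new_line)
    return "\n".join(out), manual

src = "tracer = opentracing.global_tracer()\nwith tracer.start_active_span('op'):\n    pass"
new_src, todo = rewrite(src)
print(new_src.splitlines()[0])
# prints: tracer = trace.get_tracer(__name__)
```

The split between "rewrite automatically" and "flag for review" is the same shape as the rules-plus-developer loop described earlier in the conversation.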
Whatever it is, you know, as well as kind of gathering requirements. But is there also a way to say, hey, if you are interested in doing these things, here's where you can plug them in, so that you can go off and do them yourself, and that's totally fine? Well, you know, we'll take the pull request, but you know that your team isn't focused on that, because you're focused on the big 800-pound-gorilla problem rather than the smaller monkey problems that are off to the side. You know, because I think you may get random contributions there, because people will be interested in solving their individual use case. So I really like the idea that you're trying to establish an organization... you know, not an organization, but like a group, or a focus group, or a discussion section, or whatever you want to call it, around that topic. Yeah, I would say someone telling us what they need, or how they solved something, could be as valuable as some source code, some modules. So we are building a platform to somehow automate knowledge and knowledge gathering, but we still need that knowledge in a way that can be used by this knowledge automation engine, and the knowledge itself.
It's the most valuable part of what we're building. Right, right. Yeah, it's really interesting. You know, I think this is kind of one of those long-running problems. We even used to see this in development, trying to migrate from, like, Rails 6 to Rails 7. There are a lot of challenges around doing these migrations, and I think it's dangerous to simplify them into things like lift-and-shift versus rearchitect versus whatever. A lot of the time it's per application what you really want to do, and if we can give the end user the opportunity to really consider what they want to do for any individual scenario, it makes a lot of difference. So I know we have a lot of other things we wanted to talk about. I will ask our customary question, though, because I do want to hit it real quick. Ramon and John, because we've heard the answer from Savita: what brought you to open source, and/or what brought you to Kubernetes in the first place? And then after that we should probably wrap, because we're starting to run out of time. John, I'm gonna hit you first. Mine's easy: I was an undergrad and a professor needed help with something; he introduced me to Beowulf clusters, and that's the first time I got to install Linux at home. I was amazed by it, and then I just fell in love with the whole thing, where if something was broken I could try to learn it. Maybe I couldn't fix it, but once in a while I got lucky, and that created a spark inside of me. That was a long time ago, but it stayed. That's awesome. And what about you, Ramon?
In my case, I would say it's kind of different. I come from a consulting background, so I've been more on the enterprise side of things, but I'm working at Red Hat, and that's part of our DNA. That kind of aligns with what I was discussing before about this user group: trying to bring this knowledge that has been lying on the consulting and enterprise side of things and make it an open source asset that others can benefit from. I think that's a pretty interesting angle that I haven't seen exploited in open source before. We're trying to transform knowledge that has been in the enterprise space for a while into an open source asset that can be shared across multiple organizations. So I'm in love with the open source model. Nice. I also wanted to point out that I interviewed, I think, both of you, Ramon and John, and maybe somebody else, on The Level Up Hour multiple years ago now. So if you want to go see how far the project has come, you can go watch that wicked old episode. I was trying to find it; I'm pretty sure it's from 2020. Maybe you want to check it out. But thank you so much for your time. We ran out of time a little bit today from some technical difficulties, and I think that means we're gonna have to have you back, you know, in a few months or whatever, and we're gonna expect all the language server stuff to be done, some of those AI components, of course some integration, and we'll just have this little checklist that we'll expect you to show us when you come back. Hopefully, hopefully, yes. Yeah, sounds good. Okay, well, thanks everybody. Thank you so much. Thanks for having us. Thank you. Bye, everyone. Have a good day. Thank you.