Hey everyone, welcome to the Konveyor meetup. We're super excited to share this new tool with you. Just some housekeeping rules. If you have questions, please put them in the chat. We're gonna get to them at the end of the session so that the presenters have enough time to get through all the material. And if you have a question that we don't get to, feel free to go to the Konveyor Slack channel and just ask there. I'll put a link to that Slack channel in the chat as well, so that you have access to it. And with that, I'm gonna pass it on to John. Thanks, Jonathan. And thanks everyone for listening today. So I'm John Rofrano. I'm a Senior Technical Staff Member at IBM Research. I'm joined by my colleague, Rahul Krishna, who is a Research Staff Member. And we're gonna talk about a project that we've been working on in the Konveyor community called Data Gravity Insights. So here's what we're gonna discuss. I'm gonna talk briefly about the problem we're solving. We're gonna look at a broad overview of DGI, Data Gravity Insights. And then Rahul is gonna take you through a deep dive. Hopefully not too deep, but he's gonna go deep. We're gonna open the hood and show you what's inside, because we want you to help us build this, right? This is not all built. He's gonna do a demonstration of DGI and then we'll come back and talk about some future work that the community can help us with. So, application modernization, right? Making little ones out of big ones. Taking this monolith, which is largely organized by technologies (front end, application, back end), not around business domains, and breaking it up into microservices that are business driven. So why would I wanna do this? I've got this monolith, this one thing that works great. Why would I break it into a lot of little things and make a headache for myself?
Well, your primary goal should be this: I've got 50 programmers working on this monolith, and I wanna have 10 teams of five programmers working much faster around business domains. So how can I figure out what the business domains in the monolith are that I can wrap a small team around? They could be autonomous and they could move quickly, right? That's really the thing: moving faster, moving to market faster. So if you look at the state of the art today, any of the tools that will help you turn your monolith into microservices, they scan the code. They scan the code, they find all these connections, and you get some graph like this: a whole bunch of little things, all connected, lots of lines, and you're trying to figure out, what is the best way to slice between them? Usually they're looking at things like how often the pieces call each other to understand where to partition. But just the dependency between the clusters isn't enough. We need to understand: where are the big gas giants lurking in your application? What are those objects that everything gravitates to? Because those are probably the center of a microservice, right? So we wanna understand these heavy objects. That's what we call data gravity insights. We wanna understand how we find these heavy objects that may be the center point of a microservice, with all these other things kind of orbiting around them. So we take a little different approach. What's the most important thing to the customer? The data that they persist. It was important enough that they persisted it in a database. Hello, the data is kind of important. You can't just look at the code. So we took the approach of: yeah, the code graph, the application call graph, important stuff. But what about the schema? What about the relationships in the schema? And then you take the third leg of that and say, what about the transactions between the code and the data?
All of that has to be taken into account. So Data Gravity Insights takes a holistic approach, right? Look at the code, look at the data. How is the code accessing the data? When is it accessing it? So you wanna understand and get a holistic view of your application and how it's put together. So if I look at the call graph, right? This is from the famous DayTrader application. I've got account data beans and quote data beans and market summary beans and stock beans, all sorts of beans, lots of beans in here, and there's a call graph between them. Then I look at the schema. Nobody's looking at the schema. I look at the schema, and I've got an account table and a quote table, and there are some foreign keys between, say, the holding table and the quote table. So now I've got a different view of the application where I can see foreign key relationships. I can see what tables have foreign keys into other tables. That's a whole bunch of relationships in the domain, right? If you wanna understand the business domain, look at the schema, because usually DBAs do a pretty good job of ignoring technology, which is, you know, front end and back end stuff; they're just dealing in the business domain. So you look at the schema. Then you overlay these views on top of each other, and now you can see, hey, I've got some calls being made at the code level that aren't represented in the schema. I've got some things done in the schema that maybe aren't represented in the code. And so I can see those paths, but I also wanna find those gas giants. I wanna find those heavy objects and then say, these look like the center of a microservice. And as I look at this partitioning, I can see: here are my APIs. All those red lines, those are the calls that cross partitions. And so this is how I have to build my API. The problem is that's just a 2D view of the world, like an X-ray, right? An X-ray's fine.
I can see broken bones and stuff, but I don't know what's going on behind all that white stuff. And that's a myopic view, I think, just that 2D flat plane. What we need is an MRI. I need to be able to take the code and turn it around and look under it, you know, pull these things apart, see who's really talking to who, right? Different filters, different ways of looking at the code. Extremely, extremely important to understanding all the different relationships in the code. So what I wanna do is kind of tilt that view and look under it and be able to see how those relationships are coming together. And we don't have this view yet, don't get too excited. We want you to help us build this view, but we think we have all the underpinnings, right? We've got all the stuff inside that we need to go build views like this. And we started to build them using some tools like Bloom, and Rahul is gonna show you that. So what we're trying to do is get this holistic view of the application, the data, the source code, the transactions between them, to understand who's talking to who, when I have to partition these things, and how I should partition. So just to go through some of the possibilities, and then we'll get into the technical stuff. Clearly, queries to run and understand the dependencies, those are things that we've already built. Triangulating the database, the code, dynamic calls and all that, very important. Right now we're just taking a static view of the world. It'd be nice to add a dynamic view of a call, you know, watch the application run, because it's important to understand: if this code calls this other code, well, does it call it once at startup? Does it call it a thousand times a second? That's a really different relationship. So it's important to understand the dynamics, and then find the data centrality and the code centrality. These are the important things in the code, these are the important objects in the data.
How do they relate to each other? So can I find classes that are accessing the data outside of that centrality? Now I've got distributed transactions, and what do I do about those? Do I refactor my data, do I refactor my code, or do I create a distributed transaction or do something like a saga pattern? So, very important to understand. Then, can I find these anchor classes, these entry points? You know, you view this graph with all these bubbles around it and you say, hey, this is a really important object. Look, everybody's pointing to it. Then you find out it's a servlet. It's the entry point to the system. Of course everybody has to come through it, but it's not an important business object. It's just a router, it's just a traffic cop. So can we annotate the class and say, okay, this one is an entry point? We'd do some annotation on the classes, which we don't have yet, which we hope to add, right? Hopefully with the community. And then, what about the framework being used? Say I know a little bit about the framework. I'm using Spring Boot, and what am I using? I'm using some model-view-controller. Now I can say, okay, can I label the classes? These are model classes. These are view classes. These are controller classes. That's gotta be important information when you're trying to figure out how to refactor this application. And then identifying things like utility classes. Again, I've got this one class, everybody points to it. And it's like, yeah, it's the log class. No, it's not the most important thing in the system. It's the least important thing in the system. It's a utility class. You just copy it into all the microservices. But it's important to understand that, and we've done some work to identify utility classes and say, okay, take all those little utility classes and get them out of my view. They're just clouding up the view. I wanna see the business objects. So what can you come up with?
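To make the utility-class idea concrete, here is a minimal sketch in plain Python. The heuristic (flag classes with high fan-in that call nothing themselves), the class names, and the threshold are all made up for illustration; DGI's actual detection logic may well differ.

```python
# Hypothetical heuristic for spotting "utility" classes in a call graph:
# a class like a logger that many other classes call, but that calls
# nothing itself (a leaf with high fan-in). Names/threshold are illustrative.
from collections import defaultdict

def utility_candidates(call_edges, min_fan_in=3):
    """call_edges: iterable of (caller, callee) class-name pairs.
    Returns classes whose distinct fan-in meets the threshold and
    that call no other class themselves."""
    callers = defaultdict(set)     # callee -> set of callers
    callees_of = defaultdict(set)  # caller -> set of callees
    for src, dst in call_edges:
        callers[dst].add(src)
        callees_of[src].add(dst)
    return {
        cls for cls, ins in callers.items()
        if len(ins) >= min_fan_in and not callees_of[cls]
    }

edges = [
    ("AccountBean", "Log"), ("QuoteBean", "Log"),
    ("HoldingBean", "Log"), ("AccountBean", "QuoteBean"),
]
print(utility_candidates(edges))  # {'Log'}
```

Everything points at `Log`, but it points at nothing, so it gets filtered out of the view rather than treated as a business object.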
I mean, this is what we really wanna do here today with the meetup: show you what we've done and say, come help us build more of this. We've got some foundational work done, but there are lots of possibilities, and we're hoping that you can help us create those possibilities. So with that, I'm gonna turn it over to Rahul. He is gonna do a technical deep dive. He's gonna give you the theory behind it and he's gonna do some hands-on demonstrations of what we have today in our Konveyor repo for DGI. So Rahul, wanna take it over? All right, thanks, J.R. So we'll do a quick deep dive. I've broken this down into two parts. First we look at Data Gravity Insights: a little closer look at what it's comprised of, how we build the graph, and how we can visualize some of the use cases J.R. mentioned. And then we look at Cargo, an approach that we built on top of DGI to partition monolithic applications into potential microservice recommendations. So this is the overview of DGI, right? We start with the source code and we package it into one of many formats, and then we extract three key relationships from the application: code relationships, schema relationships, and transactional relationships. Once we have these, we persist them in a graph database, and this permits us to use query languages like Cypher to look for interesting insights. So code-to-graph captures the static dependencies between the various methods, instructions, and classes that we have in the application. These dependencies we've categorized into call-return dependencies, data-flow dependencies, and heap allocation dependencies. In addition to that, we have schema-to-graph, which looks specifically at the relationships between the database tables and the columns in the database. A few examples could include foreign key relationships, among others.
And finally, we have transaction-to-graph, which looks at transactional CRUD operations between the source code and the database tables. These could be via transactional reads and writes and so on. And we populate the graph with this information to complete the view. So what does this give us, right? This enables us to analyze the source code dependencies. So we know which classes talk to which other classes, where the utility classes are, which classes have a lot of traffic, and so on. In addition, it gives us code-to-database dependencies, and this tells us how the source code interacts with external resources or persistent databases. In addition to this, we have database-to-database dependencies, which allow us to look at how the various tables in different databases relate to one another and what relationships they have. And finally, we would like to think of this as a continuous modernization approach, where we look at runtime statistics and operational traces and telemetry from tools like Jaeger and Instana. So the question here is, what can we do with this data? Here are some examples of what we can build. These include transactional scopes, looking at various data synchronization issues, and inspecting call and control dependencies. In addition, this allows us to look for potential RESTful service transformations. We can identify opportunities for code and data refactoring and maintenance, identify distributed transactions, and come up with remediation strategies to handle these distributed transactions as well as other synchronization issues across services. So how does DGI work? I wanna do a quick demo of how we would interact with DGI and how we can inspect the graph that we've built. We have a getting started guide on the Konveyor repository page that gives you detailed instructions on how to start using DGI for your application. It's available as a pip package.
So all you'll need to do is install DGI using pip, and then the rest of the instructions are here. They're pretty detailed. I'll just go over the commands themselves and what they do. So once you install the pip package, the command line tool is dgi, and I start with the help option, which should give us an overview of what the tool contains. So there are a few options that allow us to interact with the graph database, as well as some command line options like verbosity and other information. But the key component of DGI is a set of commands that helps us build this graph. There is c2g, which stands for code-to-graph, and this allows us to add the call-return dependencies, heap dependencies, and other things to the graph. We'll skip over partition for now. We have schema-to-graph, or s2g, which parses the SQL schema, typically from a DDL file, into the graph, and transaction-to-graph, or tx2g, which adds edges that denote the CRUD operations in the graph. And finally we have partition, and I'll do another deep dive in the next part of this talk about what this is, but at a very high level, partition is a command that runs an algorithm I'll discuss called Cargo, which enables us to identify potential partitioning strategies in the DGI graph. So to use DGI, once we've followed the getting started page and we have an application, we can call one of these sub-commands. I'm just gonna show one example, and this is code-to-graph; the help here should provide more details on what it does. Essentially, code-to-graph takes a directory that contains a lot of data that we've mined from the application. You can provide an abstraction level depending on what abstraction we wanna look at. This could be class, method, or full, which includes class, method, and instruction. And once we do that, it's going to take a while, but it's going to go through the program and start populating the Neo4j graph with a lot of dependencies.
So right now it's doing heap-carried dependencies, and this is gonna take a while because there are thousands of relationships to populate. So what I've done for the sake of this demo is I have a running example after running code-to-graph in DGI, and I'll show you how we can interact with it. So this is Neo4j Desktop. There is a graph database running underneath which has all the relationships that we're populating, and there are a couple of ways to interact with it. Today I'll walk you through Bloom. There's also the browser, which we can use to interact and run some queries. So Bloom is a graphical user interface that comes with Neo4j Desktop. And this is what it looks like; this is a very high-level overview. We can think of the data that we have in DGI in terms of perspectives. There is a class perspective, which looks at all the code dependencies, and there is a database perspective, which looks at the class dependencies as well as the SQL tables and the dependencies between the databases. So we can look into this one. So we have two types of nodes, the class node and the table node, and a number of relationships between all these nodes, like call-return dependencies and foreign key relationships and so on. In addition to this, we have a set of queries that we've created, and these are just starter queries. As the use cases evolve, we can write more complex queries. As an example, here is a query that we can use to identify data centrality. The search bar allows you to run the queries, and we can look for data centrality, and this should populate the graph that we see here with a number of relationships between the SQL table nodes that are shown in blue and the class nodes which are in gray. Bloom also allows us to add conditional rules to visualize these. So if you look at any of these tables, for example the QuoteEJB table, there should be a centrality score that indicates how central that entity is to the program.
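As an aside, a centrality score like the one just mentioned can be sketched in a few lines of plain Python. This is a toy degree-centrality computation over an edge list; the node names are illustrative, and the actual algorithm DGI computes inside Neo4j may be a different centrality measure entirely.

```python
# Minimal sketch of a degree-based centrality score, assuming the graph
# is just a list of (source, target) edges between classes and tables.
# Node names are made up; DGI's real centrality metric may differ.
from collections import Counter

def degree_centrality(edges):
    """Normalized degree: fraction of the other nodes each node touches."""
    nodes = {n for e in edges for n in e}
    deg = Counter()
    for src, dst in edges:
        deg[src] += 1
        deg[dst] += 1
    n = len(nodes)
    return {v: deg[v] / (n - 1) for v in nodes}

edges = [
    ("TradeAction", "quoteejb"), ("OrderBean", "quoteejb"),
    ("HoldingBean", "quoteejb"), ("HoldingBean", "holdingejb"),
]
scores = degree_centrality(edges)
# 'quoteejb' touches 3 of the 4 other nodes: the "gas giant" here
print(max(scores, key=scores.get))
```

The node with the highest score is exactly the kind of heavy object that shows up as a large bubble in the Bloom view.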
A higher value indicates it's more important, and lower values indicate that it's slightly less important. And there are rules that we can use to differentiate between the most important and the least important class. And in this view, we have an example where the larger database nodes are more central, the gas giants analogy, if you will, and the smaller nodes are less central to the application. And the edges between each of these indicate the transactional relationships in this view. Bloom allows us to dismiss other nodes and inspect only a few nodes if we choose to do so. And each database table has a set of properties associated with it, and so does every class. So for example, there is a centrality measure, the signature of the class, as well as whether the class is a bean, an entry point, a servlet, and other things. And each relationship indicates the nature of the transaction. So this is a transactional read: the class reads from the QuoteEJB table. It tells us the method that initiates this transactional read, as well as the action that initiated it. So this is just a quick overview of some of the options of DGI. In addition to looking at these, we can also inspect individual classes. To do that, I can take one example over here. This shows how the call-return dependencies exist between classes. So while this runs, let me go back to the slides and discuss how we can use DGI for some use cases. In addition to looking at data centrality and other factors, we can also use DGI to identify potential refactoring strategies. One such example is to identify strategies to decompose a monolithic application into a set of microservices. And to do that, we use DGI and built an algorithm called Cargo, which was presented at a conference quite recently. And Cargo attempts to take the DGI graph and identify microservice boundaries like we see here. And this is the overview of the approach.
I'll go into details on what each of these steps are. But in essence, we start with the DGI graph, which is the first step. Next, we identify snapshots, and I'll talk about what these are. And we apply an algorithm called context-sensitive label propagation, which comes along with DGI, to identify these microservice boundaries. So the first step is to build a program dependency graph. This is the graph that we have in DGI, and that's just the technical terminology for it. We build what is known as a context-sensitive program dependency graph. So if you look at DGI, every node has a context associated with it. Now, what a context is: it emulates dynamic interactions in the program. Because we do a static analysis, we really don't have runtime information, and context sensitivity is a way to impart that runtime information into the analysis. Without context sensitivity, we might miss some key interactions that only appear at runtime and not at static compile time. To give you a quick example of what this means, we have a quick example here; it's more of a pseudo-code with a few classes and interactions. We have two objects of type A as shown here, and both of these objects call A.foo and B.bar in the other classes. So what I'm gonna do is run through this program, and on the left, you'll see a context-insensitive graph being built, and on the right, we'll build a context-sensitive graph. And by the end of this quick run-through, we'll see the difference between a context-insensitive analysis and a context-sensitive analysis. So the first step is we allocate an object, A1, and it calls A.foo. Now in a context-insensitive graph, there is a call graph edge between main and A.foo, but on the right, in a context-sensitive analysis, it not only indicates that there is a call edge, but it also indicates which receiver object is instantiating that call edge.
As we walk through the program, we'll see that A.foo is called twice from two receiver objects, A1 and A2. In a context-insensitive analysis, this relationship is missed. And as we walk through the program, this becomes more of a problem in context-insensitive analysis, where we miss many, many more relationships than there actually are. But on the right, we'll see that context-sensitive analysis includes all the relationships between two methods, and it also highlights which receiver object instantiated the call. By isolating these context snapshots, we can look closely into different dynamic states of the program. Here is the graph again, for example. It's important to note that this graph, although complete, represents all possible dynamic states of the program. But at any given time in a single-threaded application, we can only be in one state. So A.foo can either be called by A1 or A2, but not by both simultaneously. Now, to capture this fact, we extract snapshots. A snapshot is a small subgraph capturing one dynamic state of the program, which we can derive from the context-sensitive graph. So in this example, this is the call trace when the receiver object is A1, and the second snapshot is the call trace when the receiver object is A2, and so on. So for every receiver object in our call graph, we get a small subgraph that indicates a dynamic state of the program. Along the same lines, we can also extract snapshots that have to do with database transactions. Since the DGI graph has transactional relationships, we can extract subgraphs from the DGI graph which indicate interactions between the database tables and the classes in the program. And once we do this, we have a set of discrete snapshots, which we can then use to apply an algorithm called label propagation, which tries to identify communities in the graph.
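The A1/A2 walkthrough above can be sketched in code. Here the context-sensitive graph is modeled as edges tagged with their receiver object, and a snapshot is just the subgraph for one receiver; this is a toy illustration of the idea, not the actual DGI representation.

```python
# Sketch of the context-sensitivity idea: a context-insensitive call
# graph records only (caller, callee); a context-sensitive one also
# records the receiver object, so per-receiver "snapshots" can be
# carved out later. Mirrors the A1/A2 example from the talk.
from collections import defaultdict

# Context-sensitive edges: (caller, callee, receiver object)
cs_edges = [
    ("main", "A.foo", "A1"),
    ("main", "A.foo", "A2"),
    ("A.foo", "B.bar", "A1"),
    ("A.foo", "B.bar", "A2"),
]

# Collapsing away the receiver loses the A1-vs-A2 distinction.
ci_edges = {(caller, callee) for caller, callee, _ in cs_edges}

def snapshots(edges):
    """One subgraph per receiver object = one dynamic state."""
    by_receiver = defaultdict(list)
    for caller, callee, recv in edges:
        by_receiver[recv].append((caller, callee))
    return dict(by_receiver)

print(len(ci_edges))                # only 2 edges survive without context
print(sorted(snapshots(cs_edges)))  # one snapshot each for A1 and A2
```

Four distinct context-sensitive call relationships collapse into two context-insensitive edges, which is exactly the information loss the talk describes.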
So label propagation works with a set of initial assignments, and then it tries to propagate those assignments through the entire graph to identify partitions in the graph. These initial assignments can be random, in which case it would be completely unsupervised, but they can also be user-preferred assignments: if there are specific preferences, say on grouping all the classes that handle the web interface together, or the database interactions, those can be used as initial assignments. Or we could also use other partitioning algorithms and use their output as an initial assignment to run label propagation. Essentially, what label propagation does is this: once we have an initial labeling, each node takes the label of its neighbors in a greedy manner. And this process is repeated until convergence, that is, until there are no more changes to the coloring of the nodes, and that indicates the termination of label propagation. So in our approach, Cargo, which comes with DGI, we apply label propagation to each of the snapshots that I just discussed. So as an example, we would initialize labels, and let's assume that this is our DGI graph. We would start by looking at the transactional snapshot and perform label propagation on the transactions. And what happens now in this view is all the classes that either read from or write to a database table get grouped together with that database table. In essence, this enforces a sort of database-per-service pattern. And once we have the labels for the database interactions, we then run context-sensitive label propagation on each context snapshot. So in this example, we would propagate the labels for this snapshot, and likewise for the other snapshots, until we've propagated the labels through the entire program. And once Cargo terminates, we have partition assignments for every class and database table in the program. So that's the overview. We've packaged Cargo as part of DGI. It's also available as a standalone tool.
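The label propagation loop just described can be sketched in a few lines. This toy version seeds the data-access classes with the label of the table they touch (the database-per-service idea) and relabels everything else to its neighbors' majority label until nothing changes; Cargo's context-sensitive, per-snapshot variant is more involved, and the class names here are made up.

```python
# Toy label propagation: seed some nodes, then greedily relabel each
# unseeded node to its neighbors' most common label until convergence.
from collections import Counter

def label_propagation(adj, seeds, max_iters=100):
    """adj: {node: [neighbors]}; seeds: {node: label} (kept pinned)."""
    labels = {n: seeds.get(n, n) for n in adj}  # unseeded nodes start alone
    for _ in range(max_iters):
        changed = False
        for node in sorted(adj):  # fixed order keeps the toy deterministic
            if node in seeds:
                continue
            counts = Counter(labels[nb] for nb in adj[node])
            best = counts.most_common(1)[0][0]
            if labels[node] != best:
                labels[node] = best
                changed = True
        if not changed:
            break
    return labels

adj = {
    "AccountBean": ["AccountDAO"], "AccountDAO": ["AccountBean"],
    "QuoteBean": ["QuoteDAO"], "QuoteDAO": ["QuoteBean"],
}
# Seed the classes that touch tables with the table's label.
seeds = {"AccountDAO": "accountejb", "QuoteDAO": "quoteejb"}
print(label_propagation(adj, seeds))
```

After convergence, each bean has flowed into the partition anchored by the table its DAO talks to, which is the database-per-service grouping the talk describes.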
And it has a lot of options for tuning how the label propagation behaves, soliciting user feedback to initialize the label propagation, and so on. So I want to go over the evaluation, right? Just to kind of complete the thought process on how Cargo works and how it performs compared to some other algorithms. So we looked at a few applications, as shown here. They belong to several Java frameworks. They have a number of classes; these are toy examples, so there are just a few hundred classes in many cases, and a few SQL tables. We also looked at some additional approaches that are available in the scientific literature, like Mono2Micro and a few others. And we used these algorithms along with Cargo to see if running DGI and Cargo can enhance their partitioning recommendations. When we do that in our experiments, we use the notation "++" for brevity. We looked at a few research questions here to see if this technique works. We evaluated how effective it is at remediating distributed transactions. We looked at the latency and throughput improvements that we might get when we deploy these as running microservices. And we also looked at the partitioning quality and architectural metrics that we might obtain if we were to partition the monolith using Cargo. The first question was about distributed transactions. We wanted to minimize distributed transactions, and to do this, to the extent possible, we want each database table to be accessed by just one microservice partition. To measure that, there is a metric called transaction purity, which measures how pure the transactions are. If the transaction purity is low, that means that a table is accessed by multiple microservices, potentially requiring distributed transaction management. If the transaction purity is high, it means that a table is accessed by only one microservice and all the data access remains local to that microservice.
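One plausible formalization of transaction purity is sketched below: for each table, take the fraction of its accesses that come from its single most frequent partition, so 1.0 means every table is local to one microservice. The exact definition used in the Cargo paper may differ, and the partition/table names are illustrative.

```python
# Sketch of a transaction purity metric (one plausible definition, not
# necessarily the paper's): per table, the share of accesses coming
# from that table's dominant partition; 1.0 = fully local.
from collections import Counter, defaultdict

def transaction_purity(accesses):
    """accesses: list of (partition, table) pairs, one per DB access."""
    per_table = defaultdict(Counter)
    for partition, table in accesses:
        per_table[table][partition] += 1
    purities = {
        t: max(c.values()) / sum(c.values()) for t, c in per_table.items()
    }
    overall = sum(purities.values()) / len(purities)
    return purities, overall

accesses = [
    ("svc-trade", "quoteejb"), ("svc-trade", "quoteejb"),
    ("svc-account", "quoteejb"),   # cross-service access lowers purity
    ("svc-account", "accountejb"),
]
per_table, overall = transaction_purity(accesses)
print(per_table["quoteejb"], per_table["accountejb"], overall)
```

Here `quoteejb` is touched by two services, so its purity drops to 2/3, flagging it as a candidate for distributed transaction remediation, while `accountejb` stays fully local.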
And this is just a quick comparison of all the techniques. I'd like to note here that "++" indicates that we used Cargo on the partitioning assignments that were given to us by the other algorithms. We observed that in most cases, without using Cargo, the transaction purity was quite low, which meant that if we were to implement the partitioning as per these algorithms, we would have to reconcile with a lot of distributed transactions. But using Cargo to refine these partitions considerably reduced the incidence of distributed transactions. While it didn't fully eliminate them, it made them far fewer in number, so they're easier to handle. And finally, just running Cargo without any seed examples, in a random manner, also achieved a transaction purity of one, meaning it could partition the application in a manner such that all the tables were local to the partitions. In addition to just looking at transactions, we deployed two versions of the applications as microservices. The first one used the original partitioning from a technique called Mono2Micro. For the second one, we used Cargo to refine these partitions, to see if we can get lower latency and higher throughput. And we ran these on various loads, ranging from 2,000 to a million users, on a number of use cases. The key takeaway here is that in all cases, using Cargo and DGI to do this refactoring reduced latency by about 11% and increased throughput by about 120%, which was quite considerable in our use case. And finally, we have to talk about cohesion and coupling, which we use to evaluate the architectural quality of these partitions. We measured some of these metrics, and we observed that, again, using Cargo reduced the coupling and increased the cohesion of the applications compared to the state-of-the-art techniques. There are some areas where we think Cargo could do better.
One example is business context purity, which measures how closely tied each partition is to a business use case. Now, since Cargo in its current state does not use any business context, it didn't really do well at creating partitions that stuck to a specific domain. And we think with some additional work, and by engaging the community, we can make the partitions from Cargo more aligned with the domains that they tackle. All right, so that's a quick summary of Cargo and all I spoke about. I wanna do a quick demo and just show you how we can use Cargo from DGI. So Cargo is available as a standalone PyPI package, and it's used as one of the dependencies in DGI. So when you install DGI using pip, it comes pre-bundled with Cargo, but there is a standalone tool in case you want to enhance some of the partitioning functionality in Cargo. I'm gonna clear the screen here. Cargo comes as a sub-command of DGI, and that is dgi partition. And I'm just gonna ask for help here so we can see how we would invoke it from the command line. So dgi partition has a few options. The seed input is optional, but if we do provide it, it consumes user-defined seed partitions. So if you have some preferences on classes belonging to a specific microservice, this is the place to provide them. It doesn't have to be exhaustive and it does not have to cover all the classes. Any recommendations or preferences can be provided, and the partitioning algorithm will try to respect those initial partitions. Along with that, we have other options like maximum partition size. If there is a preference for having just three or four microservices, for example, that could be provided as an option, but this is also optional. If you don't provide any number, Cargo will infer a partition size and use that internally. To use Cargo, we just call it with one of these options. So I'm just gonna call it with a partition size of five.
And once you do that, it's gonna take a few minutes, but I'll just walk you through what is happening underneath. Cargo is looking at the DGI graph that we showed, and it's gonna make a local copy of it, because we didn't want to tie it to any specific graph database or technology. So it's gonna make a local copy, run the partitioning algorithm as I described, find the partitions for every class, and then update the DGI graph with a new property for every node indicating the partition. So I'm gonna go back to this view; we ran Cargo once and I'm just gonna show you what it might look like. So these are all the classes in the application, or a set of classes that we can visualize. And right now they're all gray, but if you look at any one of these classes, the market summary bean, for example, it should have a partition ID. Likewise, another class would have another partition ID, and these partitions were obtained by running Cargo. To visualize it better, we have some rules here that we can use. I'm just gonna apply a unique color to every partition, and this view gives us all the classes in the application, where each color represents the classes that belong to that specific partition. An obvious question here is, how could this be useful apart from visualizing classes in different partitions? Right, one thing DGI can help with is visualizing distributed transactions. So even after we run Cargo, there are cases where we'll have distributed transactions, and it is important to remediate them. So by running the distributed transactions command, we have some Cypher queries we use to compute distributed transactions. It should populate a graph that contains tables, classes and the distributed transactions, as we see here. The larger blocks here indicate components that are more central, and you'll observe here that there are classes that are colored differently, indicating that they belong to different microservices.
So we see at least three, four microservices here, the yellow, lavender and orange, and they all talk to certain databases. As an example, we can pick a set of classes to see what interactions they have. And this is a quick example of the Quote EJB table having transaction reads from two different classes, one a Ping EJB class and another a Servlet class. And they're both reading from the Quote EJB table. And if you look at the property of every transaction read, we have a unique transaction ID, and in cases where the transaction ID is the same, as in this example, the transactions would potentially be distributed, because they're both part of the same global transaction reading from the database. So that's a quick example of what we can do with Cargo and DGI to visualize the various interactions and distributed transactions. That brings me to the end of my talk. I wanna hand it back to John, who will talk you through some additional use cases that we have in mind. What do you say, John?

Yeah, thanks Rahul. So if we could just bring up my slide. Thank you. So future work. So this is where you come in, right? One of the things that we couldn't do... Well, if you look at the output of DGI, which we didn't show you, it's just a JSON file, or forget it, a plain text file, but it's nothing to look at. So the idea is, could we create some reports that an architect or a software engineer could go back to and say, this is the output and these are the recommendations for partitions and whatnot. So there's some reporting that we wanna add to it. As I mentioned, we wanna have dynamic operational data. So doing some dynamic scanning, traces through the program as it's running, right? And add that to the graph. Once again, to understand, yeah, okay, this is calling that, but is it calling it once at the beginning or a thousand times a second? New languages. Right now, DGI only works with Java, but Java is not the center of the universe.
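Rahul's distributed-transaction check, flagging operations that share a transaction ID but come from classes assigned to different partitions, can be sketched roughly as follows. The transaction log and partition map are invented for the example (the names only loosely echo the demo), and this is not the Cypher query DGI actually runs.

```python
from collections import defaultdict

# Toy transaction log: (transaction_id, class, table, operation).
tx_ops = [
    ("tx-42", "PingServlet", "QuoteEJB", "read"),
    ("tx-42", "TradeAction", "QuoteEJB", "read"),
    ("tx-99", "OrderService", "OrderEJB", "write"),
]

# Partition per class, as a partitioner would assign them.
partition_of = {"PingServlet": 0, "TradeAction": 1, "OrderService": 2}

# Collect the partitions touched by each global transaction ID.
touched = defaultdict(set)
for tx, cls, _table, _op in tx_ops:
    touched[tx].add(partition_of[cls])

# A transaction spanning more than one partition is potentially distributed.
distributed = sorted(tx for tx, parts in touched.items() if len(parts) > 1)
print(distributed)  # tx-42 touches classes from two different partitions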
So there's lots of Java out there, but Python and Go are becoming very, very popular for microservices. Could we do other languages? And especially C#: there's lots of Windows stuff out there and whatnot. Enhancing the support for the Java frameworks that we have, right? Spring Boot and other frameworks. Adding more frameworks that we understand. Remember, I talked about the Model View Controller; by understanding the framework, can we infer what these classes are being used for? Then support for distributed transactions: being able to generate code that uses Saga patterns, right? So in other words, great, you give the architect this report, now what do you go do, right? Right now it's an exercise for the student. So we would like to be able to generate code, generate stubs, and take care of distributed transactions. We need a UI for visualization. It's great using Bloom, it got us pretty far, but we would love to have someone who understands human-computer interaction really build that 3D view where we can turn things around and look behind them and look under them and see what's going on. So it's kind of screaming for a really cool visualization that we need to build. And then we're using Diva, which is another conveyor project, and it has a set of persistence frameworks that it supports, and there's always more persistence frameworks. So we're looking at enhancing the persistence frameworks in Diva. Whether we do them as part of Diva or as a set of adapters, either here or there, we'd love to have the community's input on what you think is the best way to do that. But enhancing the persistence frameworks that we support, so that we can understand the distributed transactions going on. We are also enhancing schema-to-graph, looking at triggers, right?
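For the Saga-pattern code generation mentioned above, the generated stubs might follow an orchestrator shape like this minimal sketch: each local step pairs an action with a compensating action, and when a step fails, the completed steps are undone in reverse order. The `run_saga` function and the step names are hypothetical; real generated code would target actual services.

```python
def run_saga(steps):
    """Run (action, compensation) pairs; on failure, compensate in reverse."""
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
    except Exception:
        for compensation in reversed(done):  # undo completed steps
            compensation()
        return False
    return True

log = []

def fail_payment():
    raise RuntimeError("payment step failed")

ok = run_saga([
    (lambda: log.append("reserve-order"), lambda: log.append("cancel-order")),
    (fail_payment, lambda: log.append("refund-payment")),
])
print(ok, log)  # the order step ran, then was compensated
```

The point of generating stubs like this would be exactly what the talk describes: instead of handing the architect a report and leaving the remediation as an exercise, the tool would scaffold the compensating logic around each distributed transaction it found.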
So it's great to understand, here's a schema, here's a relationship, but then what about all those triggers, where when this gets updated, something else automatically gets updated and the application doesn't know what's going on? What about stored procedures? There's lots of stuff with stored procedures out there, and so could we use the information from the stored procedures to understand, again, when this is being updated, is something else being updated, what's happening? And then, what can you think of for future work? Open an issue, let us know what you think. If there are other ideas that you have, we would love for you to join us and help us build this. The pointer to the GitHub repository is down there at the bottom. We're actually using a GitHub project, so we've got a Kanban board, we've got stories on the Kanban board, but we would love for the community to come help us. We think we got it to a point where you can kind of visualize the potential that's here, but we need your help. We need more hands, and people who have talent in other areas, not just Java but C# and whatnot, and visualization. We need your help to make this thing as cool as we possibly can, right? To be really useful. And there's never gonna be a tool where you push the button and it makes microservices. You're always gonna need an architect who's guiding it along the way. So I totally believe that the tool needs to assist the architect in making architectural decisions. Give them all the information they need to make those decisions, show them different ways of viewing their application. But at the end of the day, I would not hire an insurance architect to rearchitect my banking application, right? I want someone who understands the banking industry. So you need to have that context.
And so we envision this as a tool that is gonna assist the software engineer, the architect who's gonna rearchitect or redesign this application, and help them understand where those big heavy objects are, where the microservices are, and where the business domains should be. So please come help us. I'm pleading with you, but we'd love to have you join the team, join the community and help us make this into something great. So Jonathan, back to you. That was my plea.

Awesome. Thank you, John. Thank you, Rahul. Such an awesome demo and show. So for anyone, if you have any questions, feel free to put them in the chat right now. While we have John and Rahul here, we can get them to answer a few. And in case you don't have any questions now, but you do later, whenever you start trying to use the tool, I put the link to the conveyor Slack channel in the comments, and you can see it on the screen now. It's just the conveyor channel on the Kubernetes Slack. So feel free to jot down any questions you have there, and we'll get someone to help you with that. Yeah, we'd love it. Let me see. At the moment I don't see any questions, but that may just be people typing. So I'll give it a few minutes. It's a lot to absorb. Yeah, it is. All right, so from Marcus Nagel: have you planned any DGI-specific meetings to hammer out tasks and sync? So that's a great question. So yeah, I think it's time to do that. We've been having internal meetings, but now that we've announced it to the community, I agree it's time to have a weekly community meeting, or maybe a semi-weekly community meeting, where we're discussing these things and having our scrum calls, so to speak. So yes, we will post that in the readme in our DGI repo. But yes, it's time to have community meetings now, so we will start those up, absolutely. And hopefully you'll join us. It won't just be the same people; we want a community meeting. Thank you, John. Anyone else have any questions?
All right, well, with that we're gonna call it a show. John, Rahul, thank you again so much, and people will be pinging you in Slack once they get to using it. Yeah, thanks for having us, and thanks for listening, everyone. And yeah, hit us up on Slack and interact with us, because we do want to start building that community with you. So thanks. All right everyone, we'll see you next time. Thanks again for attending, bye. Bye-bye.