Okay, good morning everyone. Thank you for coming in early. I'm Austin, and I'm a GlusterFS maintainer; I work on GlusterD. So, I will talk about GlusterD and GlusterD2, mainly, but before that I want to give a quick introduction to GlusterFS, for anyone new to it, just to give you an idea of what it does. And finally, I'll try to do a quick demo of GlusterD2 in its current state.

So, how many of you here actually know what GlusterFS is? In any case, I'm just going to give a very quick introduction. GlusterFS is a distributed file system with no central metadata server, so you don't have a single point of failure. It has features for redundancy, like replication, and others. It gives you the features of a pretty good enterprise storage system. GlusterFS is POSIX-compatible, which means that you can run any of the apps that you actually run on your normal local file systems, like XFS or EXT4, without modifications.

It's really flexible. We have a translator model of implementing file system features: each file system feature is a translator. So we have distribution mechanisms, we have replication, we have erasure coding. All of these are implemented as small modules, as translators. We stack translators to get a GlusterFS volume with the features we want. And you can access GlusterFS in many ways. We have the native protocol, the GlusterFS protocol, and we also have integrations with a lot of other projects, like Samba and NFS-Ganesha, so that you can get Windows access as well as NFS access to GlusterFS volumes. And GlusterFS doesn't require any special hardware to run. You can run it on any PC, even on ARM boards like the Raspberry Pi. It's designed for commodity computers.

And just to help everyone understand some of the GlusterFS terms that we'll be using: first up, we have the GlusterFS server. This is basically just a computer which has the GlusterFS server packages installed. A number of these GlusterFS servers together form a GlusterFS storage pool. On these servers, we have bricks. A brick is basically a directory, or a disk mounted at a path on a server, which GlusterFS exports as part of a volume. And a volume is a logical collection of bricks, on which you finally store the data of the GlusterFS volume. How we store it depends on the kind of volume you create. And a GlusterFS client is any process that directly talks to GlusterFS bricks using the GlusterFS protocol. Yeah, and I already mentioned translators.

So this is a typical, simple GlusterFS setup, where you have servers on the bottom. These are all GlusterFS servers with GlusterFS installed, and attached to them are the bricks. And on top, we've got a native FUSE mount, the native GlusterFS client, which finally speaks to the servers. And then we've got a lot of other services which also talk to GlusterFS via the gfapi bindings.
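Since these terms come up throughout the talk, here is a rough mental model of how they relate, as a minimal Go sketch. These are illustrative types I've made up for this talk, not GlusterFS source code:

```go
package main

import "fmt"

// A brick is a directory (or a disk mounted at a path) on a server
// that GlusterFS exports as part of a volume.
type Brick struct {
	Host string // the GlusterFS server the brick lives on
	Path string // e.g. /bricks/b1
}

// A volume is a logical collection of bricks; how data is placed
// across them depends on the kind of volume you create.
type Volume struct {
	Name   string
	Bricks []Brick
}

func main() {
	gv0 := Volume{
		Name: "gv0",
		Bricks: []Brick{
			{Host: "server1", Path: "/bricks/b1"},
			{Host: "server2", Path: "/bricks/b1"},
		},
	}
	fmt.Printf("volume %s has %d bricks\n", gv0.Name, len(gv0.Bricks))
}
```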
And let's see how we create a GlusterFS volume. It's very simple, if you want to actually go ahead and use it. You install the GlusterFS server packages on all the systems from which you want to create the volume. You do peer probes, with gluster peer probe, to set up your GlusterFS storage pool. A peer probe is just telling a server the set of servers that you want in the pool. Once you have your storage pool, you create the volume with gluster volume create; you'll have to have your bricks prepared and mounted yourself, wherever required. Then you start the volume with gluster volume start, and you're ready to use it. So this is basically four steps. That's why I have this slide; it's a really bad diagram, but yeah.

So our volume, as I said before, is a collection of bricks on different servers. We define volumes as translator graphs. We stack translators on both the server side and the client side, so we have a client-side graph and a server-side graph. All the translators form a graph based on the volume type, and then the GlusterFS processes, the brick processes and the mounts, behave as the graph specifies.

On the client side: most of our logic is on the client side. The server side is basically dumb; it doesn't have much logic there. All our distribution logic is in the distribute translator, and that happens on the client side. Replication, again, happens on the client side. And you actually have a tree, not a simple linear graph: from the distribute translator you branch into multiple branches, and from those you get your replicate branches, so you get your replication and your distribution. Other features are also present as translators in the client-side graph.

So, what is GlusterD? GlusterD is the distributed management daemon for GlusterFS. This is what makes GlusterFS really easy to use, because it takes care of creating all those graphs, it takes care of starting all those processes every time, and it makes everything easy. Before GlusterD was present, GlusterFS users would have to write their own volume graphs in volfiles, and you would have to distribute the files on your own, and you would have to start processes on your own. But with GlusterD, you just have to run a simple command and GlusterD does everything. GlusterD runs on the nodes of the GlusterFS storage pool and manages the pool, so it helps you grow and shrink it: you can do peer probes to expand your storage pool, and you can do peer detaches to shrink it. GlusterD manages volumes. By managing volumes, it creates those volume graphs that I talked about, and it starts those processes. It does elastic, dynamic expansion and shrinking of GlusterFS volumes, so you can do add-brick and remove-brick. And once you do expand a volume, we need to rebalance data, so GlusterD manages the rebalance process as well. And it manages all the daemons that provide GlusterFS's features. It gives the clients their volfiles: you do not have to copy a volfile from your server and put it on the client. The GlusterFS client requests GlusterD for a volfile, it will get the volfile, and it will mount.
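To make the graph idea concrete, here is a minimal sketch of a client-side graph for a distributed-replicated volume. The types and names are assumptions for illustration; the real graphs are described in volfiles, not Go code:

```go
package main

import "fmt"

// Xlator is one node in a volume graph: a small feature module
// stacked on top of other translators.
type Xlator struct {
	Name     string
	Type     string // e.g. "cluster/distribute", "cluster/replicate"
	Children []*Xlator
}

// dump prints the graph as an indented tree.
func dump(x *Xlator, depth int) {
	fmt.Printf("%*s%s (%s)\n", depth*2, "", x.Name, x.Type)
	for _, c := range x.Children {
		dump(c, depth+1)
	}
}

func main() {
	// A simplified client-side graph: distribute on top, a replica
	// set below it, and protocol/client leaves talking to the bricks.
	graph := &Xlator{
		Name: "gv0-dht", Type: "cluster/distribute",
		Children: []*Xlator{
			{Name: "gv0-replicate-0", Type: "cluster/replicate",
				Children: []*Xlator{
					{Name: "gv0-client-0", Type: "protocol/client"},
					{Name: "gv0-client-1", Type: "protocol/client"},
				}},
		},
	}
	dump(graph, 0)
}
```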
And it does a lot of other stuff. GlusterD does a lot of other stuff.

Okay, so now we are replacing GlusterD. Yeah, trying to redo GlusterD. And why we want to do it is because GlusterD, as it is right now, is pretty bad. It's a big monolithic piece of code, unlike the rest of GlusterFS. So GlusterFS itself is modular, but GlusterD is a single monolith, and if you want to actually get any file system feature into the volume graphs that we talked about, you'll have to mess with GlusterD to ensure that it generates graphs with that particular translator. And doing that is pretty hard, because the code is really complex. I don't understand most of the code myself. So yes, someone coming in new will find it much tougher.

And there are some problems with the architecture we have as well. GlusterD creates a mesh network between each GlusterD in the pool, so we've got n squared connections in the network. And we have a concept of equal peers for GlusterD, as in, we want to be able to run any of the gluster commands from any of the nodes in the pool, so we don't have a single leader there. So we try to do a lot of things to keep our store, which is our config database — it's basically flat files under /var/lib/glusterd, and whatever is under that is our store — in sync across the cluster. And we do a lot of things to do that, which are complex and which hit things like network failures in bad places. And it makes GlusterD very hard to scale as we go to a large number of nodes, and we want to fix this. Large as in hundreds; we do well in tens of nodes. Once you go large, we have n squared connections, and every operation we do reaches out to all the nodes in the cluster, because we want to keep our store in sync.

So let's start with GlusterD2. What's GlusterD 2.0, or GlusterD next? I got a comment from a person saying that he was confused about what GlusterD 2.0 is. A new version of what? No, it's not a new version of GlusterFS; it's the second attempt at doing GlusterD. I just call it 2.0, giving it sort of a version feel, because it sounds nice. That's it. It's going to be the management daemon for GlusterFS 4.0, whenever that comes out. Yeah, it's a new implementation of GlusterD where we attempt to solve the problems we have. It's written in Go, because that helps us concentrate more on the management problem rather than worrying about the problems of C. It's currently developed in a separate repository under the Gluster organization; it will finally move into the glusterfs source tree, but it's still moving fast and changing around, so we have it separate. We are doing preview releases almost every month now, and the latest release is out right now, actually.

So let's go on to what we are doing with GlusterD2. Right now, we're trying to build out the core of GD2. This is all the basic frameworks that we need to do our operations across the cluster. We have things called transactions. Transactions not in the sense of a database transaction; that's not quite it. You would more likely call it an orchestration framework for running operations across the cluster. We want a plugin-based architecture to keep GD2 modular, to help people actually implement features easily. And then we'll be implementing basic volume management and cluster management.
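To give a feel for what "transaction" means here, this is a minimal sketch of the orchestration idea — ordered steps run across the cluster, with undo on failure. All the names are mine for illustration, not the actual GD2 API:

```go
package main

import "fmt"

// Step is one action in an orchestrated operation. In a real
// system each step would run on a set of peers; here Do/Undo are
// just local callbacks to show the control flow.
type Step struct {
	Desc string
	Do   func() error
	Undo func()
}

// Run executes steps in order; if one fails, the completed steps
// are rolled back in reverse.
func Run(steps []Step) error {
	for i, s := range steps {
		fmt.Println("running:", s.Desc)
		if err := s.Do(); err != nil {
			for j := i - 1; j >= 0; j-- {
				steps[j].Undo()
			}
			return err
		}
	}
	return nil
}

func main() {
	ok := func() error { return nil }
	noop := func() {}
	_ = Run([]Step{
		{Desc: "check bricks are available on all peers", Do: ok, Undo: noop},
		{Desc: "write volfiles everywhere", Do: ok, Undo: noop},
		{Desc: "start brick processes", Do: ok, Undo: noop},
	})
}
```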
So it's just basic creation of volumes, deletion of volumes, expansion. It's about five basic commands. We want to come up with a usable release in the next three months. And just doing these basic commands is going to help us figure out how we have to design all the core frameworks. All the other features — we manage a lot of other services and daemons, and a lot of other things — those will come later.

Yes. Now let's move on to what we have actually done. So I talked about the store, right? We had a distributed, equal store earlier, where there is a lot of work to keep one particular directory the same everywhere. For GD2, we're going to be using etcd, like every other distributed project now, and move our store problem out of GD2 and into etcd. We basically use etcd as a key-value store where we store our config, and it's available to all members of the cluster. We do automatic setup of etcd clusters, so you don't have to set up an etcd cluster by yourself. We want to keep deployment as simple as it is right now: you install GlusterFS on the nodes, you do your peer probes, and you can create volumes. You shouldn't have to learn how to deploy and manage an etcd cluster as well; we handle that. We currently embed etcd within the GD2 binary, so that you don't have to install etcd separately. But if you want, you will be able to use an externally managed etcd cluster.

We want to do elastic etcd management. As in, right now I have a very simple etcd cluster setup, where any node that comes up becomes an etcd server, and that doesn't really scale. So we want to have some elastic management there. Yeah, this is going to happen pretty soon, because I'm working on it right now. If you guys know about purpleidea's mgmt project — it has an elastic etcd package, keep that in mind — I'm working with them to get that package split out and make it reusable for everyone.

And we have the orchestration framework, or the transaction framework as we like to call it. This runs actions across the cluster. So for example, to create a volume, we do things like: we check whether the bricks are available everywhere you want them, we check whether the peers are up wherever required, then we go on and create the volfiles everywhere, and then we launch the actual brick processes everywhere. These are the things the transaction framework handles. In the existing GlusterD, we have several different transaction frameworks, all of them very complex, none of them flexible; they are just very rigid, and they don't really help you write new patterns for managing stuff in GlusterD. The new transaction framework is designed to be flexible, so you will be able to run your own custom sets of steps across the cluster. But again, as I mentioned, there are some thoughts right now about redoing transactions, so this is probably going to change before we do our final release.
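Here is a minimal sketch of the store idea using the etcd clientv3 Go package: config goes in as key-value pairs and is visible to every member of the cluster. The key layout below is an assumption I'm making for illustration, not GD2's actual schema:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Connect to an etcd endpoint (embedded or external).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Store a piece of volume config...
	if _, err := cli.Put(ctx, "/volumes/gv0/type", "replicate"); err != nil {
		panic(err)
	}

	// ...and any node in the cluster can read it back.
	resp, err := cli.Get(ctx, "/volumes/gv0/type")
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```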
We also have a daemon manager in GD2. In the current GlusterD, we manage a lot of daemons, and each daemon is managed differently. Even though it's just starting a process with a volfile, we have separate daemon code for everything. So every new person who wants to start a daemon comes into GlusterD, writes their own daemon management code, and it's a mess. And if we do want to change something there, it's a lot of churn. So in GD2 we have a common daemon manager package. Ideally, it would be as simple as writing a service file, like we do with systemd, and GD2 will just manage the daemon, and it will just work. Yeah, this daemon manager also helps us communicate with the daemons, because we need to communicate with the daemons to do certain actions — things like how we talk to the self-heal daemon to trigger heals. These are things we're still working on and trying to figure out how to do.

With GD2, our management API is going to be HTTP-based, a REST API. In GD1, we have a SunRPC-based API: the gluster CLI talks to GlusterD over SunRPC. We use that, and it works, but it's not really friendly to other external projects who want to actually talk to GlusterD. So we are doing the more universal thing, an HTTP REST API for all your management actions. So in the future, if you want to communicate with GlusterD to do actions, you won't need to run the gluster CLI; you can just hit the REST API and the action's done. There's still a lot of stuff to do here. We have to actually define the API endpoints properly, and we want to document the API following a proper specification, maybe something like OpenAPI. We still need to figure that out.

We also have gRPC in GD2. This is used for communication between the GlusterDs — for transactions, right? We talked about transactions; to run those steps across the cluster, the GlusterDs talk to each other using gRPC. For people who don't know, gRPC is an HTTP/2-based RPC protocol that uses protobufs for the interface definition. There is also a proposal to use it for communication with the brick processes. And we recently also added SunRPC. So, this is a bit of a mess right now: we've got three different RPC mechanisms in GD2. We finally want to reduce it to probably one, and that would probably be gRPC. But yeah, we put in SunRPC just to allow the existing clients to fetch volfiles and mount volumes. We tried teaching the existing clients gRPC, and that was much harder than just doing SunRPC in GD2, so we had to do that.

And we have improved logging. Every request that comes into GD2 gets a request ID. So if you were to trace a particular operation through the cluster: an operation starts a transaction, and the transaction steps run on many nodes, but the transaction carries the request ID with it, so you can follow that one request across the cluster in the logs. We still have to improve our formatting a bit, but yeah, that's there.

We still have a lot of stuff to do. Like I said, some of the stuff we have will need to be redone. We have stuff that's partially complete. We haven't done a lot of work on lots of things. We still need to write a lot of tests. We started out writing some unit tests, and we did a bad job there, so we need to get started on that again.
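As a rough illustration of what talking to the REST API could look like, here is a minimal Go sketch. The port, endpoint path, and request body shape here are my assumptions for illustration; check the GD2 documentation for the real API:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical volume-create request body.
	reqBody, _ := json.Marshal(map[string]interface{}{
		"name":   "gv0",
		"bricks": []string{"server1:/bricks/b1", "server2:/bricks/b1"},
	})

	// Hypothetical endpoint: POST a volume-create to a GD2 node.
	resp, err := http.Post("http://server1:24007/v1/volumes",
		"application/json", bytes.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

The point is just that any external project can drive management this way with a plain HTTP client, instead of shelling out to the gluster CLI.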
We also need to document everything that we do, well. That's not been done yet; we're lagging there a lot.

And pluggability. When I talk about pluggability, it's not just plugins that I'm talking about; it's a property that I want all the core frameworks I've listed to have. So, like, we have the transaction framework, which is pluggable, so it's extensible: you can write your own steps, you don't have to depend on a particular set of steps that we specify. The aim is that GlusterFS features use these core frameworks without actually modifying the core code. And all of this needs to be documented.

A lot of features bring their own translators, so we have translator options. How do you handle that? The new translator needs to get inserted into the volume graph, we need to understand what options it has, and we need to be able to set those translator options and other stuff. We need to be able to add new commands, because when you add features, you often get a new command or two, and we need to be able to support that. And as I mentioned, daemon management should be pluggable: it shouldn't be rigid, it should be easy to start and stop new daemons. We also want to do some sort of eventing — events from GD2. This eventing is required by some of the management UI projects that want to manage GlusterFS; they want to be able to know when something started and stopped. That will help them a lot.

Now, we also have actual plugins. We have started discussing two approaches with Go. One: Go now has dynamic plugin support, which we have to investigate. The other: we wanted to do a sub-process model, where the plugins themselves are child processes of GD2 and communicate with it over gRPC to get stuff done. This is still in discussion; see the sketch after this section.

We have volgen. Volume graph generation is one of the core parts of the project, so generating our graphs is very important. Doing this is pretty hard in the code that we have: in the current GlusterD the graph generation is basically hardcoded, and changing it is very hard. So, yeah, we're designing a new, flexible volgen, where we want to do something like systemd unit files: each translator would provide a description file which says, okay, this is where in the graph the translator needs to be. And the volgen package builds the graph using the volume information that you provide. So when you create a volume, you say, okay, these are the features I want; the volgen package takes this information, reads the translator descriptions, and generates the graph. I'm working on it; it's still not at a state where I can actually present it. Yeah.

We're also discussing events and hooks to allow things to happen dynamically. The events are the eventing framework I just talked about. Hooks: we have hooks today to actually run scripts at particular points of execution of GlusterD. You can run actions, say, on volume start: your script runs. That's how the Samba configuration integration works, right? GlusterFS provides hook scripts that automatically set up the NFS-Ganesha or Samba configuration when you start a volume. There are some problems with the current hooks mechanism — it's problematic on larger clusters — so we have to solve those as well. Again, this is in planning. That's it, I guess.
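As a rough illustration of the pluggable-command direction, here is a minimal Go sketch of what a plugin could look like. All the interface and method names are my assumptions, not the final GD2 API:

```go
package main

// Route describes one REST endpoint that a plugin adds, since new
// features usually bring new commands.
type Route struct {
	Method  string // e.g. "POST"
	Pattern string // e.g. "/v1/snapshots"
}

// Plugin is what a feature (snapshots, quota, geo-replication, ...)
// would implement to hook into GD2 without modifying its core:
// it registers its own commands and its own transaction steps.
type Plugin interface {
	Name() string
	Routes() []Route    // new commands the plugin exposes
	RegisterStepFuncs() // transaction steps it runs across the cluster
}

func main() {}
```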
So, in addition to that, as I mentioned, we need to do testing: write a lot of tests to make sure everything works. So, questions? I did promise a demo, but I don't have it ready; I'll own up to that. Any questions?

So, the question was: what are we going to do about authentication between cluster nodes? That would be between clients and the cluster as well, I would say. So, right now GlusterD doesn't actually do a lot of authentication. We have ways to know that a client is a client that you know, but otherwise we don't do much there. For client communications, we are for now using the same, but we still have to talk about that. If we do switch to gRPC, gRPC has its own authentication: you can plug in authentication models into gRPC, and that will probably decide things, if we are switching to gRPC. Yeah.

So, okay, the question was: since we use etcd, would that mean the number of nodes we need would increase? Yes, although you can run smaller setups if you want to. But even with GlusterFS as it is, we now recommend at least three nodes. With etcd, you are probably going to have to do something similar. We're still working on single-node etcd setups — we can do that — but it would be recommended to have three nodes.

Okay, so as I mentioned, we are right now working on the core features for GD2, and we expect a usable release in the next few months. And we expect to complete everything in time for GlusterFS 4.0, so that GD2 is ready before 4.0 actually ships. We want to get it to users so that they can start testing against GD2 and give us feedback on the features that are coming up. Okay, thanks, thank you.