[crosstalk and microphone setup, partly inaudible] How many people do we have, 10 or 16? Let's just start.

Okay, so the idea is to update incrementally, on events. Incrementally, without rebuilding everything each time. There are daily updates at least, but you might still want to keep a daily or weekly full run as well, so that if there is ever a problem we don't lose everything. I think there are really two kinds of updates.

What kind of information do you want to store? The information will not only be for us; it will be usable by everyone. Nowadays we store the version from the changelog, the control file, the watch file, and so on. You update the data each time some status changes, or on each commit touching your package. Doesn't that increase the size? No, no, only for your package; basically each package has one record. So we don't update the whole database on each commit, because that doesn't make sense. [crosstalk, inaudible]

I think we should pay attention to what Ryan writes. We can see him, but he doesn't hear us clearly, so maybe we should speak up; I don't know if the audio stream works. [inaudible]

Hello? Hi. The way I see it, there are two options: one is that somebody else can use the data even if we don't, and the other is to really use it as our own database. This is something I spent a lot of time on, because I was very interested. We had some long phone calls, and I think the BuildStat database is much better suited for software development. I think it's much better because of what I said some minutes ago: the database structure is easy to access from an object-model point of view, whereas... Yeah, please, Ryan, tell us whether you hear us. This is my opinion, but I think that if we want to store information in UDD, it is much easier to fetch it from a BuildStat-style database and push it into UDD.
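To make that last point concrete, here is a minimal sketch of such a synchronization script, assuming hypothetical table and column names on both sides; the real BuildStat and UDD schemas will differ.

    #!/usr/bin/perl
    # Hypothetical sketch: copy per-package VCS information from a BuildStat-style
    # PostgreSQL database into a UDD table. All table and column names here are
    # illustrative, not the real schemas.
    use strict;
    use warnings;
    use DBI;

    my $src = DBI->connect('dbi:Pg:dbname=buildstat', '', '', { RaiseError => 1 });
    my $dst = DBI->connect('dbi:Pg:dbname=udd',       '', '', { RaiseError => 1 });

    my $rows = $src->selectall_arrayref(
        'SELECT package, vcs_version, last_commit FROM packages');

    my $ins = $dst->prepare(
        'INSERT INTO vcs_status (source, vcs_version, last_commit) VALUES (?, ?, ?)');

    $dst->begin_work;
    $dst->do('DELETE FROM vcs_status');   # full refresh: the simplest approach
    $ins->execute(@$_) for @$rows;
    $dst->commit;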
This won't break the current mechanism where we have four big updates per day, because all the data will already be in another database. I think BuildStat is not far from being perfect, and this part, in my opinion, is quite correct. I think you should at least take a look at it. I took a look last year, but I really don't remember now; it has been a long time. If you want, we can have a look together, just to see what can be done on the code base.

I don't like the current interface of BuildStat: it's poor-quality code and so on, and I love PET on that side. I think we are lucky because each project is strong on one side, and I think we are well placed to merge them. You were talking about the front-end of PET. The front-end of PET is very nice and very rich; there is a lot of information. The thing is that in PET, the code that does the presentation is tiny in comparison. You could use it now if you wanted, because it simply has all the data when it's called. It has all the data in one big hash, and it just formats it; it only does that (a rough sketch of this idea follows after this exchange). The data is ready to be processed; the processing is done in the fetch-data script. If you wanted to start using our interface in BuildStat, you could, just by taking that Perl module and the CGI and modifying where they look for the data. There is work to do, I agree, but I think the best thing is to sit together in front of the source code, and I will be able to show you what I think needs to be changed. But thanks to the hash table and things like that, I think it is not a big issue to plug the two systems together.

Let's stop talking about this topic. Someone has been working on darcs support, and Ryan was working on abstracting the access to the specific repository, because we also want to support Git and whatever. We have one fan there, a darcs fanboy. I haven't checked, but I would actually be surprised if there isn't one. A Perl module does the abstraction; we use it in PET as well as in BuildStat. In Perl, this needs abstraction, and there are debchange and debcommit in devscripts, which also have a complete set of abstractions for doing different, but still VCS-related, things. The wheel is reinvented all over the place. We want to be able to track changes in the Subversion repository as they happen. This means post-commit hooks, which trigger an update of the cache, which then fetches the changes from the Subversion repository, so it only reads the relevant changes. That hasn't changed for a long time. I know, but if there's a project that's already abstracting lots of stuff... I mean, there is some overlap, like diff, I think. No, maybe not; that's something to investigate. It would be good to just join forces on this project and use it. The first step is to separate it from the current code. Maybe the abstraction itself is not that hard, but the problem is separating it from the current code. That's true. Diff, commit, what else? I wrote the darcs support for debcommit. You're a big fan of darcs. Well, we use darcs in some groups, so I have to add it to all the tools.

Yeah, I'm still a bit confused about where we are going to store our data now. There's no decision; we're just talking about different possibilities and, of course, the conflicting views of Gonéri and Lucas about storing it in one database or the other. I think we agree about that. I remember it was the conclusion we reached when we had a phone discussion, in September maybe.
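Going back to the point about PET's presentation layer, here is a rough sketch of the idea that the formatter only consumes one ready-made hash; the data layout and field names are invented for the example, not PET's real structure.

    #!/usr/bin/perl
    # Illustrative sketch of a "presentation-only" layer: it receives all the data
    # in one big hash (already built by a separate fetch script) and only formats
    # it. The hash structure shown here is invented.
    use strict;
    use warnings;

    sub render_package_table {
        my ($data) = @_;                       # $data: { pkgname => {...}, ... }
        my @rows;
        for my $pkg (sort keys %$data) {
            my $info = $data->{$pkg};
            push @rows, sprintf("<tr><td>%s</td><td>%s</td><td>%s</td></tr>",
                                $pkg,
                                $info->{vcs_version}     // 'n/a',
                                $info->{archive_version} // 'n/a');
        }
        return "<table>\n" . join("\n", @rows) . "\n</table>\n";
    }

    # Usage: normally the fetch script would build this hash from the repository.
    my %data = (
        'libfoo-perl' => { vcs_version => '0.02-1', archive_version => '0.01-1' },
    );
    print render_package_table(\%data);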
I think you said something like: OK, UDD must stay simple to access and easy to understand for people, whereas, in my opinion, BuildStat may be more complex and targeted at development. I think they are two different databases, and filling UDD from BuildStat is fairly trivial, because you have the database and you just have to write a script that synchronizes the information. I think we have to discuss it longer. Yeah, sure.

The problem is: if you want to add a lot of data to UDD just for PET, data which won't be useful for anyone else, then it's a problem. If the data is going to be useful to everybody anyway, and it's generic enough, then yes, it's perfectly fine to use UDD for that. For example, if you have a script where you just need to pass the changelog and the control file, you could call something like a udd-import script with that changelog and that new control file, and then a script processes the data and puts it into UDD. If the information you need is contained in those two files, then it's probably fine to do that in UDD.

I'm not sure exactly what you want... Well, nowadays what we store is more or less that. The biggest problem is watch files, because each time a watch file changes we have to refetch the information as well. So we update the information about the repository, but we also update the information about watch files, because we need that information now, not after the next cron run. We want to know now whether there is a new upstream version, an error in the watch file, or whatever. That's why I'm also not sure about a big centralized database like UDD; but maybe UDD can reuse our information for somebody else, even if we don't use it as our own database. And the watch files, actually? That's a secondary issue. If we can provide some information that someone else might use, that's nice and everything, but I think what we have to figure out is how we get the information that we want to use. Well, for me, as I said before, I prefer things that are local and more self-contained, especially not depending on daemons, for this kind of tool, but that's only my point of view.

We also have to discuss things like, if we are going to do it, how to do it: the multi-repository stuff, which is quite complex, if we want to support an archive-wide PET. For example, I would really like to have a PET for the whole archive. Yes, probably, or not? But I think we need to be careful not to duplicate tools that already exist. For example, for watch files there's DEHS, which is reasonably well maintained. It's not useful at all for us, because the watch file it uses is the one from the archive; I would need the watch file from SVN. I know, I know. Have you talked to Raphael about that? Maybe he could just provide a way to submit the watch file from SVN, like a special PHP page where you would post the watch file, and it would use it, in addition to the watch files from unstable and experimental, to scan for new versions. For checking a watch file, it's just a 10-liner script (a sketch of such a script appears after this exchange). Yeah, but then you would do it for the whole archive, because the main problem I have with PET is that it sort of provides a service to a restricted part of Debian, instead of thinking more globally. Well, I did this; I tried to do some things more globally.
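Here is a hedged sketch of the "10-liner" watch-file check mentioned above: it pulls debian/watch out of a package's SVN tree and hands the actual check to uscan from devscripts. The repository layout and the exact uscan options used here (--report, --dehs, --package, --upstream-version, --watchfile) are assumptions to verify against the uscan man page.

    #!/usr/bin/perl
    # Rough sketch (URL layout and option choices are assumptions): fetch the
    # debian/watch file for one package from SVN and let uscan do the check.
    use strict;
    use warnings;
    use File::Temp qw(tempfile);

    my ($vcs_url, $package, $current_version) = @ARGV;

    # Assumes a plain `svn cat` is enough to read the file from the repository.
    my $watch = qx(svn cat "$vcs_url/debian/watch");
    die "could not fetch watch file from $vcs_url\n" if $? != 0 || !$watch;

    my ($fh, $file) = tempfile(UNLINK => 1);
    print {$fh} $watch;
    close $fh;

    # --dehs prints machine-readable XML; --report means "check only, no download".
    exec "uscan", "--report", "--dehs",
         "--package", $package,
         "--upstream-version", $current_version,
         "--watchfile", $file;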
Actually, I stopped because of lack of time, but I think what I did can be used to provide a global view of the information. It's too bad, because I should have brought a database schema or something like that with me, to present in more detail what BuildStat stores.

I'm thinking about getting watch files from the VCS into DEHS. I don't think the easiest way is to wait for PET to provide an export and then use that export; since DEHS already has access to the package description, it can read the VCS header and just fetch the watch file itself. So this might be a solution where DEHS can get the information, not always, but in most cases. Then you still need to find a way to ping DEHS when the watch file is updated in SVN. But yeah, that's true. I think Raphael planned to work on that some time ago, but it's not done currently. I really think we shouldn't duplicate the processes that generate data. In UDD, from the start, we decided to try hard not to generate data locally when importing it into UDD; we only do that for Ubuntu bugs, because there's no other way to do it. I agree with that.

The biggest problem we have is that PET and BuildStat are among the few tools that are targeted at the repository level and not at the archive. DEHS, UDD, the PTS and many other tools work with the archive. So it's quite different: you have to think about different things, you have different time constraints, and the information needs to be really fresh. There's no reason why the global tools couldn't change to work our way. I mean, if our way is superior, it would be worth DEHS having this information as well; they should change. Is it too complex for DEHS to adapt to us? It's already doing it for unstable and experimental separately, so it's just a matter of adding four columns to the database and writing the code to import the data from SVN and run the checks. The database is ugly, but it's possible. So, does DEHS already use UDD as its base? No, no, nowadays DEHS has its own PostgreSQL database on alioth, and data is exported from DEHS to UDD.

I'm going a bit off-topic, but what about packages.debian.org? Will they use UDD eventually? Or is it not planned to be used as a base for all these sites that all do more or less the same thing at the moment? I think the main goal for UDD is to provide new services which make use of the functionality UDD provides by combining a lot of data. The goal is not to re-implement every existing tool; that won't bring anything to Debian. So yes, when I look at my to-do list for UDD, the goal is to be able to replace DDPO or the PTS, but the goal is not actually to do that; the goal is just to be able to provide the same information, so that someone like Andreas can build a tool specific to a team and have all the same information available in UDD.

We have little time left. Yeah. We're stuck on the same topic. You are working on a tool to present the information? Yeah. What tool is it? It's based on UDD. Okay. Well, we have less than 10 minutes, so we should also talk about the short term, what we want to do now. We had this discussion last year, talking about the database and all that; we need to talk more about that. The other thing that would be really cool to have, not now but soon, is at least support for Git, darcs or whatever.
And, in the not so distant future, support for more than one repository at the same time. The thing is, how are we going to do it? Maybe Ryan, from home, wants to contribute. He has it in his head how to write this multi-repository stuff, but he hasn't written it down yet. Okay, so not really useful for the discussion right now. No, maybe in a few hours.

Well, the abstraction part is more or less clear. It was started when the code was written: it's just a matter of separating the calls to SVN into abstract calls like "fetch me the last revision", "tell me what changed", "fetch me a file", et cetera. The calls go through the library; we never spawn a shell, there's no system call.

The other thing is turning the nested hashes into objects with accessor methods, because if you make a typo right now it goes unnoticed (there is a small sketch of this idea after this exchange). The thing is, the hashes are also the database: some of them are like objects, but some of them are like the database, so we need to think about it. It's more of a mid-term goal, I think; for that we would probably need to rewrite the whole code. From an outsider's perspective, we only had to do minor modifications for darcs. I actually found it quite pleasant that there's one big hash with all the information that gets passed around, and there's no entanglement of data and code, which you get with objects. I think for simple tasks, batch processing, which is basically what I'm doing, input and output, it's not a bad design decision to have a clear data-only structure. But of course it's up to whoever works on it; that's just a suggestion. An object with accessors is just a data structure, nothing else. I also think it should be just data; it shouldn't have any intelligence in it, like an object. I would think that something like a schema would be more appropriate for what we need. The reason I am arguing for objects is not to put logic and clever stuff into them, but to get a runtime check. Well, we will never have that in Perl. Yes, we have it. For objects? Yes, for methods, for accessors. You have a syntax check, but it doesn't check whether the method exists. Yes, it does, at runtime. At runtime? Yeah, at least at runtime; with hashes you don't even have that. Right. That's my goal: not clever stuff with objects, multiple inheritance or whatever. We may reach that state someday, in a couple of decades. I think it's a good thing to do at some point, call it what you want, to make the structure clearer and not rely only on a hash. The thing is that it's too easy to make mistakes, and it happened to me a lot: what was the name of this hash key again? I'd say: document it. That was a hint for me, I guess. And what does this field with the underscore in its name mean?

But the other thing I would like to think about, if we want to, if we can, is how to go on with the multi-repository support: not the thing Ryan is working on, but the thing about supporting more than one repository at the same time. That's a bigger change, but I think it would be very useful for a lot of people to have more than one repository, for example for teams, or for people that use Git, where everything cannot be seen as one repository. We have been asked about Git several times already, so there is demand for that, even if the Debian Perl Group doesn't convert to Git, which may never happen. But it's not only supporting the Git commands, it's also supporting many Git repositories; that's the biggest problem.
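Here is a minimal sketch of the accessor idea discussed above, assuming invented class and field names: a typo in a method name dies at runtime, whereas a typo in a hash key silently returns undef.

    #!/usr/bin/perl
    # Minimal sketch (class and field names are made up): wrap the nested-hash
    # package data in a small class so a typo becomes a runtime error.
    package PET::PackageInfo;   # hypothetical class name
    use strict;
    use warnings;
    use Carp;

    sub new {
        my ($class, %data) = @_;
        return bless { %data }, $class;
    }

    # Generate read-only accessors for the fields we care about.
    for my $field (qw(name version last_revision watch_status)) {
        no strict 'refs';
        *{$field} = sub { $_[0]->{$field} };
    }

    # Any call to a method that was not declared above dies at runtime.
    our $AUTOLOAD;
    sub AUTOLOAD {
        my $self = shift;
        (my $name = $AUTOLOAD) =~ s/.*:://;
        return if $name eq 'DESTROY';
        croak "Unknown field '$name' on " . ref($self);
    }

    package main;

    my $pkg = PET::PackageInfo->new(name => 'foo', version => '1.0-1');
    print $pkg->version, "\n";     # ok
    # print $pkg->verison, "\n";   # typo: croaks at runtime, whereas
                                   # $hash->{verison} would silently be undef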
For that, it's not a problem to support many equally structured repositories; with darcs it's one repository per package, and you are doing that now? It doesn't matter, because you only access either directories or files, so you just need to make sure you access the right directory, or a repository instead of a directory. At the moment you never do any operation that actually acts on several packages at once. Listing packages, for example? That's listing repositories instead of directories; it's just a directory listing in the filesystem. So having lots of repositories is basically already working. The hard part is to have different repositories with different settings, like a Git and a darcs repository, or darcs repositories with different layouts. So I was always under the assumption that that's what you meant by multi-repository. There is one thing, which is supporting different VCSes one at a time, and there is the other thing, which is supporting several repositories of any kind at the same time. Supporting several repositories of the same kind is no problem, as darcs already showed. Yeah.

We should go. Well, we didn't wrap up. Well, we can write a mail to the list and continue the discussion. Yeah, I'll write some minutes or something; it's quite chaotic at the moment. But anyway, hopefully we can move on from the current state, which is not very good. Thanks everybody for attending. Your comments were very useful.
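As a closing illustration of the multi-repository configuration discussed above, here is a sketch of what per-repository settings could look like; the names, URLs and layout keywords are invented for the example, not PET's actual configuration format.

    # Illustrative only: one entry per repository, each with its own VCS type and
    # layout, so "listing packages" can mean either listing directories in one
    # shared repository or listing repositories (one per package).
    our %repositories = (
        'pkg-perl' => {
            vcs    => 'svn',
            root   => 'svn://svn.debian.org/svn/pkg-perl/trunk',
            layout => 'one-directory-per-package',
        },
        'pkg-haskell' => {
            vcs    => 'darcs',
            root   => 'http://darcs.debian.org/pkg-haskell',
            layout => 'one-repository-per-package',
        },
        'collab-maint' => {
            vcs    => 'git',
            root   => 'git://git.debian.org/collab-maint',
            layout => 'one-repository-per-package',
        },
    );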