I would like to introduce Norbert Preining. He is a developer in the TeX Live team. Many of you may not know about TeX. TeX is instrumental in the background of printing; many books published in the last 20, 30, 40 years were produced using TeX or what became TeX. So it is still a very important topic, and I'm sure Norbert can tell you some more about it.

One, two, okay, thanks. When we say "distributions" nowadays, we often think of Linux distributions or macOS or whatever. But we actually have a huge distribution here, and I want to show how we (and by "we" I mean a two-person team, Karl Berry and me) are struggling with supporting a huge variety of platforms.

First of all: yes, this is TeX, and this is not what we are talking about. Don't be scared; we won't go into the details of all the magic and mystery in there, because that is a pain. You don't have to know it, and it is not the topic.

So what does it mean to be multi-platform? In our setting, it means a lot of architectures: the standard i386 and x86_64, but also ARM architectures, AIX and RISC architectures, all kinds of stuff that is still running around, paired with all kinds of operating systems: Windows, Mac, Linux, the BSDs in all their variants, Solaris (still surviving), and Cygwin, which behaves more or less like a separate platform. And all combinations of these. Just to give you an idea of what we care about.

For comparison, the Go compiler: Go is considered to have quite good multi-platform support. I checked today on their website what they support: FreeBSD in 32- and 64-bit on the standard architecture, Linux, and a few more on macOS and Windows. Another example is LibreOffice. They also ship binaries for quite a reduced set: Linux 32- and 64-bit, macOS only the newer 64-bit one (32-bit macOS effectively doesn't exist anymore; I think you cannot even install it), and Windows 32-bit.

So what is TeX Live doing? Just to give you an idea: if I look into our binary directory, we have 452 programs. Of course, a lot of these are scripts and links, but 156 are actual binaries. And these are programs that use libraries like HarfBuzz, fontconfig, the ICU library, all kinds of stuff. What we support is all the combinations: Cygwin, FreeBSD (32- and 64-bit), Linux on all kinds of architectures, Linux with musl (used by Alpine, recently becoming more popular), macOS, NetBSD, Solaris, Windows. And I haven't even listed all the platforms that died away in the last few years because we lost access to the hardware. I started getting involved in TeX Live by building Linux binaries for the Alpha architecture; Alpha Linux, HPPA, HP-UX. That is what we are talking about: we distribute this for all of them.

For those of you who have used TeX, and that's at least more than half here, I don't have to say much. It's a typesetting language: you have an input file and you compile it to an output. The output is either the somewhat strange DVI format or, nowadays, PDF. It was developed by Donald Knuth; the first release was in 1978. He developed it really just for his own typesetting. And here is a funny coincidence most people don't know: TeX had paragraph-based line breaking (looking at a whole paragraph and breaking lines based on the entire paragraph) from the day it was released. Adobe introduced that around 2009 or 2010, big news in InDesign: finally, we have the paragraph composer. By the way, TeX was open source; they could simply have copied the code.
It has superior math typesetting. Actually, the OpenType MATH font standard is strongly influenced by the TeX typesetting system; the OpenType MATH parameters correspond more or less one-to-one to TeX's. Microsoft employed Donald Knuth as an advisor for newer versions of Microsoft Word, so nowadays you can enter TeX code in Word's formula editor. So you see, there was a huge influence. And probably the biggest influence of all is not the typesetting; it is the language. It is the standard for communicating mathematical content.

But it's much more than this. We don't have so much time, but I had a browser open with a showcase of TeX. I just came from the indigenous-languages panel; we have had typesetting support in TeX for Arabic, and a lot of Arabic-script languages, for 20 years. I myself did a lot of support for Tibetan, Sanskrit, complex scripts. And of course Chinese, Japanese, Korean, the CJK languages, graphics, chess, whatever you want: you can do it in TeX.

A bit of history. The TeX Live we are talking about started from the 4AllTeX CDs, very old, back in '93. It was Sebastian Rahtz, who unfortunately passed away two years ago, who built it up. It was originally based on teTeX, Thomas Esser's teTeX; the most senior people here probably remember it. In 1996 came the first TeX Live edition; that was actually a live CD, you could run everything directly from the CD. A few historic steps: in 2000, with the 5th edition, we removed all the non-free stuff, so it is now DFSG-free, well, with a few exceptions. In 2002 we added Mac OS support. In 2006, around when XeTeX was added (so, support for OpenType fonts), Karl Berry took over as the managing editor. In 2008 we got a new infrastructure, and that was the big change in our distribution: it changed everything from one DVD once a year to daily updates. LuaTeX was also introduced around then. And since 2009, I would say, we have a rather stable distribution. Of course there are permanent updates, because the underlying packages are updated, and we have cross-release updates; but the infrastructure, the package manager, works quite stably. Always new features, but not too many.

So what are the features? It's big and it's complete: we include all the free stuff from CTAN. CTAN is the Comprehensive TeX Archive Network; it's like CPAN, which is maybe better known, or CRAN for R. Currently we package about 3,500 packages from the CTAN network, close to 7 GB of data. This is what is created on our server and distributed over the net.

It's multi-platform; I already showed you all the combinations we have. And it's uniform across platforms: we have one installer and one management program, and everything works across all these platforms. That was one of our aims. You can actually mount the very same tree on Windows via, say, a Samba file system, and on a Unix system via NFS; you just have to use different binaries, and everything else works in the very same way.

We have our own package manager, the TeX Live Manager (tlmgr). It's responsible for updates, backups, configuration and so on. We now build practically daily updates on our server at TUG, and we have a lot of background jobs; I will speak about those later. And as I mentioned, it's DFSG-free with some exceptions, due to things like documentation that wants to showcase fonts, and the fonts are probably commercial...
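To make the "same tree, different binaries" point concrete, here is a simplified sketch of how a TeX Live installation separates per-platform binaries from the shared, platform-independent tree (directory names abbreviated, not a complete listing):

```
texlive/2019/
  bin/
    x86_64-linux/      # Linux clients run the binaries from here
    win32/             # Windows clients run these
    x86_64-darwin/     # macOS clients run these
  texmf-dist/          # platform-independent: macros, fonts, documentation
  tlpkg/
    texlive.tlpdb      # the package database, shared by all platforms
```

A server can carry several bin/ subdirectories side by side and export the whole tree over NFS or Samba; each client just puts its own bin/ directory on the PATH.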
It would not be DFSG-free to ship such a PDF, but in TeX Live we don't care about that. So how do we achieve all this? There is of course much more to it, but today I want to pick out three things, because these are the lessons we have learned over time, the things that enabled us to become this multi-platform.

The first is human- and machine-readable configuration data. The "and" is important: machine-readable is useful, but it has to be both. What we actually use is like a Debian Packages file: a line-based plain-text file with stanzas, like paragraphs, each describing a package.

Then the implementation language of the infrastructure. What do you guess? What language would you suggest? Have you seen the architectures I mentioned? There is only one: Perl. There is no other language available on all the platforms we mentioned; there is just one.

And then there is one step we had to learn, because it wasn't like this before: you need a strict separation between the static content that we provide and the auto-generated content. That is what makes the source manageable; otherwise everything explodes.

Let's look at it in detail. Human- and machine-readable: before 2008, before I rewrote the infrastructure, we had XML. Yes, it is very nice; I know everyone likes XML. I don't like it, and there is one reason: you cannot use grep, sed, or whatever nice tools to mangle the data, to change it, to update it. For every single change you have to use a C library, libxml. And libxml is not available on all the platforms, so it is completely impossible to use. Of course there is a Perl module, but that Perl module again depends on libxml. The other problem was a huge mixture of static and generated content: the names, the sizes, all these file lists were generated, when actually only the package name would have been necessary.

Since 2008 we have this as input: an empty file. The only information is the file name, say the package is called CoolList. And from that, the output is generated automatically. Now you also see what I mean by the Debian Packages file style: you have just the stanza with key-value pairs, and all the rest is generated automatically from the single piece of information that the package is called CoolList. We get the revision from our Subversion repository, the description from the TeX Catalogue, and so on.

So why did we choose this format? I mentioned it already. First, it's human-readable: I can read it, I can edit it, I can fix it if something is broken. Second, we can use grep, sed, whatever, in all the cron jobs that run in the background doing the automatic testing, updates and consistency checks. It's easily parsable in various languages; you can even write a parser in shell. Actually, I wrote a parser in shell; it's trivial. It's easily extendable: you can just throw in new keys, and that is not a problem. And your daily cron job will be grateful: if you write a cron job that does some consistency check on your distribution (are all the files there, lint-style checks, and so on), it's easier, because you just read over the lines.

Why not XML? Well, I mentioned it already: you need libxml, an XML parser; the binary needs the library, and it's not available everywhere. So that was a big change.
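As a concrete illustration of how easily such a stanza file is parsed, here is a minimal sketch in Perl. This is not the actual TeX Live infrastructure code; the file name and key names are just the ones mentioned in the talk:

```perl
#!/usr/bin/env perl
# Minimal sketch (not the real TeX Live parser): read a file of
# blank-line-separated "key value" stanzas into a hash of hashes.
# Multi-valued keys (like repeated "depend" lines) and indented
# continuation lines are ignored here for brevity.
use strict;
use warnings;

sub read_stanzas {
    my ($file) = @_;
    open my $fh, '<', $file or die "cannot open $file: $!";
    my (%db, %cur);
    while (my $line = <$fh>) {
        chomp $line;
        if ($line =~ /^\s*$/) {               # blank line ends a stanza
            $db{ $cur{name} } = { %cur } if $cur{name};
            %cur = ();
        } elsif ($line =~ /^(\S+)\s+(.*)$/) { # plain "key value" pair
            $cur{$1} = $2;
        }
    }
    $db{ $cur{name} } = { %cur } if $cur{name}; # flush the last stanza
    close $fh;
    return \%db;
}

my $db = read_stanzas('texlive.tlpdb');
for my $pkg (sort keys %$db) {
    print "$pkg -> revision $db->{$pkg}{revision}\n";
}
```

The same few lines of logic translate directly to shell, awk, or any other language, which is exactly the point being made.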
Actually, before 2008, when I packaged TeX Live for Debian, I was the only one who actually parsed the XML files. Everyone else was just dropping the files into their Linux distribution, dumping them into the system and hoping everything works. But you actually had to parse the XML files, and there was no documentation, nothing.

The implementation language; that's very short. Why Perl? It's everywhere, and it's really everywhere. Windows is the only exception, where we have to ship a Perl, but on every other system Perl is preinstalled; you don't have to think about it. It has a decent module system. You can write obfuscated code, of course, but you can do that in every language; you don't have to, and you can actually write decent, modularized code. And it works nicely on Windows, too. Python is often mentioned as the other option, and there is actually some activity there. The problem is that Python is, first, not available everywhere out of the box, and then there is the whole Python 2 versus Python 3 issue. Plain C would be nice: it's portable, you can get it everywhere, but it means a huge development time. Most of the time you are doing text wrangling, reading and parsing lines and doing stuff with them, and that is much more work in C. If someone pays me, I'll rewrite it in C, but not otherwise.

Okay, static versus dynamic content. I mentioned this already and showed you an example. The aim of the rewrite was, as I said, to make the static content as minimal as possible: nothing in there that is superfluous, no duplication of data (because every duplication creates inconsistencies), and no additional files to keep in sync. With our previous installer, you had to generate list files from the XML files, because the installer was written in shell, and there was a different installer for Windows, and all of them had to parse plain text files and couldn't parse the XML files, not surprisingly. So we had to regenerate content every time, every day. That was not very nice. We also wanted single-package updates: before, we shipped one DVD a year to the members of the TeX user groups (you could also download an ISO image, that was not a problem), while now you can install over the internet and get your updates every day. Another important point was better documentation: the XML era gave us modules without any documentation of what kind of data structure is stored where and in which format, and that is now much clearer.

To give you an idea how this looks: the central database is the TeX Live database. It's a simple text file. It has revision numbers for single packages. Every package is one stanza, one paragraph, separated by an empty line. It is generated as static content from the TeX Live source files, and we enrich it with additional data from various sources, mainly from the TeX Catalogue, which is the basis of the CTAN network's information about packages. Basically it looks like a Debian Packages file, as I said: you have "name", then the package name, then the information related to it, then one or more empty lines, and then the next stanza. Very easy: simple key-value pairs, one group per package, plus some metadata configuration. I won't go into details. This is a typical example of a package; what information is in there?
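For illustration, here is a rough, hand-made sketch of what such a stanza might look like. The exact keys and layout are assumptions reconstructed from the talk, not copied from the real texlive.tlpdb; the revision and version values are the ones discussed next:

```
name a0poster
category Package
revision 7340
shortdesc Support for designing posters on large paper
runfiles size=2
 texmf-dist/tex/latex/a0poster/a0poster.cls
 texmf-dist/doc/latex/a0poster/README
catalogue-version 1.22b
```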
So, revision 7340 here. Note that we deliberately did not say "version", because this package, a0poster, has its own version. The problem is that developers often have strange ideas about versioning. For this talk I checked SUSE releases, and SUSE also has very strange version numbering, something like 13.2, then 42, and then again 15. So versions are not necessarily increasing, but for a real distribution you want strictly increasing numbers, right? What we do, more or less, is look at the Subversion repository, take the last-changed revision of each of the package's files, and take the maximum. That gives you a unique, consistently increasing number per package: if you change something, there is a Subversion commit, and the number increases (a small code sketch of this idea follows at the end of this section). Then we have the version from the package maintainer, which is 1.22b, whatever that might mean. The source, as I mentioned already, is an empty file; everything here is generated from the Subversion repository and the Catalogue.

And here are some more complicated examples, with binaries. I told you we support all these architectures, so what do you do with the binaries? If I install on my architecture, I don't need macOS or Windows or AIX or whatever. So all the actual binaries are split out into so-called sub-packages, or architecture packages, like here: BibTeX is a typical example of a binary program. The bibtex package has a dependency on an architecture sub-package, and for each architecture we have the respective files. When you install, the architecture of the system you are installing on gets its files automatically; but on a server you can typically install, say, three or five different architectures, mount the tree via NFS or Samba or whatever, and everything runs. This splitting made that possible.

The input for this is a bit more complicated. Here you see a non-empty source file; we have to put in some information. We have run and doc patterns; there is a small pattern language. I won't go into details, but the patterns capture all the files belonging to a package, including the binaries, with various tricks. We have a set of auto-generated patterns, because we don't want to repeat the same pattern everywhere; that is the reason most of the source files can be empty, a default set of patterns is generated automatically. And there are architecture-expansion tricks for Windows. Windows is always the biggest problem, of course, because its naming is so different.

At the end, what we end up with is quite nice. At the moment, as of today, we have 3,580 packages, each with a source file, of which 2,800 are empty files; all the information is derived automatically. Only for the binaries and for special packages do we have to put some information in, and that is reasonable: around 300, which is nothing. For example, for every font package we have to activate the font so that it is actually usable, so a font package has at least one entry in there. You see, this reduces the manual work to a reasonable level. And as I said, we are basically two people working on this, so you want to automate everything as far as possible, and that means no mixing of static content and generated content.

For the total numbers: we have about 170,000 files in our distribution, and 6 GB is the current installed size. And of course we have a lot of people using this and updating it daily.
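Here is the promised sketch of the revision idea: hypothetical code, not the real TeX Live implementation. It takes the maximum "Last Changed Rev" that `svn info` reports over all files of a package, so any commit touching the package strictly increases the number:

```perl
#!/usr/bin/env perl
# Hypothetical sketch: compute a package revision as the maximum
# Subversion "Last Changed Rev" over the package's files.
use strict;
use warnings;

sub package_revision {
    my (@files) = @_;
    my $max = 0;
    for my $f (@files) {
        # ask Subversion for the last revision that changed this file
        my ($line) = grep { /^Last Changed Rev:/ } qx(svn info "$f");
        next unless defined $line;
        my ($rev) = $line =~ /(\d+)/;
        $max = $rev if $rev > $max;
    }
    return $max;
}

# e.g. pass all files belonging to package "a0poster" on the command line
printf "revision %d\n", package_revision(@ARGV);
```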
So we go through a lot of testing and QA. The biggest problem here is the normal packages, the LaTeX packages, like tabu (which is right now stuck in Debian unstable with a fix): there we cannot do a lot. This is actually a big to-do. We want automatic functional tests for all the TeX packages, but that is a huge endeavour, because it is not easy; there is no automated way to test them all. Some things have been done in this direction, but we are not nearly done. So currently we only check whether packages can be installed on the system, not whether their functionality works.

Our infrastructure packages, on the other hand, we have separated completely. We have a dedicated repository, tlcritical, for people who want to live on the edge and try out new features; they can get updates from there. We have recovery programs, for Windows and as shell scripts for the Unix systems, so that if the TeX Live Manager sometimes breaks down completely, you can recover from there.

And before we push out to our mirrors (we use the CTAN mirror network), we do a lot of checks. We check all packages and all files for consistency: whether files are included twice, whether packages are not mentioned anywhere, or packages are mentioned that do not exist, all that kind of stuff. Execute statements: these are actions executed automatically after installation; think of a postinst script in Debian or RPM, some post-installation procedure. We have this too, things like rebuilding a format or activating fonts. We have consistency checks on all the dependencies, of course, because otherwise the TeX Live Manager breaks. And at the end we do a full installation of the 6 GB, and only if that works out, and we can afterwards run the TeX Live Manager at least once, do we declare: okay, today we ship out to CTAN, and that means to all the users.

Many of these things, as I mentioned before, are cron jobs, and they are all scripts. And many of the scripts are just grep, sed, sort, uniq, whatever you always do with this kind of stuff. If you have a text-based database, that is quite nice. The errors normally caught are: new packages we add but forget to put into a collection; inconsistencies with respect to font maps (that happens often, typos or whatever); package corruption (we package with tar and xz, and sometimes some corruption happens, but it is caught); duplicated inclusion of files in packages; and broken Perl modules, where sometimes you just put a bug in there and then nothing works. But all these errors are normally caught.

Only briefly, I want to go over distributing the stuff. There is the TeX Live distribution itself; this is what we are doing, with our own installer, front-end, everything, for all the architectures you have seen before. Redistribution is very common: on Unix, practically all systems now use TeX Live; teTeX has been dead for like 10 years now. On Windows there is one more: there are two systems, MiKTeX and TeX Live, and MiKTeX is developed independently. And MacTeX on the Mac is TeX Live under another name, with a nice installer and a nice front-end. The current status, as far as I could check: in Debian, for Buster, we have TeX Live 2018.
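To give an idea of what such a background check might look like, here is a hypothetical sketch in Perl, not the real QA scripts; the file name and keys are the ones used earlier in the talk. It reports "depend" lines that point at packages with no stanza of their own:

```perl
#!/usr/bin/env perl
# Hypothetical cron-job-style consistency check: every package named in
# a "depend" line must itself have a stanza in the database.
use strict;
use warnings;

my %pkgs;    # every package that has a stanza
my %deps;    # dependency name -> list of packages requiring it
my $cur = '';

open my $fh, '<', 'texlive.tlpdb' or die "cannot open tlpdb: $!";
while (<$fh>) {
    chomp;
    if    (/^name\s+(\S+)/)   { $cur = $1; $pkgs{$cur} = 1; }
    elsif (/^depend\s+(\S+)/) { push @{ $deps{$1} }, $cur; }
}
close $fh;

for my $d (sort keys %deps) {
    next if $d =~ /\.ARCH$/;   # placeholder expanded per architecture; skip
    next if $pkgs{$d};
    print "MISSING: $d (required by @{$deps{$d}})\n";
}
```

Because the database is a plain text file, such checks stay a few lines long, which is what makes running dozens of them nightly feasible for a two-person team.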
Well, the old versions, as you see, go back quite a way. Ubuntu also has more or less 2018; they are all up to date. Fedora is hard to check, because there is not really a list of what ships when, but they had 2016 for Fedora 27 and 28, and currently 2018. SUSE I cannot quite grasp, what Leap and Tumbleweed and the numbers mean, but they have increasing TeX Live versions, so it seems they are up to date. MacTeX is always released at the same time as TeX Live. And we also still ship DVDs; no, not everyone has the internet. proTeXt, whose upstream is MiKTeX, is included on our DVDs.

So, lessons learned. For us it was very important to separate the static from the generated content, because that reduced things from 3,500 files you would have to fix manually to only 20 or 30 where you have to fix something; that is reasonable. You have to automate all the testing, and for that it is important that the data is also machine-readable. Cross-platform is a pain, especially when you try to support Windows: well-hidden bugs everywhere. Cascading is what I mentioned with the separate layer for the critical infrastructure. We have the full API documented and try to keep it stable, because distributors often use this code. And switching from DVD to the net was a huge step; that's obvious.

Okay, some resources. The main page is at tug.org/texlive; you get all the information and downloads there. There is a web view onto our Subversion repository, with anonymous access if you're interested. I also maintain a Git mirror of the Subversion repository, but be careful: it is now close to 40 gigabytes; the history goes back to 2005, and that adds up over time. Finally, if you want to contact us: tex-live@tug.org is the mailing list; everyone can write there without any problem. We have a mailing list dedicated to distributors, so Linux distributors who want to integrate TeX Live. We also have a security mailing list, which is more or less closed, where security reports are handled, and some internal mailing lists. But the main list, or the distributors list, is the most important. And you can always contact me directly, too. Okay, thanks for your attention.

All right, so any questions from the audience?

Question: Since you have so many platforms to support, what do you use for running your automated tests and CI? Is everything in-house?

Answer: Everything is manual; we have people caring for the building. The problem is building this huge pile of source code, and it is getting more and more complex, because many of the newer libraries require newer C++ levels. For the main i386 and x86_64 Linux binaries, we normally build on Travis CI and take the binaries from there; before that, I used OpenVZ, I think. The problem is also that you have to build against a sufficiently old libc version, because if you build against a too-new libc, nothing runs elsewhere, and you have to compile, if possible, all libraries statically into the binaries. So for the base, the standard platforms like i386 and x86_64 Linux, and Alpine with musl, we use Travis CI. But all the others are hand-made, by people who have the machines and compile the stuff. We are now in the freeze for TeX Live 2019, and that is normally like two months of repeated fixes, build fixing. And Windows is a world apart.
And the problem is that the CI services don't provide images for most of our architectures. So that's the problem; we could not even set it up there. And then they have time limits that cut builds off.

Very good. Any further questions? Anyone? Okay, one at the back here.

Question: You say that you push to CTAN for users to download. I was just curious: how many downloads do you actually have?

Answer: We don't know, because we cannot know. We don't phone home with our own programs, and the servers, the mirrors, are not managed by us. The CTAN network has a lot of mirrors distributed over the world; it is often a service by companies or universities who provide CTAN web space as a mirror. But we don't have access to their logs, so we have no idea. I would be interested, of course, more than interested. I'm always tempted to write a phone-home into the TeX Live Manager, but of course I don't do it.

Question: It could just be a one-time thing: the first time you launch, you can click "don't show me again", and then you would get everybody who wanted to answer a survey, a few questions, and you would have a bit of an idea. You pop up a dialogue box; if the user clicks yes, they are sent to a website for answering, yes or no.

Answer: Might be, but most people in our community are very privacy-conscious, so they would not be very happy about it. But it's a good idea, actually. Thanks.

Okay, thank you everyone. One final observation: of the scientists graduating from technical universities, half of them use TeX, for example for their thesis, and half of them use Microsoft Word. It seems to be a polarised area. Anyway, let's thank Norbert once again, and then we can move on to our next talk. Thank you.