Okay. So, like Fabia mentioned already, I've been working on a number of testing tools — pytest — and after some time I decided: okay, there's this whole thing about unittest and pytest and nose tests and whatnot, and it would actually be nice to have a really unifying experience when running tests against a Python application. That's why I also went and wrote tox, which is kind of like a meta test runner, and it can invoke nose tests or unittest or pytest. And after a while I thought, yeah, that's all very nice, but the real problem is that when you want something like quality assurance in your projects, it's really also about release management. You actually have several packages and dependencies — I have that with my own open source projects, but also with the people and companies I consult for. So I went for basically the next level: something that manages the packages that also get tested, but all the time coming very much from this QA and testing perspective. That's when devpi was born. The devpi system is basically there to help you with PyPI-related release workflows and quality assurance. In version 2.0 it consists of three main components, and I'm going to talk about all of these components in detail. The core is devpi-server, which provides the PyPI caching index and your private indexes — for packages you might not actually want to publish, but that you want to use within your organization. The recently released devpi-web plugin provides web interfaces, also for your documentation and a few other things, plus search across metadata and documentation. And then there's a third thing that you don't have to use, but it's helpful if you have to deal with development and production indexes and so on.
And that's a command line tool that basically drives the well-known other tools like pip and easy_install and setup.py upload and things like this. So, devpi-served indexes. One of the main purposes at the beginning — that was before pypi.python.org grew a content delivery network — was that you can have a local self-updating PyPI cache. You basically work against your local index; if a package is not there, it goes off to pypi.python.org and grabs it, and the next time you don't even need to be online. You don't even need to have connectivity — it will satisfy everything completely offline from your local cache. So everything you install gets cached, including the index information, and it uses the changelog protocol with PyPI, so that from time to time it asks PyPI: is there anything new for the projects I care about? If so, it invalidates the cache, so the next time you ask, it's going to update the cache. As with every cache, cache invalidation is a very important topic, and this is actually using the official PEP 381 API. devpi-server also manages multiple private indexes for you if you want to implement staging, and each of these indexes supports running pip or easy_install or buildout against it. And it supports the typical setup.py upload, upload_docs and so on commands, which is how you can get packages into devpi. Staging: there's one feature that distinguishes devpi from other indexes that you may know, in that it provides an aggregation or inheritance feature. Here is one possible layout that some people use. You have the so-called root/pypi index — that's the cache I talked about. You can directly use that if you don't care for private indexes, and forget about the rest. But here we actually have a production index, which contains the private packages that you don't want to publish on pypi.python.org, and which might depend on PyPI release files that you don't have in your private index.
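As an aside, working against the caching index just described might look like this on the command line. Port 3141 is devpi-server's default, and the pip invocation needs a running server, so it is shown commented:

```shell
DEVPI="http://localhost:3141"       # devpi-server's default port
SIMPLE="$DEVPI/root/pypi/+simple/"  # the caching index, pip-compatible

# With the server running:
#   pip install --index-url "$SIMPLE" requests
# The first install fetches from pypi.python.org; repeat installs are
# served entirely from the local cache, even offline.
echo "$SIMPLE"
```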
So you may have a web application that depends on Pyramid, and Pyramid depends on lots of other things, and those all come from root/pypi. But if you work against the company production index, you're going to see one unified view of your private packages and all of the pypi.python.org packages. And if you want to do some kind of QA workflow, you can also have a development index, for example team-based — that's what some companies are doing. There you put your in-development releases that are maybe not ready to be deployed on your web servers, but that can be used for further testing. One important thing here is that your production index is actually somewhat protected from malicious PyPI packages. Let me explain this — it's also interesting if you don't use devpi — something which I call the higher-version attack; there are also variants of this attack. Let's say you have a credit-cards release file that contains the credit card processing of your web application. You put this on a private index, and somebody — that's the attacker — uploads a credit-cards package with a slightly higher version number to PyPI. Now if I install against the production index that inherits from root/pypi with this install command, I'm actually going to get the PyPI version, because I didn't know that somebody went and occupied my private name on PyPI. PyPI is a package wiki: anybody can basically publish any kind of package. So if you have private package names that are not yet registered at PyPI, somebody can go there and do that — it's very easy. And I don't know, I didn't try it myself, but I'm pretty sure I could get, I guess, something like a hundred bots per day or so with something like this. That's not the only problem, but I'm just saying that if you have something that somehow merges the world of the pypi.python.org wiki with your private indexes, then you get into this kind of problem.
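The mechanics of the attack can be illustrated without devpi at all: installers that merge indexes simply pick the highest version number they see anywhere, comparing by version ordering, not string ordering. GNU `sort -V` is used here just to show that ordering:

```shell
# An attacker only needs to publish any bigger number than your
# private release carries; note 1.10 beats 1.3 under version ordering.
HIGHEST=$(printf '1.2\n1.10\n1.3\n' | sort -V | tail -n 1)
echo "$HIGHEST"   # prints 1.10
```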
And that's also the case — forget about the devpi server for a moment — it's also a problem if you use pip install --extra-index-url, because then the merging is done on the client side, but it does exactly that: it takes the higher version. So you thought you were installing something from your private index, but you're actually installing something from PyPI. That's a bit of a problem. devpi in version 2.0 prevents that, because it says: by default, if you upload anything to a devpi private index, any kind of further lookup, even if you inherit from the PyPI cache, will be prohibited, and you have to whitelist it. If you actually have a package that comes from pypi.python.org because it's an open source release of your company, then you have to whitelist it. Otherwise all of PyPI is ignored, basically, if you install credit-cards from the production index and it's not whitelisted. So by default, PyPI is not considered when there is a package of that name in your private index. It's basically trying to prevent this kind of error. That's not the only thing to watch for if you want to be a bit more careful, because there are other attacks. For example typos: somebody in your company on their laptop installing Pyramid without the d at the end — or what I sometimes do, pip install pytest but dropping the last letter. So if you want to get hold of my machine, it's very easy, because you just need to register the package name pytest without the final t. For some reason, I sometimes forget this last letter. It's not currently registered, so there's a good chance you'd get my machine. This is really a problem because, I mean, you can imagine there are some very popular packages; if you register variants of those package names, then from the millions of users literally across the world, you will eventually get some people. And I checked with the PyPI admins.
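As a side note, a sketch of the whitelisting step from a moment ago — "pypi_whitelist" is the per-index option name as of devpi 2.0, and the server URL, index and package names here are made up:

```shell
# With a running server and a logged-in user you would say:
#   devpi use http://localhost:3141/company/prod
#   devpi index company/prod pypi_whitelist=our-oss-package
# Every other name that exists on company/prod then resolves only from
# the private index; root/pypi is never consulted for it.
OPTION="pypi_whitelist=our-oss-package"
echo "$OPTION"
```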
You can see in the server logs that there are actually a lot of instances of mistyped names, so it's clear you could exploit that. Okay, but this talk is not about attack vectors against PyPI — that would be a fun talk by itself. The point is: if you want to be more careful, then you probably should not inherit directly, but rather have root/pypi as the self-updating cache and work with that in development. Then, when you want to have a package in your company, including dependencies, you push it explicitly — sorry, into your development index, right? — and then basically you just push packages around the indices, and that's something devpi makes easier, or somewhat easier. You upload your own packages to company/dev, and you won't have any of these attack problems, like typos and so on, and your production machines cannot be so easily compromised. Okay, this was just some background on how you can organize indexes and what you might want to be careful about — the way you can organize indexes for your teams, and maybe also platform-specific indexes that contain wheels for your deployment platforms, and so on. There are several variants of this, and kind of best practices in merging, which are not yet documented, but this is a start. One feature that came out last week, actually, is replication, because that's what one funding company, who gave some money for the open source development, wanted to have: you can now run a devpi-server in replication mode. That means the first command actually starts the server — it's the full command that you run on port 3000 — and then you start a replica somewhere else. In this case, I just run it also on localhost: I specify that the server state goes into a separate directory, the replica one, and then I say, okay, my master is this one.
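Spelled out, the two invocations just described might look like this. The flag names are my reading of the devpi-server 2.0 docs — check devpi-server --help for your version — and the commands need devpi installed, so they are shown commented:

```shell
# Master on port 3000, replica on port 4000 with its own state
# directory, pointing at the master:
#   devpi-server --port 3000 --serverdir ~/.devpi/master
#   devpi-server --port 4000 --serverdir ~/.devpi/replica \
#                --master-url http://localhost:3000
MASTER_URL="http://localhost:3000"
echo "$MASTER_URL"
```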
So the second invocation starts a replication instance. This works via HTTP between the replica and the master, and it maintains a full failover copy. When you upload something to the master — you can also upload to the replica, it has the full interface — the upload will only complete if the package is also at the master. So at any point in time when you upload something, you will have it on at least two hosts. All writes always go through the master — it's kind of like a simplified replication model — and that seems to work quite well already, although there might still be some bugs; it only came out last week. I've been running it myself in my instances, and now some companies are starting to use the replication in their settings as well. devpi-web is the second big feature that came out last week, mostly implemented by Florian — where is he? There. We refactored devpi to use Pyramid everywhere, and devpi-web is now a very nice web interface that shows you metadata and summary information, descriptions, and documentation. So it's basically your in-company Read the Docs server as well. Maybe I'll show that quickly. This is my semi-public instance, and this is, for example, my development index. One of the things you see here: for the devpi-server 2.0.1 release we did, that's the release file, and here you see the tests that were performed on the two platforms, win32 and Linux, on the different interpreters. I can basically look into that and see that this was executed — and the same way, of course, I would see if there's a failure somewhere. Also, if I have documentation, I can go in here, or I can just say: show me, okay, what do you know about devpi and Jenkins? That's a full devpi-server search across the index, and then I see, okay, there are some links, and I get to the integration part with Jenkins in the devpi documentation.
And that is just there because I uploaded the documentation to the index; it gets unpacked, you get URLs for it, and it's indexed in the search. So that's quite a powerful facility. The last component is devpi-client. It's a relatively thin wrapper around pip and some setup.py invocations. It also performs the actual upload, so it always uses SSL, and some other bits. And it maintains on your local machine any kind of login information: you basically say, okay, I log in, then I use a certain index, I upload something, and I don't need to re-log-in all the time, because the token I get from the server is valid for ten hours. devpi-client basically stores this temporary authentication information. It also has experimental support now for SSL client certificates, if you want to step up your scenario to have encryption and authentication through SSL. The commands devpi-client offers: "use" sets the index you want to work on — development, or just root/pypi, or your production server. "upload" helps you with uploading release files and docs and so on from a checkout. "test" is the one that performs the testing; it actually invokes tox. And "push" is the operation that pushes a release, including all of its documentation and release files, from one index to another. And pip or other installers you can just use directly. Then there are some configuration and administration commands that you can use for index configuration and user configuration, and also for accessing the JSON interface — devpi-server has a full JSON interface on all of the resources, which you can use for scripting. A typical release workflow looks like this: you go to your development index and you upload a release file. Either you build implicitly, because you are in the setup.py directory and you just say devpi upload, or you have already built your release file and then you also just say devpi upload.
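The workflow just described, spelled out with devpi-client commands — the server URL, the user "alice", the index names and the package name are placeholders, and everything needs a running devpi-server, so the commands are shown commented:

```shell
#   devpi use http://localhost:3141/alice/dev   # select the dev index
#   devpi login alice                           # token stays valid ~10 hours
#   devpi upload                                # build from setup.py dir + upload
#   devpi test mypkg                            # fetch release, run tox, attach results
#   devpi push mypkg==1.0 alice/prod            # promote the tested files verbatim
WORKFLOW="use login upload test push"
echo "$WORKFLOW"
```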
The release file gets sent off to the index. And then, from the same machine or from all kinds of other machines that you might manage with Jenkins or something, you issue this single line, devpi test <package name>, and that actually gets the latest release, performs the tests, and attaches the test results back to the release file. That's why I could see in this web view, okay, this release file — what kind of tests has it seen; that was produced by this client-side devpi test command. And when you're happy, you push it to another index. Of course, you can also automate this, like a Jenkins job, and just invoke these commands so that on success something gets posted to an index that holds only test-passing packages, and things like this. So this is the release-file workflow, which brings us briefly to tox. tox is a tool that lets you define what kind of tests you want to run against your release file. devpi test basically expects to find a tox.ini in the release file, and then it invokes tox — the next slide discusses what that means. It produces something called toxresult.json, and then from the command line I can say devpi list <package name> and see what the status is — whether the tests passed, or what kind of test failures there were — and show me the traceback, from the command line. And then, once I'm happy with the release file, it is pushed bit-for-bit verbatim to the next index. So I know that this exact thing that I tested on the different platforms actually works, and I push that thing through. I don't basically re-upload something to production; I really take the same artifact that works and push it through to the next stage. tox is for automating test runs — kind of standardized testing. I'm not going to talk much about it, because my slot was exchanged for a 30-minute slot; it was originally a 45-minute talk.
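A minimal tox.ini of the kind devpi test expects to find inside the release file might look like this — the environment list and pytest as the runner are assumptions; use whatever your project needs:

```ini
[tox]
envlist = py27,py34

[testenv]
deps = pytest
commands = pytest
```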
It was scheduled wrongly, so I can't talk too much about tox here, but you can go to the web page to get some more information about how you configure your test runs with different test runners. The server you already saw: you basically just install devpi-server; you have the typical host and port and some other settings, and the data directory where you want to keep your server state. And then, from different clients that of course don't need to install devpi-server, you can just install devpi-client and say devpi use <my company server> and just work against that. What you usually want is an nginx-based deployment. There's an example file that gets generated from your settings — host, port and so on — which is basically a more or less complete nginx site config file that you can just include in your nginx configuration, or use as a template to work further from. And this happens in such a way that nginx directly serves the static files, so some things devpi-server doesn't even see anymore: once you upload something, the whole URL structure is such that nginx serves that file directly, so for that devpi-server doesn't even need to be running. So, to conclude: the devpi system has been developed for a bit more than a year now, I think — a year and a couple of months. It's MIT licensed. There's a lot of test-driven development in it, not surprisingly. And it's also a bit funding-driven: there are some use cases that are interesting to me personally, but it also depends — one of the upcoming things, maybe, is a company funding some LDAP authentication integration. Feature development and consulting are provided by Florian and me, and of course pull requests are a good way to contribute. Okay, that's my brief overview of devpi and tox. Thank you.

Okay, we have a good five minutes for questions.
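For reference, the generated nginx config mentioned above comes from devpi-server itself — the option and file names below are my reading of the devpi 2.0 docs, so check devpi-server --help:

```shell
# Writes example deployment files, including an nginx site config,
# derived from your host/port settings:
#   devpi-server --gen-config
# The nginx file is then typically found at:
NGINX_CONF="gen-config/nginx-devpi.conf"
echo "$NGINX_CONF"
```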
You just briefly talked about LDAP authentication. Does that mean you can integrate devpi into an Active Directory domain and use that information to authenticate users?

Well, if the funding materializes, I guess so.

Then I'll ask my employer if he can give you some money.

I'm sorry? Yes, I mean, a sprint or something like that is also possible. But even a sprint takes some time and organization, and to get something release-ready and documented and everything — you probably know there's some work involved, right? But just to give you a brief idea of the current feature discussion around LDAP: we basically want to have nginx deal with the LDAP server integration and just pass a certain username header and group header into devpi-server, with an option on devpi-server that says, okay, my upstream nginx is going to pass me the right thing. nginx does the integration, because there are nice plugins for nginx that actually do this. Then we need some client-side support to handle the login part. That's kind of like the current implementation plan. The alternative, obviously, is to have direct LDAP support in devpi-server itself, but, well, we don't have to reinvent every wheel, I guess. Yes?

Hi, thanks for all this hard work you've done. My question is about the testing run by devpi server: is it possible to configure workers which are remote to the server itself? Because it's a bit of an overload for the server.

Yes — I mean, maybe I wasn't clear enough: the server and the running of the tests are completely separated. Where you issue devpi test is completely separate from where the server runs. The devpi test command goes to the server, gets the files, performs the testing on whatever host, and then attaches the test results back.
So on the devpi-server instance itself, where the server runs, there's nothing — no setup.py or anything ever executing. Otherwise it could be compromised: if you have to execute something like setup.py, you basically run the risk of compromise. Yes?

No, no, the pushing really happens after you test. Like what you saw — the upload, well, the upload you also do on the client machine. The client machine does the building, and you build a wheel, for example for Ubuntu 14.04 64-bit, blah blah, UCS2, whatever your platform is, and then you upload the resulting file, maybe to a platform-specific wheel index.

No, it doesn't, although there is an upload trigger. You can define, on a per-index basis — I mean, I didn't talk about all the features — that if you upload something, it can, for example, trigger a Jenkins job. That's kind of like the one path that is documented; I showed it, you just go to the documentation and then the misc section about the Jenkins integration.

Okay, you already answered the question I had, about a plugin system or signaling — stuff like this Jenkins plugin. Is it already so generic that I could maybe generate Debian packages from this upload trigger?

No. I mean, devpi tries to solve a few problems, but only those. It doesn't yet have all kinds of events; it has this upload trigger for Jenkins, but not a generic web hook or whatever. That's not very hard to do, but devpi is very much driven by actual real-world use cases, not by all the features I could possibly think of. So when somebody actually comes along and wants a certain feature and discusses the use case, it's much more likely that it gets implemented. That's kind of like my general development approach these days.

Okay, one more — or okay, then that's it. Thank you very much. Thank you.