So I had a plan, but it did not go well. I ended up with way too many slides, so I tried to remove a lot. I still have many, many slides, so I'll just skip over some and stay longer on others, okay? I'm kind of sick, so I may be rushing out at some point, but if that happens, I will be back, okay?

So I'm Gilles from the OpenBSD project; most of what I do I put out on Twitter and GitHub. I come from the city of Nantes on the west coast of France. I've been using OpenBSD for about 20 years, and I've been an OpenBSD developer for 10 years now. I've also used other BSD systems, mostly when I was a student, and I like the philosophy of all the BSD systems, but I still have a special place for OpenBSD in my heart, which is why I use it every day. I started working on smtpd in 2007 as a personal project, absolutely not involved with OpenBSD. Then Henning, Reyk, and Pierre-Yves kind of tricked me into turning it into OpenSMTPD, telling me that it would be a nice project, which it is, but it took longer than expected.

I'm currently a lead developer for the Vente-privee group, which is a platinum sponsor for the event. I will not do a lot of advertising for the company, but I still have to say their name at least twice so that they will send me to other events. We are hiring, including in my team, so if you are looking for a job, come see me, okay? I used to work in the mail industry doing research and development. I have since distanced myself from that industry, so that I can work on stuff that is not in contradiction with what I do at work. Vente-privee has a few OpenSMTPD instances; I was not aware of that, I was not involved in the choice, and I was not hired to work on this. I only learned it from Mike, who is here: I asked him if we were actually running it, and he told me yes. We also have a few OpenBSD installs that kind of just piled up over time, I think.

The other people in the OpenSMTPD crew are Eric, who is somewhere here,
Sunil, and Jung. We also send our diffs to Todd Miller for review, and we receive a few contributions from the community, but sadly mostly from the Linux community; we don't receive many contributions from the other BSD systems, and there are not so many stable contributors. People often just send a diff, then move away. So we are kind of a small crew, and even we don't always have the time to work full-time on OpenSMTPD.

So what are SMTP and OpenSMTPD? A very fast overview. First, SMTP is a supposedly simple protocol to exchange messages between machines, on the internet but not only on the internet. It only takes charge of the transfer, not the retrieval of messages: it's just a way to send a message from a machine A to a machine B, and then the end user has to retrieve the message through another mean, usually POP, IMAP, and such. It relies heavily on the DNS protocol, so you can't really have mail without DNS. You can see the SMTP network as a kind of graph where each mail exchanger is a node on the graph, acting either as a relay, receiving a message and forwarding it to a different place, or as a destination for the message; it can also reject it, but that's a different policy decision. The goal of an MX is to route a message closer to its destination, and usually it will see the next node as the destination without necessarily knowing what's behind it, okay? I kind of stole this diagram from Eric's presentation a few years ago. It shows how the user foo sees a relay as an exchanger, which sees another relay as an exchanger, et cetera, until the message reaches my own relay there, and at each point the relay doesn't know what's behind the next hop, okay?

In addition, SMTP imposes responsibility over messages, so you have to be very careful about what you do with them. You just can't lose a message that you have accepted.
In the protocol, you acknowledge to the client that you have accepted the message. As soon as you have sent that acknowledgement, you're not allowed to lose the message; you have to do something about it. If the message gets lost for some reason, you have to at least notify the original sender that it was lost. Each node that hits an error has to be the one that notifies the original sender; the error doesn't trickle back through all the previous nodes. You just have to pass enough information for the last node to know how to notify the initial sender that something went wrong. It's in the best interest of every postmaster to get rid of a message as soon as possible, for many reasons. The first one is that you don't want to be the node that lost the message due to a disk crash or anything, so if you get a message and can pass it on really fast, that's your best policy. But there are other reasons too: if you retain messages for longer, you have to have the hardware to do it, and that has a cost; just being a mail exchanger has a cost.

One thing that's less known is that the SMTP protocol is a transactional protocol: an accepted message is actually a commit. This is a typical session. There is a first phase where we're just saying hi to the peer, then a transaction that actually starts at the MAIL FROM line. All the recipients are part of the same transaction and will share a common transaction identifier. A message is passed in the transaction, then the server acknowledges that the message was accepted and assigns a transaction ID, okay? All the recipients share the same transaction ID, which is often known as a message ID, though there can be differences between implementations.
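To make the transaction boundaries concrete, here is an illustrative exchange; the reply codes are the standard SMTP ones, but the hostnames, addresses, and queue token are made up:

```
S: 220 mx.example.org ESMTP
C: HELO client.example.net
S: 250 mx.example.org Hello
C: MAIL FROM:<alice@example.net>      <- the transaction begins here
S: 250 Ok
C: RCPT TO:<bob@example.org>
S: 250 Ok
C: RCPT TO:<carol@example.org>        <- same transaction, same message
S: 250 Ok
C: DATA
S: 354 End data with <CR><LF>.<CR><LF>
C: ...message content...
C: .
S: 250 Ok: queued as 8a2f1c3d         <- the commit: the server now owns the message
C: QUIT
S: 221 Bye
```

Both recipients are part of one transaction, and the final `250` after the lone dot is the acknowledgement after which the server is no longer allowed to lose the message.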
A session spans from the connection to the disconnection, and a transaction spans from MAIL FROM to the end of message, but it can also be interrupted: by the server itself, for whatever reason, maybe lack of disk space or anything, or by the client, with commands like MAIL FROM, which pushes a new transaction, or RSET, which does a reset. All the recipients share the same common message identifier, as I said earlier, and they all receive the same copy of the message; there is no duplication. The DATA part of the session contains the exact same content for every mail that shares the transaction ID. You can have multiple transactions within the same session; you just have to go through another MAIL FROM phase.

So what is OpenSMTPD? It's a general-purpose implementation of the SMTP protocol; it does both the server side and the client side of the protocol. On the server side, it accepts messages from the network or a local socket and inserts them into its queue for relay or delivery. On the client side, it takes a message from the queue and forwards it to another host. It has some other duties that we'll see on the next slide. It doesn't follow the OpenBSD release cycle, but it aligns with it. What that means is that we don't have a release every six months: at some point we had the time to do it, but now there aren't enough changes to justify a release every six months for us. When we do have a release, we wait for the OpenBSD release and publish at about the same time, and OpenBSD carries that same version.

To summarize in a few lines: it accepts messages from a Unix socket or the network; it resolves the addresses it receives in the transaction; it manages a local queue of messages that we are not allowed to lose; it schedules messages for delivery.
It has retry logic for temporary errors that require a retry; it relays messages to other hosts over the network; and it delivers messages locally by executing a mail delivery agent with the user's privileges. A mail delivery agent is any application that can read a message on its standard input and put the content somewhere for the user.

Okay, we have a nice community with very helpful people on IRC and the mailing list. We've been packaged for many systems; we honestly don't keep track of this, mostly because neither Eric nor I really pay attention. We often learn that we've been packaged for a distribution because a maintainer joins the channel and asks us for help with the packaging. That's how I discovered distributions I didn't even know existed, just because they joined the OpenSMTPD community. Something nice is that people are starting to at least try OpenSMTPD whenever they set up a new mail server and experiment with the different solutions; that was not the case two or three years ago. I still go to forums to see what the main complaints from users about the project are, and I now see people suggesting to try Postfix and OpenSMTPD as well; this kind of recommendation has increased over the last two years. We benefit from simplicity: whenever a user experimenting with MTAs has a very simple use case and we perfectly fit that use case, they tend to choose OpenSMTPD just because the configuration file is so easy. People are lazy, so that's why it works. And sometimes people miss a feature and go back to another MTA, but that's okay for us, because we prefer to have people coming to OpenSMTPD because they are satisfied with what they get, rather than trying to retain them at any cost and having them swamp us with feature requests.

So it's been a while: the last talk was given by Eric in 2013 at AsiaBSDCon.
That was when we announced our first production release, and we have not given an official talk since. We had sponsorship and worked on OpenSMTPD full-time back then, which was comfortable because we could dive into anything, however complex, knowing that we would have the time to finish it. I honestly can't list everything we did, but there have been thousands of commits in huge areas, refactors that would never have taken place if we did not know that we had at least a month of free time to work on them.

To provide a bit of context: Eric and I were working on a scalable MTA for an ESP. An ESP is an email service provider, a company sending emails for good and bad people. So we were dealing with a lot of traffic. They had a very complex infrastructure based on Postfix, and they contacted me because they wanted to know, first, if we could simplify the infrastructure, and second, if we could scale to their current volume. What they had was about 30 Postfix servers, and not for performance reasons: it was just that the setup was so complex that the easiest way was to split it into many instances. We did a lot of back-and-forth to see what they needed, and we ended up with a first proof of concept using OpenSMTPD which could replace their configuration with a simple 20-line configuration file, which was a huge improvement for them, and we proved that we could scale to the same volume as the 30 Postfix instances with one OpenSMTPD instance. But we had to optimize and fix a lot of things, because we were not production-ready when the sponsorship began, and we were kind of scared that they would cut it, so we had to fix things live. Four months later, the people operating the infrastructure were confident enough to start replacing the instances with OpenSMTPD.
That was quite nice for us, because we had our first real case of a large-volume setup sending mail to just about everyone on the internet, so we hit all the cases of broken RFC implementations. After a year and a half, we decided that a general-purpose MTA, whether OpenSMTPD, Postfix, or Sendmail, was not the proper tool for their job, which is not to accept a lot of mail but rather to route a lot of mail to the internet. I'll skip some of the reasons, but there's a lot of overhead in trying not to lose mail and in being atomic and committing inside the queue, when the actual use case doesn't even need it. So we had to make a choice with Eric: either we bent OpenSMTPD to fit their use case so we could keep the sponsorship going, which was not dishonest but not really in the project's best interest, or we just told them, okay, let's end the sponsorship, we write a custom tool for you, and we keep OpenSMTPD a general-purpose MTA. That's what we did. It was not even a debate: we had a discussion on the phone for five minutes and we both agreed that the best interest of the project was to end the sponsorship and keep OpenSMTPD unimpacted by their design choices.

During the sponsored development, we ran OpenSMTPD in a very high-volume environment, which made us hit possibly every kind of bug we could imagine, from bottlenecks to servers not completely respecting the RFC, or having kind of artistic interpretations of the RFC. That was nice because, due to the high volume, bugs that would never trigger in our own tests with the open-source community would trigger in minutes. We would deploy a version, it could break after five minutes, we would start debugging, and it was fixed. We had to optimize pretty much everything, from disk space to disk usage to CPU to memory usage, et cetera. And what was really, really nice is that the sponsor completely respected the deal, which was that they would not interfere in how we worked on OpenSMTPD.
They basically told us: make something that works. I don't think anyone besides Eric and me even looked at the code; they just trusted us to deploy it and to fix it if something was wrong. And since SMTP is such a nice protocol with respect to failures, we could just let things fail, not for too long, but long enough to think about the proper solution, implement it, and push it. Everything happened in our main branch, so we did not have a fork: everything fixed for them was fixed for the community, which was really nice.

Finally, after this sponsorship we had a long quiet period, because it was very, very intense to work full-time on the project, and it's very hard to go back to working just a few hours here and there, knowing that something that should have taken five days will take many weeks because you just don't have the time to spend on it. So it took a bit of time, it was kind of frustrating, but we also needed to do something unrelated for a change. After a while we resumed work, because there are many things that are not part of their use case that are very interesting to us and that we can only do in OpenSMTPD, which is a very nice testbed for experiments. To summarize: we wrote that big MTA that's completely not open source and not general-purpose, and many of the ideas were brought back to OpenSMTPD; some of them are meaningful to us, and some make no sense outside of their specific use case. So the knowledge from that other MTA still benefits OpenSMTPD anyway.

This is something someone sent me; I like it: the goal of making email work again. I tend to read a lot on the internet from people discouraging everyone from installing their own mail server, because it's so hard and you should let Yahoo, Gmail, or Microsoft run it.
And while I do believe that not everyone on earth should install a mail server, I think giving all this power to three or four big companies is not a good thing, and if the reason is that installing a mail server is complex, we should just make it simple.

So, in my opinion, the design we have today is quite good; it evolved a lot in the last few years. It was not perfect, and it's still not perfect, but it's the design I would use if I started today. I don't own the copyright to this picture, but I'll ask the fine guy who made it. With the experience of writing that other MTA, and Eric has also written a third one if I recall correctly, we now have enough feedback to know what was wrong in the way we initially thought about mail servers. If we started today, I think I'd go with the same design. I'd change many, many things, many APIs, but the design, the way we split the processes and everything, I would keep the way it is today. It's an evolution; it came from many incremental developments, and we still have many things we want to improve, but today it's mostly about code patterns and APIs.

We had a security audit; I think many people have heard about it. I had a friend who worked at a major security company and asked me if he could do an audit over a few months. They did an audit which was, to me, very impressive, and they found many, many bugs; most of them eventually turned into, at worst, a denial of service on OpenBSD, thanks to both the design and the security mitigation mechanisms that are part of OpenBSD. So it clearly saved us from catastrophe. We have fixed all the critical issues, and we still have a backlog of small improvements to harden against these kinds of attacks that we are still working on.
Since then we have also improved further with new things introduced in OpenBSD. Theo will have talked about pledge, and I'll have a couple of slides about it too, as it's a good way to improve a program's design.

OpenSMTPD is a multi-process daemon, and this is a ps output. As you see, we have three users and seven processes, two of which have bad names, but this is going to change soon. This is privilege separation in action: every process has a very well-defined set of tasks to achieve, and this has been refined over the years, and refined further when we introduced pledge, which exposed the layering violations here and there. What I like about the design, and why I say I would keep it, is that each of these processes has a very, very simple set of tasks that makes sense for that process. The lookup process only does lookup things, the queue process only does queue things, the scheduler only does scheduling things, and there's no bypassing this: there's no access to the queue by any process that's not the queue process, et cetera. That was not always the case; this is the result of evolving toward a better design.

Except for the parent, there is no privileged process. We still have to have one privileged process, because we bind the privileged ports, we do system authentication, and when we do deliveries we do them with the privileges of the end user, so we have to be able to drop privileges to that user; we can't get rid of it. But we have some code paths that bypass the privileged process: for instance, if you use, say, MySQL authentication, it doesn't need to go up to the root process, so it will bypass it and never hit the privileged process. Except for lookup and the parent, all processes run chrooted.
We can't chroot lookup because it needs access to files like resolv.conf, or possibly a configuration file for MySQL or anything. One thing that we did not have at first, but which I introduced a few years ago, was to separate the queue privileges, so that if a process facing the network is compromised, it is not able to destroy files in the queue or open files in the queue. So we have a very tightly restricted set of privileges. All communication is done through IPC, and we don't share memory between processes. We used to, many, many years ago, and we made an effort to get rid of that code and only use the imsg framework. And processes are all pledged, chrooted by default, and re-exec'd, as we'll see.

Most OpenSMTPD core features sit behind an API: OpenSMTPD only goes through a very tiny interface to access any kind of core feature, be it the scheduler or the queue. That allows us to test the code, it allows people to write new backends for a subsystem, and it allows this to happen without our contribution. Someone could write, say, a pure PostgreSQL queue and run it without having us add the dependency to anything. I'll explain how it works. This is the table backend; the table is the mechanism we use for all kinds of lookups. You have to implement the config, open, update, close, and lookup operations, and then you can do your own lookups against your own backend. This slide is not really readable, but it doesn't matter; the idea is to show that we have something called OpenSMTPD-extras which ships experiments built on these APIs. There are table backends doing lookups over Redis, over PostgreSQL, over SQLite. As you see, each has a main function, because we build standalone executables, each in its own memory space. And the API is not really tricky; it's kind of simple. We have the same for the queue.
So you can technically write a queue that doesn't use the file system at all: all you have to do is implement the ten or so primitives that manage the storage of envelopes and messages.

We don't know where we'll implement our next bug. I honestly don't think I can write code that is bug-free. So the idea is to take this into account, assume that the code will be broken, and turn that into a known problem. How do we do this? Look at a process, say the pony process, which handles relaying for OpenSMTPD. It faces the network, so it's a serious attack surface for OpenSMTPD. We will just assume that an attacker will manage to corrupt everything in it and take control of the process. What can we do to make this a known problem? This is where we start dropping privileges, chrooting it, and restricting what messages it can send to other processes, so it cannot ask them to do dangerous things on its behalf. The idea is to really turn a remote code execution into a known problem. It will not be something that we enjoy, but it will be something that does not turn into a nightmare. We prefer a crash to a remote code execution or privilege escalation: if a code path can detect that something is just not normal and abort, that is the way we do it. We'd rather have someone tell us "oh, OpenSMTPD crashed, here's the stack trace", and we fix it and it's gone, than have someone tell us "oh, my server was fully compromised", and, well. So we try to raise the bar to make attacks as expensive as possible for the attackers.

An example of this: we use the imsg framework, which allows message passing between OpenSMTPD processes, and we wrote an interface on top of it which adds types and checks over the messages. The imsg framework lets you pass a structure or anything; it doesn't really verify much about it. You can add your own checks, but it doesn't do a lot of verification by itself.
We added a layer on top so that we serialize the data we put in the imsg, it gets deserialized on the other end, and anything that doesn't deserialize correctly just triggers a fatal. This is an example: on the sending side, we create a message, we add an identifier to it, we close the message and send it. On the other side, we try to unpack it: if there's something left after the get of the ID, or if something is missing when we call the end function, it aborts; if we get more or less data than expected, we abort, and that's it. So it's now kind of tricky to corrupt the imsg stream.

Theo introduced the pledge system call, which many of you probably know by now. The idea is to classify system calls into categories and allow a process to restrict which categories it may use from a given point of execution. Any process violating its pledge is just killed, in a way that you can debug. The general idea, for OpenBSD daemons in general, is that you have a lot of setup at daemon startup, then an event loop which only uses a very limited subset of system calls. So you leave all system calls enabled so the daemon can properly set up, then, right before entering the event loop, you say: no, from now on you can only do stdio, for memory allocation and that kind of stuff. It becomes a pattern that's really easy to follow and easy to develop with. And what is nice about pledge, in my opinion, is that it lets you check that your assumptions about what your program does are correct. We tend to use libraries, and we often assume the library is doing something; pledge tends to prove your assumption right or wrong, because if you did not allow what the library actually does, it will just abort your process.
And it can expose a layering violation: you're doing something in a process and the pledge list seems just a bit wrong. Take the scheduler: if I had to add file-system pledges to that process, something would be wrong. And if I don't add them and the process touches the file system, it gets killed. So it lets you refactor the code to match your expectations. We adopted pledge very early, thanks to Theo, who came to me and kindly asked me to do the change. We pledged most of OpenSMTPD over a few nights, but we initially did this with very, very permissive pledges, and we started reducing them as we refactored the code. Many people see pledge as a security feature, and this is how most people talk about it to me, because you can prevent a shellcode from executing something by having the proper pledge. But in my opinion it's really a quality feature, because it lets you see that something is wrong with the design, and, if you prepare your pledges beforehand, you will see that something is wrong when you start adding system calls to your process that make the daemon crash. Nowadays we have quite tight pledges, and we even have pledges that differ depending on the code path we're taking, because when we know that we will no longer need something, we can restrict further.

Also, OpenBSD provides out-of-the-box ASLR and randomized malloc. So every time you run OpenSMTPD, you get a different memory layout, and as children make malloc calls, they start diverging in their allocations, okay? But with privilege separation, the parent process forks its many children, and they all inherit the memory layout of the parent except for those malloc divergences. And that was not a bug; that was how we thought it should be. Then, during the hackathon in Cambridge, Theo called Eric and me into the corridor and asked if we could just re-exec the processes in OpenSMTPD after fork, so that we get a new memory layout.
So this randomizes everything again: the global structures that were inherited through fork are no longer inherited; they are overwritten with the freshly randomized memory layout of the re-executed program. It avoids some possible attacks and makes things more random. I thought it would be quite tricky, and it fell at a very wrong time for me, but Eric managed to do it really fast, which is quite impressive. He just came to me with a small diff, and he had done everything, so that was nice. Basically, the parent process starts and does the bootstrap like it did before, but instead of the children continuing with the inherited configuration, each child re-executes itself with a special option so it can resume knowing which process it has to become. Now only the file descriptors that are required for the IPC are inherited. And I think I heard Theo ask us to possibly re-execute processes at runtime now and then, so that might happen; I don't know when, but it might, although it's not an easy deal.

I won't stay long on this, but this slide shows ASLR and randomized malloc on OpenBSD, with every address changing between runs. You have the same with fork, and it shows that some allocations, say a malloc done before the fork, retain the same address in the child. When you add the exec, it overwrites the copied process memory, so the child now has a completely different layout from the parent.

I originally wanted to go through all the components of OpenSMTPD, because we did so much, but I had made so many slides that I had to axe most of it. We can have a chat about this afterwards, so I'll just talk about today's things.
Code quality has been the most important part for us in the last few years, because we were running quite critical infrastructure, so we had to be confident that nothing broke; and the same code goes to the community, so if something is broken we get way too many mails. We want to be quite sure that the code we release is okay. Some errors are easy to test: memory exhaustion is easy, file-descriptor exhaustion is easy, disk space too. Others are hard. An OpenBSD user sent me a mail a while ago because he had a bug which was just not reproducible for me: he was in the middle of a session, he suspended his laptop, and in some weird condition he triggered the crash of the day, the fatal of the day. These kinds of bugs are really hard for us to reproduce. Usually, during development, we test by having the code force the error condition, seeing how it breaks, and reverting that before the commit. Sometimes it's much harder and not obvious, so we can't just do this and have to find other ways.

Eric came up with a kind of scripting language, smtpscript, which lets you script an SMTP session, which is nice; it's not tied to OpenSMTPD, you can use it with other implementations. It lets us test that the SMTP server side has not regressed. We used it a lot before; we use it less today because it's not the area we work on most now. We rely a lot on code review: we send changes to the mailing list and exchange them between ourselves. We use static analysis tools like Coverity and the Clang static analyzer. And we have another mechanism, which I will explain in a few minutes. These are the kinds of bugs caught by Coverity and the Clang analyzer: they just evade human reading, or at least mine. Oops, oops, oops.

So, I don't know if you guys recall this. We came up with a special branch in OpenSMTPD, inspired by Netflix and its Chaos Monkey.
I don't know if everyone knows what it is, but basically the idea is that your server infrastructure is supposed to be so reliable that you can shut down servers at random and it should not be a problem. So they have this tool called Chaos Monkey which just shuts down servers at random. I thought it would be nice to have the same thing for SMTP, because most error cases are supposed to yield temporary failures for the end users. There are errors that are meant to be fatal, and you can't avoid those, but they usually happen in the setup phase. Most errors during runtime are supposed to just result in a temporary failure, so the other MTA will try again. So most system calls should be okay to fail and result in a temporary failure: let's have some of them fail at random, since this should not be an issue. I introduced this special branch with random read/write failures, lookup failures, and memory allocation failures, I added some latency in imsg processing just to see how it would go, and we started flooding the monkey branch, okay? It did not go well. It took me a full day to fix all the ways the daemon could explode with this little change. A lot of the error code paths led to a fatal, because at the beginning we assumed that was the best way to handle them, but in many cases it was too harsh for that particular error. So what we did was, first, a full audit of the fatal calls, just to check where each one was: in the setup phase or at runtime. Once we had eliminated a few, we resumed the monkey runs to see if it would break again. We turned a few fatal calls into temporary failures, ran the branch, waited for it to explode, fixed it, and tried again. You're done when you can run the flooding for hours on this branch without issues. So it's not rocket science. I wrote a better implementation later, but this was the one I could screenshot easily.
Basically I add this monkey_return that returns a randomized error, and we call it at specific places. And this is the kind of bug it spots: one-liner bugs that you would never see, because they are almost always in the error code path, since you're triggering errors that never happen. What happened here is that we didn't have this return, and the switch fell through to a fatal() call.

We also use Twitter as a community fuzzing tool. We don't do this often, but we do it at least once a year: we just make a call for people to flood a specific instance we have. The idea is that this puts a lot of pressure on the instance we run, and it lets us test things in a way that's less sane than what we do ourselves on our own machines, because you get a wide range of clients connecting to you. Some people write their own scripts, good or bad; some are using TLS, some are running IPv6; some have fast or slow connections, and they don't all send you the same mess. So you have just absolute chaos coming at the server in an infinite loop for hours, and it's quite good because it lets us spot regressions quite fast. We don't do this too often, because even when you tell the community to stop flooding, they will just continue. So we tend to do it only when we have a major change in the SMTP layer or the MTA layer and we can let the instance run for days without an issue.

So, our plans for the future. We have many, many plans; I will not go through all of them, it's just not doable. We have tickets that have been open for over three years because we just don't have the manpower to handle them. The first project is to improve the configuration file. This is the current default configuration file, which is four lines; it's the one shipped with OpenBSD by default. It accepts mail for your machine and lets you relay mail outside, but it's not open for the outside to mail any of you. It's a well-known format.
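From memory, the default `/etc/mail/smtpd.conf` shipped with OpenBSD at the time was approximately this (four directive lines):

```
listen on lo0

table aliases file:/etc/mail/aliases

accept for local alias <aliases> deliver to mbox
accept from local for any relay
```

Local mail is delivered to mboxes after alias expansion, local clients may relay anywhere, and nothing listens on an external interface.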
It's basically what was done 10 years ago, with minor updates along the way. It's very simple, and most people come to OpenSMTPD because of that file. Some come because of the design, et cetera, but most people just see a four-line config versus a hundred-line config and pick this one. It reads almost as plain English: it sets up aliases and local delivery, and that doesn't require too much thinking. And you can achieve much more complex configuration files with not so many more lines. You can actually write one from scratch without editing an example file. Well, it takes a bit of getting used to, but you don't need a book to manage your own config. We're not going to go through this; it's just to show you a production file, the configuration I use for my own server, with TLS, auth, multiple IPs, multiple listeners, DKIM, primary domains, backup domains, and a list of what I call the shithole, where I put people that annoy me so I don't receive their email anymore. And, well, this is 15 lines. So you can actually do pretty complex stuff easily.

But there's a flaw with this. It comes from an old assumption that one-line rules would be awesome, that the best solution to all problems was to have everything fit on one line. But there's a difference between the condition and the action, which I made red and green so you can spot it: one part is handled when the mail enters the system, and one part is handled when the mail is delivered. Making them fit on one line turns this into a kind of atomic rule, and you are no longer allowed to change it later, because SMTPD would not know which rule it had matched for you, since it's no longer the same rule. I won't spend much time on this, because we could discuss it for a while, but it's a tricky issue. It's related to SMTP being transactional, and the way we thought about this was just not right.
It works, but it requires a lot of kludges, which today prevents us from making much progress, because we always have to work around them. So the idea is that we should just accept that these are two separate things and split them into two separate concepts, an action and a rule: you match the rule when the mail enters the system, and you match the action when you actually try to do something with the mail. This simple indirection solves many, many of the problems listed here, and it's been a quite requested feature. We have a very bad issue today: since we matched a rule when we accepted the message, we can no longer change the action afterwards. Say you update your configuration file while envelopes are in the queue: the envelopes that were already accepted will just go with the old rule, no matter that you changed your configuration. There is no way to solve this with the grammar that exists, and this split is the way to solve it. We have already done most of the work with Eric to switch to this model. We don't intend to release it soon; we intend to skip the upcoming OpenBSD release because it's too close for us to get it fully tested. But we are almost 90% done with it. It has simplified so many layers that it just shows the old model was not the correct one for the configuration file.

As I said, OpenSMTPD has been used in high-volume environments. The transfer layer is quite good, but it has grown complex, because we kept facing new issues with unintuitive solutions, which piled up until we now have a complex layer that works but is a nightmare to work on. So we're going to clean that up. We have DANE, which has been requested quite a few times. I had a patch for this; it was working but not committable, because I had just hacked something together to see if it would work. So I need to bring it back to life and commit it.
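A sketch of what the rule/action split described above could look like in the configuration file (illustrative only; the exact syntax was still being worked out at the time, and the names are made up):

```
# actions: what happens when the mail is actually delivered
action "local_mail" mbox alias <aliases>
action "outbound"   relay

# rules: matched once, when the mail enters the system
match for local action "local_mail"
match from local for any action "outbound"
```

Because a queued envelope only records the name of the action it matched, the action's definition can be changed in the configuration and reapplied at delivery time, which the old atomic one-line rules could not allow.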
We have other features coming in the lookup process, but we'll discuss this later.

A quick word on OpenSSL. Reyk wrote the RSA privsep engine for OpenSMTPD when the Heartbleed bug came out. It moves the dangerous part outside of the process facing the network. People are now asking for ECDSA support, so we have to do the same work, which is just not trivial. We're going to be working on it, but it requires being in the proper mindset to dig into OpenSSL code. I have a dream to actually kill direct OpenSSL support in OpenSMTPD, because having to write this instead of this is just not right to me. I'd prefer to have the simplest possible code for the TLS part. That would not mean you couldn't use OpenSMTPD with OpenSSL, just that we would have a layer to abstract it: we would have to bring a libtls wrapper on top of OpenSSL. This is not doable today without hurting our community. I often ask on the mailing list and on Twitter who would be affected if we did that, and it's still too many people. The main reason is not just to promote LibreSSL, which would be nice, but that's not my main reason. The main reason is that we have ifdefs kind of everywhere to work around special distros that have disabled this or that option, and because we had two or three cases where a patch-level release managed to slip in an API change which broke the build for us. Suddenly we were swamped with mail from people telling us we broke something, when we hadn't changed any code, and we still had to find a workaround. That's the kind of thing that pissed me off.

And filters, which is what I will be ending on. The filter API is something many people have been asking for. It would allow altering the ongoing session, injecting data, injecting recipients, et cetera. People want it really badly; I keep getting messages about it, and I know it's a top-priority project. But the problem is not the interface; the problem is how you plug the interface in.
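To give an idea of why libtls is attractive for the TLS part, the whole client side of a connection is roughly this (a sketch against the libtls API, error reporting trimmed; it needs to be linked with -ltls, so take it as illustrative):

```c
#include <tls.h>

int
tls_hello(const char *host, const char *port)
{
	struct tls_config *cfg;
	struct tls *ctx;
	int ret = -1;

	if (tls_init() == -1)
		return -1;
	if ((cfg = tls_config_new()) == NULL)
		return -1;
	if ((ctx = tls_client()) == NULL)
		goto done;
	if (tls_configure(ctx, cfg) == -1)
		goto done;
	if (tls_connect(ctx, host, port) == -1)
		goto done;
	/* the handshake is lazy; force it so the peer is verified */
	if (tls_handshake(ctx) == -1)
		goto done;
	tls_close(ctx);
	ret = 0;
done:
	if (ctx != NULL)
		tls_free(ctx);
	tls_config_free(cfg);
	return ret;
}
```

Compare this with the dozens of lines of context, BIO and verification setup the equivalent direct OpenSSL code requires; that difference is the "this instead of this" on the slide.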
So it's not that we don't want to use milter or the like; it's that even if we wanted to, there are things we want in this API, and we still get to do the plumbing. Among the many things we want: filters running in different memory spaces, as different users, chrooted, and able to interfere with any user input or output. We have pretty much all of this, but in the current design there is something unfixable, so we decided not to keep hacking on it but to find a proper solution. The problem is mainly that everything is entangled in the SMTP state machine, so whenever we try to do something in filters we break something else, and then you need everyone to start looking at how to fix it, which is not easy. So our plan is to go from this. Antoine Jacoutot has been poking us about this every few days for months. So we will be introducing smtpfd, a separate daemon, basically a kind of SMTP proxy that does a man-in-the-middle on the SMTP session. It allows us to reuse all the code we already wrote for filters, but with two separate state machines instead of trying to have one spaghetti machine. Basically it receives raw lines, talks a small protocol, establishes a connection back to OpenSMTPD, and OpenSMTPD has a layer acting as a client back to it. We'll have a talk about it, because Eric is likely going to have to do a talk about this, so I won't go into too much detail. But this is not just an idea: we have a working smtpfd. We can't release it right now, obviously, because it has not had enough testing; it has had zero testing. But we will release it for testing shortly, and our target is OpenBSD 6.3. So Eric just learned that he's going to be doing a talk about it. And smtpfd is not tied to OpenSMTPD, so you can technically plug it into something else. So finally, we have many small projects coming.
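As I understand the design from the talk, the resulting architecture looks roughly like this (my own diagram, not a slide from the talk):

```
                filter chain (separate processes,
                own users, own chroots)
                        |
client --SMTP--> [ smtpfd ] --SMTP--> [ smtpd ]
                 state machine #1     state machine #2
```

Each daemon keeps its own complete SMTP state machine, and filters hang off the proxy, so filter bugs and filter experiments never touch the state machine inside smtpd itself.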
Theo wants an SPF feature, I want a tool to automatically handle TLS certificates at startup, and we are playing with SMTP extensions to make some experiments. So I can discuss all this with you.

So, how to help? Take the code, spot bugs, report them, contribute. Write new features. Help close problem reports, because we have many, and many are clearly not very technical; they just require your time. Donate to the OpenBSD Foundation so we can have hackathons. And you can sponsor development of features, or find sponsors for development of features. So that's it. We're still hiring, by the way. And that's all. Are there any questions?

A very fast one: UUCP support, please? Ah. I'll do it for you.

Thank you for the talk. Just a small remark: don't use green and red on your slides, some people cannot see that. Well, I'm okay, but for the others.

Do you plan on integrating the filtering in relayd, or would it be sensible to do something like that? You basically plug it between the client and the SMTPD server, and the facility is already there, so maybe you don't need another daemon, I don't know. Sorry, I did not understand the question at first: you thought the filtering could be done in relayd? Maybe. We're talking with Reyk about it, but I don't know if we want to clutter relayd with SMTP-specific filtering. And the filtering daemon is really not complex; what's complex is having both entangled, having two SMTP engines entangled. The smtpfd code is really, really simple.

Any more questions? About the SPF feature: do you think this will be a filter, or is that the place where it should be? It's a common demand. I think it can be handled with the SPF feature that Theo wants me to write, but why not also make it a filter? It's not really hard to have an SPF filter. There was another one.
Just about smtpfd: do you plan to have it on both incoming and outgoing mail in SMTPD? Well, initially I thought we needed both. Then Eric made a very clever comment: when you're accepting mail for the outgoing route, it's still incoming into an SMTP transaction. So you can have only the incoming filtering and it will apply to outgoing mail too. It's kind of counterintuitive, but yeah, it works. Okay, if there are no more questions, I can say merci beaucoup.