support, based on TraceTogether. The New South Wales contact tracing teams were like, no, no, no, our grandparents did contact tracing without technology, therefore we will too. The fact that it takes several days, and the fact that that means infected people get to infect more people, didn't apparently occur to them. As a result, New South Wales and most of Australia had much, much longer lockdowns than Singapore experienced.

There are also the anti-everything, sort of anarchist-type people, who don't like TraceTogether, contact tracing, masks, hand washing, vaccines, anything at all, because an authority proposed it; they will oppose it rather than rationally evaluating whether the idea was a good one. This group exists, it's real enough; they're the driver for most of the market for alternative facts. Their existence limits public health policy responses and therefore limits public health outcomes, which has shown itself to be terribly dangerous for the populations of the US and the UK in particular.

A bit closer to home is the sort of techno-libertarian ideal that, hey, we can get rid of centralized government by putting everything into apps and decentralized systems. And on this occasion this has infected Apple. Without going all the way into it: for TraceTogether to work quickly, efficiently and well on iPhones would have needed a change from Apple. Apple did initially cooperate on preparing such a change, but sort of halfway along suddenly decided that actually they much preferred to hate governments, and instead produced the Exposure Notification system, which is specifically designed to not be useful for contact tracing. The only thing it does is allow a person who has been near someone who has published the fact that they were infected to learn that they might have been exposed. But it provides, and is designed to make it impossible to provide, any information to contact tracers.
So it's a great anti-authority thing, but it turns out to be not true that a bunch of people with apps on their phones can replace the functions of a contact tracing team in a health authority. Surprise, surprise; see also my slide.

So the question is to strike a balance. The approach that was taken in Singapore, and I think it's wise, was to decentralize the proximity logging: the proximity information that's detected stays on your phone or token until or unless you're diagnosed positive. If it's not needed, then after 25 days it just gets deleted. However, they did decide to centralize the contact information, to allow contact tracers and ambulances and others to contact the person quickly and effectively. Notice that they centralized all contact information, not just the contact information for people who tested positive. It is in fact that difference that my proposal addresses.

The other thing that happened was that a bit later on they realized it was desirable to also record ID numbers. The reason for that was not well explained, but it's tied up with what happens when you have either a blood sample or a nasal swab tested for SARS-CoV-2, for COVID. The sample is placed in a sealed bag, which has a sticker on it with your name, your date of birth and your ID number, and a barcode of the same information. It does not have on it the serial number from TraceTogether. The identifiers that flow through the health protocols are your name, your date of birth and your ID number. So what putting the ID number into TraceTogether made possible was faster connecting of health test results with the activities of contact tracing teams. They could contact affected people, their suspected contacts, faster, get them isolated faster, reduce the amount of infection, reduce the amount of economic impact. Some people get a bit upset about this and think of it as, you know, scope creep.
It's like, hey, we agreed to this and now you've taken this other thing. That's true as far as it goes, but it's also the case that rational data minimization means you start with the smallest thing you need, and you only go back and ask for the next thing when you have a clear reason for using it, as is the case here. The fact that the scope expanded isn't by itself evidence of a problem. In fact, in the particular case of TraceTogether, every step was carefully justified and carefully controlled.

So what we're left with is this centralized database which contains the name, the phone number, the ID number and the TraceTogether identifier for something like 90% of Singapore's population. That's a pretty scary thing. Ideally it shouldn't exist. It exposes two risks. One is an insider abuse risk: in theory, some sort of criminal inside government could get their hands on the database and use it as part of some scheme to unlawfully track individuals. And the other is a breach risk: every time a database exists, there's a risk that an outsider will get their hands on it. If we eliminate, or at least drastically change the form of, that central database, we can reduce both of these risks, and that's what I'm going to propose.

The basic observation is that contact tracers only ever access a tiny fraction of the contact and identity databases; it's less than 10%. If that was not the case, if they needed to access all of it at some point, then this approach wouldn't work. But the fact that throughout the life of the program the contact tracers will only ever access a small fraction of the data means there's a possibility to introduce a different way of thinking about the problem. And that is to insert an honest broker, which receives the registration information but doesn't hand it over to the authorities unless they're willing to state on the record that they need it for contact tracing purposes.
And then when they do that, it will notify the person whose details were accessed, and it will also periodically, perhaps daily, publish access statistics. Having an honest broker in place doesn't forcibly prevent insider abuse, but it does introduce some involuntary transparency. That is, an insider who is thinking of abusing now knows that the abuse is going to be visible, visible to the person whose data they're accessing, or to many people if they access many people's data. So that tends to disincentivize the act. It's about discouraging potential abusers by changing their motivations in the first place: if the fact that they're going to be caught is part of the calculus, then they might not do it. It also, as a side effect, necessarily shapes the database into a form that's harder for outsiders to breach. I'll get into how in a moment. You wouldn't pay the cost to do this if you weren't building an honest broker, but if you are, then this is a handy side effect.

So what are we going to use? We're going to use something that the Free Software Foundation calls treacherous computing. It's usually called trusted computing, or the Trusted Platform Module (TPM). I feel the FSF is sometimes a bit, you know, excited with the language, but I think this time they've got it right. The whole point of TPMs was to limit your phone's or your PC's ability to work for you, so that an immensely powerful billion-dollar corporation could decide what you could do with your phone, which is bad. However, we can use exactly the same technology to limit what a government can do with your data on their computers, on behalf of the population, which I would suggest is excellent. Concretely, I was talking of course about Netflix.
So Netflix will talk to the TPM in your phone or your PC, and they will decide whether or not to send the stream you want to watch to your computer based upon whether the TPM can determine that you're running a player that they trust. This is why Firefox has EME in it: so it can play Netflix, specifically. Netflix will not send the stream if the trusted code isn't running; they use the TPM to check that, where it's available.

So let's turn that around. Let's have the database live in a computer that contains a TPM, obviously with government cooperation, and then decline to send data to the government computer unless it can prove that it's running the same broker code that has been published. Granted, you are relying on the app to perform the authentication and the withholding on your behalf. However, the broker can be authenticated independently of the app. That is, third-party experts can directly query the broker online and verify that it really is running the same code that the government body has published. And then the code can be examined and its function explored.

So, how does all this work? Oh dear, I don't know if you can see the diagram properly; the lines are all a bit thin. Here is the app at the top left; the broker is at the top right. The idea is that the app sends a registration message to the broker at the time that you register, and perhaps more frequently. That includes an ephemeral TraceTogether identifier, the phone number encrypted with a key that only the health authority can read, not the broker, and also the ID number encrypted with a key that only the health authority can read, not the broker. So the database stays in the broker, but the broker does not have the keys; even if the broker is breached, the data is meaningless. In the event that a contact tracer is willing to state on the record that they need to identify a specific individual as a potential contact of an affected person, they send a request to the broker.
The request is to identify a particular ID, and they also provide a reason, a message that has to go to the user. What they get back is the encrypted phone number and the encrypted ID number; the contact tracers, people within the health authority, have the key to decrypt that. The broker immediately notifies the app and includes the reason message. So the fact of the health authority having accessed the user's data is something that the user becomes aware of immediately, using the Android or iOS notification facility, or a third-party mechanism if you're using a non-Google-services Android. And then similarly the broker can, through a variety of mechanisms, publish aggregate stats each day to say, hey, today the contact tracers looked at 10,000 people's records, or, for some reason, one million. These are accountability aids that are normally unavailable, because the data is normally just in the possession of an IT team, whereas here it's in the possession of a broker whose software we can examine and whose behavior can be tightly limited.

So the development team's procedure is: create and publish the source code for the broker; this is just a simple service running in Python or something. Then create and publish a system image to run inside a secure enclave. Microsoft, Google, and Amazon all provide the ability to have virtual servers that run inside enclaves with encrypted RAM, and this is the key: build a simple service in Python that runs inside encrypted RAM, build a system image that contains the broker, and publish that. That can be examined by third parties along with the code of the broker. Put the image hash into the app build configuration, and then build and release the app. Because of the way the Trusted Platform Module works, the app can now put itself in the same situation that Netflix is in with respect to your phone or your PC.
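That simple Python service can be sketched in a few lines. To be clear, this is my own illustration, not GovTech's or TraceTogether's actual code; the message shapes and names are assumptions, and the ciphertexts are treated as opaque bytes precisely because the broker, by design, cannot decrypt them.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Record:
    enc_phone: bytes       # encrypted to the health authority's key; opaque here
    enc_id_number: bytes   # likewise opaque to the broker
    last_seen: float

@dataclass
class Broker:
    records: dict = field(default_factory=dict)
    notifications: list = field(default_factory=list)  # (tt_id, reason) to push
    accesses_today: int = 0

    def register(self, tt_id, enc_phone, enc_id_number, now=None):
        """Store (or replace) one person's opaque registration blobs."""
        self.records[tt_id] = Record(enc_phone, enc_id_number,
                                     time.time() if now is None else now)

    def request(self, tt_id, reason):
        """A contact tracer asks, on the record, for one person's details.

        The broker releases only the ciphertexts, queues an immediate
        notification to the user (including the stated reason), and counts
        the access for the published daily statistics.
        """
        rec = self.records.get(tt_id)
        if rec is None:
            return None
        self.notifications.append((tt_id, reason))  # user learns immediately
        self.accesses_today += 1
        return rec.enc_phone, rec.enc_id_number

    def daily_stats(self):
        """Publish and reset the day's aggregate access count."""
        published = {"records_accessed": self.accesses_today}
        self.accesses_today = 0
        return published
```

Everything the authorities receive is still encrypted to their own key; what the broker adds is the on-the-record reason, the instant notification, and the running count.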
The app connects to the broker and demands proof that the broker is running the code whose system image has a particular hash. If the TPM can't provide that proof, or if the proof fails to verify, then the app does not provide the registration information in the first place. So this whole thing does depend on the app; you are still trusting the dev team. There's a whole question about whether the code you're running and the code that's published are doing the same thing, and that's a large, interesting problem that I won't get into, other than to say that there has been progress on it during the last two years. But in particular, the broker can be tested directly, without relying on the app, by anybody with the curl command-line utility.

This gives rise to a novel disaster recovery procedure. The database must be kept in encrypted RAM, which sounds hard, but it's not, because the apps already need to contact a central service daily to get new BlueTrace ephemeral identifiers anyway. The BlueTrace mechanism works by using a new identifier every 15 minutes. The app doesn't have the means to generate those identifiers; it has to contact the backend once a day to get them. So you could easily put that within the broker, and have the app re-supply its registration data as part of its daily poll. Which means if there's a disaster and the server disappears for some reason, okay, you're blind for 24 hours, but as apps connect to refetch their new ephemeral IDs, they also replace their contact data. This means that the recovery is automatic in the event of a total loss.

This also drastically simplifies the database. There's no persistent storage; everything is stored in the RAM inside a small number of secure enclaves at hosting providers. Therefore there's no need for key management for persistent storage, which is a whole area of complexity.
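The daily poll could look roughly like this. A sketch under my own assumptions: real BlueTrace TempIDs are derived by encrypting a user ID and validity window under a backend key, whereas here I simply issue random tokens, and the function names are illustrative.

```python
import secrets

IDS_PER_DAY = 24 * 60 // 15  # one ephemeral ID per 15-minute slot = 96

def daily_poll(broker_records, tt_id, registration_blob):
    """Handle one app's daily check-in.

    Re-storing the registration blob on every poll is what makes recovery
    automatic: after a total loss of the in-RAM database, the broker is
    blind for at most 24 hours, then fully repopulated as apps check in.
    """
    broker_records[tt_id] = registration_blob  # replace; latest update wins
    # Hand back a day's worth of fresh 15-minute ephemeral identifiers.
    return [secrets.token_hex(16) for _ in range(IDS_PER_DAY)]
```

The design choice is that the poll the apps must make anyway, to stay able to broadcast, doubles as the replication mechanism.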
There's no need for storage consensus at all; the nature of the data is such that the latest update wins. You don't need to keep track of password change attempts or anything like that. Therefore the redundancy design is simpler, and you don't need DBAs at all. It's not just that you don't have to hire them; you also don't have to trust them to do things securely. You just don't have a DBA anymore. It's not a conventional way to architect the application, but a whole category of problems, security and operational, just disappears.

Likewise, broker code evolution: if you need to make a new version of the broker, great, you just deploy it and update the apps. The apps talk to the new broker, and initially the broker has no data. That's fine; within 24 hours the broker is fully populated, because as apps have contacted it to fetch their ephemeral IDs, they've also supplied the contact data. Once all the updates have been taken up, you can just delete the old version of the broker, and its data goes away.

Likewise, data purge: if a user hasn't collected new ephemeral identifiers for 25 days, then they're already not transmitting, because they've got nothing to transmit. Therefore the broker can delete the data automatically at that point without requiring user action. It's not deleting proximity data at this point; it's deleting the contact and identity data. If it hasn't heard from the phone for 25 days, then fine, just throw away the registration information. Even if you fail to delete your account before ceasing to use the app, your phone dies, it's stolen, or whatever, your data just disappears. This doesn't require special operations by a human being or by an operations team; it can be built in a few lines of code. The broker just throws everything away after 25 days of not hearing from you. It also means that cleanup of persistent copies is never required, because of course there are no persistent copies.
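The 25-day purge really is only a few lines. As a sketch (the record layout is my own assumption, matching the daily-poll idea above):

```python
RETENTION_SECONDS = 25 * 24 * 3600  # 25 days, matching the proximity-data window

def purge_stale(records, now):
    """Drop registration data for any phone not heard from in 25 days.

    `records` maps a TraceTogether ID to (registration_blob, last_seen).
    Returns how many entries were deleted. No operator action is needed and
    there are no persistent copies to chase: the data simply stops existing.
    """
    stale = [tt_id for tt_id, (_, last_seen) in records.items()
             if now - last_seen > RETENTION_SECONDS]
    for tt_id in stale:
        del records[tt_id]
    return len(stale)
```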
On accountability: users know straightaway if their data is accessed, because the broker alerts their phone immediately. If you're getting an alert that says your data has been accessed, and then you don't get a phone call or a text from the contact tracers telling you why, then you might reasonably start asking questions. If many people start getting this sort of thing and start asking each other, then perhaps journalists get involved and the alarm is raised. The publishing of aggregate data is actually only an additional protection, not the primary protection mechanism; useful, but not critical.

Non-problems; I'm running out of time. There's a problem with the fact that, even though the content of the RAM is encrypted, a malicious service provider could monitor the bus lines between the CPU and the RAM and work out which pages of RAM are being accessed. Moxie Marlinspike of Signal has demonstrated that, in the case of his contact matching for Signal, this presents a threat, and so he's implemented something called oblivious RAM, which means that no matter what operations occur, someone who has hooked up a device to monitor the bus doesn't actually learn anything from the visible patterns. I'm not suggesting that risk doesn't apply here. Incidentally, a lot of this idea comes from Marlinspike's work for Signal.

Now, outstanding problems: by a happy irony, I got sick with COVID in the week that I had planned to actually write the prototype. That was ten days or two weeks ago, and therefore I haven't actually written my prototype. I've studied a lot of code, I've designed in detail, but I haven't written a line of code. So that's a fairly important outstanding problem. Clearly, there are side-channel attacks. Granted, the data is encrypted, and the risk is quite small, but there are theoretical risks. Once again, Marlinspike has addressed these with the use of LFENCE and retpolines. It does mean a change of language.
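The flavour of that access-pattern problem can be shown with the crudest possible countermeasure. This is my own illustration, not Marlinspike's actual ORAM construction: instead of touching only the requested record, touch every record, so an observer on the memory bus sees the same pattern whichever record was wanted.

```python
import hmac

def oblivious_lookup(records, wanted_key):
    """Fetch one record while reading every record.

    A naive O(n) stand-in for oblivious RAM: the memory-access pattern is
    the same for every query, so monitoring which pages are touched reveals
    nothing about which entry was requested. Real ORAM achieves this with
    polylogarithmic overhead instead of a full scan.
    """
    result = None
    for key, value in records:                        # always scan the whole table
        if hmac.compare_digest(key, wanted_key):      # constant-time comparison
            result = value                            # no early exit either way
    return result
```

A production version of anything like this would live in C or Rust, where constant-time behaviour can actually be controlled, which is the "change of language" just mentioned.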
And I'm pretty sure doing this stuff in Python would be a performance disaster; he's working in C, or Rust perhaps. There remains a problem with the MOH (Ministry of Health) keys being disclosed or used without authorization. But I'd point out a couple of things. One, it's much, much easier to protect a key than it is to protect an entire database; in fact, you would typically have a hardware security module within the MOH environment for exactly that purpose. And two, of course, abuse would be detected. It's the same thing: if an adversary gets control of the key, or of the security module that contains the key, and begins extracting lots of data from the broker, then all of the affected people will know about it in real time and will start to make noise about it, and the intruder knows this. So this also tends to disincentivize the intrusion in the first place. It's not a perfect solution, but it's a much smaller risk than it looks. My only real next step is to implement a prototype. And that's, in fact, the end of my presentation.

Yeah, there's always echo coming from me when I talk; I don't know why, I'm sorry about that. I'll have a quick look for questions. There are no questions. How appalling. Really, I've explained the idea so thoroughly that no one has any questions at all. Well, we have about six minutes, I think, before Thomas has to come on; if anyone does have some questions, do feel free to ask.

Yes, it is a bit philosophical, and perhaps I should point out a piece of the background, which is that, firstly, the GovTech team that built TraceTogether was headed by a former privacy regulator. So the idea of excluding the ID card information, and indeed initially excluding phone numbers, was driven by the team. They were not in the category of the Scandinavian country that was going gung-ho for total information awareness. And so I pointed out that there was a way to reduce still further the exposure of this database.
And he indicated this was an interesting idea, so that guided me to at least make it concrete and propose it. I don't anticipate that they'll implement it anytime soon, because of course they're winding it down; they're preparing to decommission it. But there is an expectation that TraceTogether will be used in future epidemics, or indeed in a subsequent wave of this one. We've seen from the epidemics of the 20th century: it's two years of medical and political struggle, a third year of relative quiet because everything's under control, and then in the fourth winter infections go crazy, and in two or three of the epidemics of the 20th century more than half the deaths were in the fourth year. So TraceTogether may well be, indeed hopefully is, decommissioned quite soon, but it's not impossible that it will be re-implemented sometime in the next couple of years. At any event, there will be other epidemics. And so I suspect that the team will be receptive to a technological means to still further reduce their risk. That's not to say they'll actually do it; there are always priorities and costs, but I will certainly be putting it forward, and it seems it'll get a reasonable reception.

And so your point is that you don't need a secure enclave. Fair enough. I certainly haven't suggested that the secure enclave is the only way to go; I merely point out that it has certain additional strengths. But that's a fair point, and I would give some thought to Monero's approach. Further comments or questions, please put them forward.

In any event, we are looking for Thomas and Sankalpa to hopefully join us in a few minutes. Thomas and Sankalpa, are you in the room? You don't seem to be. All right, I will introduce them on spec and hopefully they will arrive, or maybe it's recorded. Oh, I beg your pardon, it is recorded, but they will be in the room. So: Thomas Steenbergen and Sankalpa Menon.
Thomas is the head of Open Source at HERE Technologies, an open location platform company enabling people, enterprises and cities to harness the power of location; a steering member of the European chapter of the TODO Group; and co-founder of the OpenChain Reference Tooling. Here he is. We have a formatting problem. Sankalpa is a lead software engineer at HERE Technologies with almost a decade's experience in open source compliance. Starting as an embedded developer, he has worked with MNCs; his experience covers open source software compliance processes, tools and team management, and he holds a postgraduate degree in science. Okay. They are going to talk to us about improving the security and licensing of your software using the OSS Review Toolkit. So give me a moment to pop in the link. Okay, here we go.

Hello, I'm going to be talking about improving the security and licensing of your software using the OSS Review Toolkit. My name is Sankalpa Menon, and I'm doing this talk together with Thomas Steenbergen. In this presentation, we will show you how we answer the question: which software do you rely on? This question is important as we see more and more supply chain attacks that exploit high-impact vulnerabilities. Besides supply chain attacks, we also see sustainability issues, where a single person is the maintainer for a package used by thousands of others. One of the solutions we adopted is to create a software bill of materials (SBOM) for our software, which you can think of as the ingredient list on your Coca-Cola bottle. Software bills of materials are awesome. As an ingredient list for your software, they provide you with a lot of the things you need for compliance. They can also help you draft unified approaches to security in your software supply chain. And if you're buying software, they can help you know which vulnerabilities are in the software that you're about to buy.
That said, it may be difficult to draft an SBOM, as software is often complex and not all the package metadata is available. You might also see several false alarms for vulnerabilities if the SBOM includes dependencies that you may not even be using. It may also be very time-intensive to trace down all the components and to train your developers. Also, opponents of SBOMs have raised questions about the effectiveness of SBOMs in preventing cybersecurity attacks. That said, we believe SBOMs are a step towards transparency and can help anyone identify and address risks.

One of the tools that you can use to generate a CycloneDX or SPDX SBOM for your software is the OSS Review Toolkit, or ORT for short, which is an open source project at the Linux Foundation. Besides being able to generate SBOMs, it has many other useful features, which is why we use it and contribute to it. For example, it has a very powerful policy engine that allows you to write licensing, security or engineering standards checks for your software. Say, for example, there's a particular license that you don't want to see included in your software: you can have ORT flag it for you. You can also use it to check for security vulnerabilities, or to comply with your license obligations: it has built-in support for multiple scanners that you can use to scan the source code of your project or its dependencies for copyrights and licenses. One of those obligations may be that you have to provide a source code bundle; well, you can use ORT to generate one for you. Overall, ORT has a modular design that makes it easy to integrate and customize it to your needs.

Let's now see ORT in action with a little demo. In this demo, we'll show you how you can manually trigger an OSS Review Toolkit scan in GitLab, and how you can integrate it into a GitLab pipeline file so that whenever you make a code change to your project, an ORT scan will get executed.
For those who are new to the OSS Review Toolkit, this is the main code repository. The readme of the project shows what the various ORT components do and how you can install it on your machine. To use ORT in GitLab, we recommend you use this repository. Following the ORT-for-GitLab installation instructions, we mirror the GitHub repository in our GitLab account, and it looks like this. For this demo, we are going to scan a fork of the mime-types repository. Once you have ORT for GitLab set up in your local mirror, click Pipelines under CI/CD and click the button Run Pipeline. Once you click it, you will see a page like this. To save some time for this demo, I already prefilled the values. To do an ORT scan, you have to fill in the first five values; the others are optional. They are the software name, software version, code repository type, code repository URL and the code revision that you would like to scan. Once the values are filled in, scroll all the way down and click on Run Pipeline. I already did this, and here you see what you get when your scan is completed.

To see the scan results, click on the ORT scan. Here you see the logs of the scan, and the scan results can be found under job artifacts. Here you see the various reports generated by an ORT scan, for example the web app report. This report contains all the necessary SBOM information. The first tab contains the license and the package information, the revision, and all the files that have been scanned. The second tab contains the packages, the versions and the license information. You can view it in the Tree tab as well, and when you expand it, it looks like this.

If you would like ORT to be executed whenever you make a code change to your GitLab code repository, simply add ORT to your .gitlab-ci.yml file. Add the include statement as shown here. Be sure to update the value for project to the location where you have mirrored the ORT for GitLab code repository in your GitLab account.
Next, add an ORT scan job as shown here. In this case, we've configured the ORT scan to be executed in the test stage, and this pipeline defines three stages: build, test and lint. Next, we've configured it to do two retries. This is a generic option that GitLab has for every job; we do this so that, in case there is a network issue when trying to retrieve a code repository for a dependency, it will do another run of the ORT scan. This will hopefully help us get more successful pipelines. Next, we define some variables. You can actually omit these and ORT for GitLab will use default values, but in this case we added them for demo purposes, just to show what you can do. So here we set the software name for the project; this is a useful option in case the software name of the project is different from what is defined in the code. Next, we can define the software version, and here we define the code repository to be scanned. Finally, we set the option that ORT has to allow dynamic versions to true. Basically, what it does is allow scanning of projects even if the package manager requires a lock file and the lock file is missing. Next, we set the license scanning report. For people familiar with GitLab, this works exactly the same way as GitLab's own license scanning mechanism, but in this case, instead of using GitLab's built-in license scanner, we're basically using the results from an ORT scan.

So let's look at how such a pipeline looks when it's executed. Here you see it: the build, the test. The dependencies get installed, then they get tested, and here you see an ORT scan being executed. Next, you can open this file by right-clicking it. You'll see the logs, and for the results you can click the browse button, then click on ORT results, and then you see the various results files. Now this might seem like a lot of clicks, and it is.
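Pulled together, the pipeline configuration described above might look roughly like this. This is a sketch only: the exact include path, job name, and variable names depend on the version of ORT for GitLab you mirror, so treat all of them as illustrative rather than authoritative.

```yaml
# .gitlab-ci.yml (sketch; adjust `project` to your own mirror of ORT for GitLab)
include:
  - project: 'your-group/ort-gitlab'   # hypothetical mirror location
    file: '/ci/ort.yml'                # hypothetical include path

stages: [build, test, lint]

ort-scan:
  stage: test
  retry: 2                             # re-run on e.g. transient network failures
  variables:
    SW_NAME: "mime-types"              # software name, if it differs from the code
    SW_VERSION: "1.0.0"
    VCS_URL: "https://gitlab.com/your-group/mime-types.git"
    ALLOW_DYNAMIC_VERSIONS: "true"     # scan even when a lock file is missing
  artifacts:
    reports:
      # Feed ORT's results into GitLab's license scanning UI instead of
      # GitLab's built-in scanner (path illustrative).
      license_scanning: ort-results/gl-license-scanning-report.json
```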
If you run ORT for GitLab in a merge request, it will actually automatically add a comment to the top of the merge request window, and then you have a simple link directly to the ORT results. Now we will show you how you can use the OSS Review Toolkit to generate a CycloneDX or SPDX SBOM in GitLab. ORT for GitLab generates various reports by default, including CycloneDX and SPDX SBOMs for your software. Here they can be found: bom.cyclonedx.xml, bom.spdx.json and bom.spdx.yaml. The report bom.cyclonedx.xml looks like this, and the report bom.spdx.json looks like this. Thank you for listening to our talk. If you have any questions or interest in contributing, feel free to chat with us on the ORT Slack channel or reach out to us directly.

One of the hottest topics is supply chain security and SBOMs. I've been working on SBOMs for quite a while, so it's really funny that things we recognized years ago are now all of a sudden like, oh, we need to fix this. And I'm like, you can't fix this, because enterprises are using open source from the community, and the community doesn't have the tools, even if they wanted, to tell you what is in there. If the open source community had those tools and were able to produce basically what is in there, they would likely get more adoption by enterprises, and thereby more success. But you see, it's like a chicken-and-egg problem, and we said we need to break this. So this is what motivates me: there was a hard problem which was kind of neglected, and the solutions out there were basically built for lawyers, and then we were like, okay, let's start. I can tell you, we originally thought this was a two-year thingy, where we were like, ah, we can just build this. Yeah, and where are we now?
We started in 2016, so six years in. I can tell you, the interaction between business, technology and legal was a very complicated one. Software developers, and I am a software developer myself by origin, we think in binary, true and false; but for lawyers it's like there's a zero and there's a one and there's lots of decimals in between, and that's how we have to think about it.

That reminds me of Ming, who spoke about his law language, which is for representing contracts and regulations. His introduction to this was a specific court case he was involved in, where he as an engineer was with the lawyers trying to work through all of the contingencies, because we think in terms of threats and vulnerabilities and so forth, and he just kept again and again and again hitting "it depends what the judge says". No, we can't engineer that. They are different worlds. I work in privacy, and I have absolutely the same thing: the way you think to deal with privacy risk is very different to the way you think to deal with access control, for example.

But yeah, I think it's a six-year overnight success, and we're still not done. So what basically happened is, on the enterprise side, we were unhappy, fair enough; we stumbled over a problem, as did other open source program offices. So this tool, it's not that we're selling it; this is basically open source program offices, and also a lot of other people in the community, scratching their own itch: we need something for this, there's nothing on the market, let's just build something together. But as I said, the tricky thing is it's very complicated, and at the beginning people are like, oh, we're just going to do a little project, and then they realize how complicated this is and they stop. This is why you find lots of projects that die in this space. Fortunately enough, we have persevered and we have lots of support. Again, I was talking about the lawyers: there are lots of
lawyers that are supporting us, and lots of the foundations have been helping us: we moved in 2019 to the Linux Foundation, but also the Eclipse Foundation has been great in helping us. So again, it takes some perseverance, and now we're trying to say, hey, we're not there yet, we're still building all the things we need supported. In the end, what we want to do with this tool is build it kind of like, I don't know if people are familiar with VideoLAN, the VLC media player: it's a media player where you can just install it on any platform, toss any file at it, and it will just play it. We're hoping that eventually we can get ORT to the same space, that you can just take any project and just toss it in there. I'll give people the link. It's worth pointing out that the lead developer for VLC is in the room. I love it, I really appreciate that shout-out; VideoLAN is one of my favorite players, and that was kind of also my inspiration, because I think that project started at a French university and also kind of got out of hand. It's an amazing tool; I doubt anyone expected VLC to become a career. What are your motivations in your work, what brings you to this? So the motivation for me is the freedom that open source gives. I personally am a fan of learning new things, learning new capabilities for my career, so it gives a lot of opportunity to contribute, to learn new things, and to implement them at the same time. So yeah, that's my motivation. Yeah, so it's quite funny: I basically was hiring, and I found Sankalpa via basically my open source network. So really, I work with people from Microsoft and GitHub basically on the Seattle side, the US side, and also the Toyotas in Japan or the LGs or the Samsungs in Korea, and the world that I'm part of, the license compliance world, is actually really global and yet a relatively
small world, because there's only a few tens of people that are nuts enough to do this. And yeah, it's funny how you meet people and then you figure out, actually we have the same challenges, can we not work together? So just by coincidence I met a friend of Sankalpa's at a conference, we got chatting, and he pinged me on LinkedIn years later, basically like, Thomas, I like what you're doing, can we chat? And that basically got the ball rolling, and he was like, oh yeah, I know Sankalpa as well. So you see, it's a big world, and there are lots of nasty things about this world, but also lots of good things that happen, where people are still meeting and being able to do great stuff and make this world a little bit better. And it's intriguing that there are overlaps in our motivations as well. Hearing Sankalpa's response, I sort of thought back to my own initial exposure, which was right about the time Linux was created; let's not get into when that was. The fact is that at the time, the only options in open source were 386BSD, which gave rise to FreeBSD, NetBSD and OpenBSD, and Linux, and they both happened at about the same time. But prior to those occurring, if you wanted to use Unix, which I love using, it's an entirely different animal to, say, Windows, you basically had to be working at a corporation or university that could afford an AT&T license, and those were not cheap licenses. I don't think they were $100,000, but they certainly weren't $1; they were sort of tens of thousands. And so it was a world-changing event that Jolitz and Torvalds, in different ways, brought their free-of-charge open source operating systems for running on PCs into the world at about the same time, and that suddenly meant, yeah, I could do stuff that was otherwise impossible. So interestingly I hear more or less similar answers on motivation. And it's funny, because we have now kind of created a platform, a layer that was missing, and now I see other
initiatives that are starting to build on top of this, so they are basically like, oh, hang on, finally we can do other things, because hey, we now have an open source project that does that. And we're like, oh, we never envisioned that. That's okay: yeah, technically you could do that, but that was not how we designed it. But this is the fun thing about open source: you never know how the thing that you put into the world will develop, or what people will do with it. That people are building on top of it is a strong signal that you're doing something worthwhile. Anyway, I think it's time to bring our next speaker on board and go through the same technical dance I just did with yourselves. Thank you both so much; I don't think we had any other questions at this point. You're welcome; I think you've made friends here this evening, thank you for that. So, many thanks, and please stay for the rest of the security and privacy track, there are other interesting speakers. Right, thank you. Harish, would you like to turn your camera on and unmute your microphone when you're ready? Hi Roland, this is Harish. Okay, you've got your microphone and camera going; let me... We have the shortest bio in the entire program: engineer, software, hardware, public policy, technology, ethics in tech usage; first used open and free software in 1986. It's a very concise bio. Harish is modest: he is, along with me, part of the council, the committee, for the Singapore chapter of the Internet Society; he's a tech lead for Red Hat in Asia; he's a fellow member of the Singapore Amateur Radio Transmitters Society; and he has a variety of other involvements in software. Just checking, we're starting at the bottom of the hour, right? Yeah. How are you going with your camera? It's not picking up my virtual camera, so that's why I thought I should record. Carry on. Thank you, Nico, let me grab that question to dump it into the archive. Okay, so while we wait for Harish to battle with the technology:
Corporates who care about licensing are usually using proprietary, paid software? I meant to mention this actually while they were on screen: there are a couple of situations where licensing matters for open source, and a really big one is the Affero GPL. Any SaaS vendor will have a reason for not using software under an Affero license, and I routinely see, in contracts and security reviews from customers, requirements that we as a service provider (this is my day job talking) assert that we do not use anything under an Affero license. So even within a very large open source environment, there are situations where you've got customers, who are themselves corporations, who will impose limits on you about the use of open source software under licenses that create problems. So these are legitimate, very real questions to answer. I've unlocked Harish; I'll make Harish the presenter, because I don't need to present anything else; that should be enough. So Harish, you're unlocked and you are a presenter, the room awaits. There you go, we can see you. And how are you going to present, screen share?
No, I'm going to do video, just video. In that case, I'm on OBS, so it's doing everything from there. In that case we'll make a layout change; we have a layout specifically for that purpose. Ta-da. Alright, it has now passed 30 minutes past the hour, so the floor is yours, take it away. Thank you very much, thank you Roland, and thank you to the team at FOSSASIA. It's always a delight to be here, virtually these last couple of years, but hopefully next year we'll be in person, as we are all hoping, all being well. So today I'm going to be talking about a piece of technology which I think deserves a lot more awareness and rollout and usage, something called WireGuard, and let me walk you through what this is really all about, so that you can have a sense of where I'm coming from. The title of this talk was Securing the World with WireGuard. That's a very noble statement to make, because the world has been secured in many different ways with different technologies, and this is yet another one; you may be wondering why on earth we need yet another one, but let me walk you through the thinking behind this and I feel that you'll understand where I'm coming from as far as this technology is concerned. At the start, let me just point out a bit of history. It's an incomplete history of VPNs, because what I want to do is to showcase the fact that we have a fairly long history of technology in the VPN space; it goes back to about 1990, 1991, and the very first one was something called swIPe, done by a person from Columbia, John Ioannidis, and a bunch of people from AT&T. One of the side remarks that I want to make here is that when someone works on an open source project, the person's name gets associated with it, but when you come from a corporation it's nameless, which is very sad, because it is technology that somebody spent time building. But never mind. So in 1993 that was done; it was not really a production
product or anything of that nature. In 1994 something else was created, called IPsec, and it was done by a gentleman by the name of Wei Xu. He was with the US Naval Academy, and his work was also adopted by the National Research Foundation and the White House to create something from a security point of view; he was the prime lead for IPsec. Following that, in the subsequent years, there were different projects; as you can see, there's a whole long list of projects that have gone through the motions, like for example FreeS/WAN, which was essentially built by John Gilmore in the early days, and that project kind of forked out into multiple different projects: Openswan, Libreswan, strongSwan and so on, but they're all based on the IPsec specs which were designed and crafted by Wei Xu in 1994. In 1997 there was another project called tinc, or TIN-C, whichever way you want to pronounce it; it's based on SSL and TLS (not TLC, that would be a typo). It is actually an interesting project, I must say; I have always wanted to try and deploy it for my purposes, but I never quite got around to doing it well enough, and I think there are some nice ideas behind how it does mesh networking and so on, so I'm kind of keen to explore that in the future. stunnel, PPTP, L2TP, OpenVPN: these all came after, and OpenVPN seems to be one of the more widespread VPN technologies being used around the world for all kinds of stuff. For example, as you can see, I'm from Red Hat, and in Red Hat our VPN access is via OpenVPN. And so it should be: it's built on OpenSSL, with authentication, encryption and all these things built in, and you can do either TCP or UDP and so on. In 2012 there was another project initiated, called Shadowsocks. This was primarily built so that users could bypass the GFW; I'm not going to expand what that means, but those of you who know will understand what I'm trying to say. It is an interesting project, and I haven't had a need
to use it myself, but I think that's a valid use case scenario if you want to bypass certain kinds of restrictions. So, with all these VPNs out there, do they work as advertised? Do they solve the problem of a virtual network that you can connect to securely, where no one can sniff your traffic and try to decode it, and so on and so forth? The answer is yes, you can, and it works extremely well. Definitely; I mean, if it didn't work, all these products out there, all the different VPN solutions including the proprietary ones, would not have been rolled out in the manner they've been rolled out. Now, the challenge with all these existing VPN tools is that setting them up is the hard part, and getting it right on the first try. In other words, from an administrator's perspective it is actually not a trivial thing to do; there are a lot of bells and whistles and tweaks and options and so on to consider to get something going correctly. Some of these VPN solutions, when they fail, should fail closed, because failing open means, forget about it, you did it wrong; when it fails it should be blocking everything. But the challenge is that, as an administrator, this is a lot of work, and you need to have certain kinds of skill sets, you need to go for the necessary training so you understand what the knobs are. From a consumer perspective, that part has been essentially taken care of; so it is a two-part story: you have the server side, the administrator setting it up, and then the user using it. But all of these tools comprise hundreds of thousands of lines of code, and we're not even counting the libraries that they require. So if you discount the libraries, the projects themselves are hundreds of thousands of lines of code. Try to read code of that length from a security point of view; that is such a huge corpus of code, it's a non-trivial exercise, a really very difficult thing to do. Doable,
but it's difficult to do. Now, meanwhile, while all of these VPNs were being set up, in 1995 (that's 27 years ago, for those of you keeping count) in Finland (Finland is where Linux came from, but it is not Linus in this particular case), a gentleman by the name of Tatu Ylönen introduced SSH. I remember first using SSH at the APRICOT conference here in Singapore in 1996, where we were providing a terminal room for people who were participating to check their email and all that; SSH was available there, someone had put it in, and I thought it was just a brilliant piece of technology. Today, SSH is what you would use to securely and trivially access your systems from anywhere. So if I already have SSH to access my systems, why do I need a VPN? Or is SSH a special type of VPN? Quite a few of you attending this conference, and around the world, when you were setting up a server, or a laptop, or whichever system you're setting up, your Raspberry Pi and so on, one of the things that you would normally do is to enable remote access via SSH, whether through port 22 or some other port that you define. You would do that because you want to be able to get to the system remotely at some point in time, and the best way to do that is to set it up so that there is remote access capability, and SSH is the way to do it. Why wouldn't you then set up a VPN to do the same thing instead of SSH? Both give you the same capability: remote access in a secure fashion. Why not a VPN, any of the VPNs that I listed earlier? Instead you're going with SSH; why would that be? It's because of simplicity. All the other VPN technologies are, by design, very complex: all kinds of bells and whistles, all kinds of capabilities they're trying to address and offer. They are not trivial, they are not simple enough to deploy, as an end user kind of thing. So simplicity is a reason why you use SSH today. And another one
that is increasingly important, and the speakers today and yesterday were speaking about this, is auditability of code: making sure of your supply chain of code, your software bill of materials, knowing what is in it. How do I audit the code that I'm going to be deploying, wherever I'm going to be deploying it? The smaller the footprint of the code, the bulk of the code, the fewer lines there are, the better it is for you to assess and understand what needs to be done from an auditability perspective. Auditability is a very important thing, especially so when you're dealing with things like VPNs. You need to be able to audit the code; not necessarily you yourself as an individual, you could get somebody to provide a service that audits the code and then publishes the result and says, this is the level of our audit, these parts of the code passed, these failed, and so on, however it is they manage that. I do ask the question: how many of the proprietary software vendors who provide VPNs have their code audited? I don't know; it's a question I'm asking. If you know, please put it in the chat, but I don't think anyone has. They may claim it, but I wouldn't take that on trust. Now let me switch a little bit. If you have ever had to set up an HTTPS site, the secure version of your website, you know it used to be a very, very painful process. It was a lot of work: there's money you have to pay, you have to get this certificate done, you have to do that signing, and so on and so on. So much so that having an HTTPS site was kind of a privilege, because it cost money and it took a lot of effort. But what happened is that, in one fell swoop, Let's Encrypt completely simplified that setup, and today it is a no-brainer because it is so easy to do. It's essentially, for all intents and purposes, one click away, and you get your SSL certificate done, your web server set up correctly, and off
you go. And today, if I recall correctly, looking at the statistics: before Let's Encrypt was even available, on average about 25 to 30% of websites on the internet had a signed SSL certificate; today it is close to 95%, and the bulk of that is coming from Let's Encrypt, which is a phenomenal impact on safety and security on the internet. I was just visiting the website this morning, and Let's Encrypt says they have 260 million sites using their certificates. That's just mind-blowing, and it's just a fantastic project that needs to be lauded; if you can, please support them by donating to the project, it's an extremely worthwhile project. So what would you say if you could do the same kind of thing, the setting up of a VPN, but with the ease with which you can set up Let's Encrypt for the HTTPS services of your website? If you could do that, would you want to do that? Because once you can do that kind of thing for a VPN, then by default everybody gets to be on a secure network all the time, not only when you select it to be. So here I offer Jason Donenfeld's WireGuard. He's the author of WireGuard; hats off to him, I think he's done a fantastic job. If you want to learn more about how he came to do this and understand his thinking behind it, I would highly encourage you to watch the presentations he has done; he's an extremely passionate speaker with good information to share, so I would highly encourage you to do so. But again: why WireGuard, why not any of the others that we saw earlier? I'm glad you asked. So consider this. These are actually two screenshots from a talk that Jason did at Black Hat in 2018; I just took two screenshots which I thought were interesting and telling. He was highlighting the fact that, in the screenshot on the left, OpenVPN has approximately 120,000 lines of code, and then you have to add OpenSSL to it in order to figure
out and make OpenVPN work. strongSwan requires almost 406,000 lines of code, plus XFRM, and XFRM itself is 119,000 lines of code. SoftEther is 329,000 lines of code. And on this day at Black Hat 2018, he was saying that WireGuard is just about 3,700 lines of code; today, plus or minus, it's about 4,000 lines of code. That's two orders of magnitude smaller than any of these guys: this is in the thousands range, these are in the hundreds of thousands range. It's a huge difference. He put in another slide as well that has three large circles and a tiny little circle; it's like looking at the planets, right, so you have Jupiter and Mars and Venus, and then you have poor little Earth in the corner there, a tiny little guy. And if you look at it from a different perspective, using different terminology, the area of exposure is very large: the IPsec components are almost 420,000 lines of code, SoftEther is almost 330,000 lines of code, likewise OpenVPN, and then you have WireGuard, which is tiny, relatively speaking, two orders of magnitude smaller. This is the URL that you can go to and watch it, but you can just search for "Donenfeld Black Hat 2018", that would be good enough. So WireGuard. Again, it's the simplicity of SSH. What I mean by the simplicity of SSH: it uses the same idea as SSH, where you have a public key and a private key, and when you make a connection the key exchange happens, it's secured, and off you go. Nothing else to do; no secondary checking with an LDAP server (you can, but you don't have to) and so on and so forth. There's a whole bunch of things that SSH provides and has proven to work for all of us; why can't we do the same thing for VPNs? WireGuard is based on UDP, and that is very important, because UDP is essentially connectionless, so you're not running TCP from that perspective, and so you can be in a stealth mode constantly; nobody knows that you're on the network at all. Think
about it; that's actually quite interesting. And there's a backstory to this, which I will leave for you to go and read about: why Jason built WireGuard. The initial project he was working on was something different, and you'll find out very soon why, when you listen to his talks. Every node in a WireGuard network is a peer: I'm a peer to another peer, and we are all peers. There isn't a server, so to speak, like in an SSH environment, where there is a server that you connect to as a client, and another server you connect to as another client, so that you have a server-client relationship. In the case of WireGuard, everybody is a peer. But you might want to sometimes assign a peer to be a controlling server, or controlling peer, because sometimes what you want is one central entity that you control, to be able to dish out all the WireGuard configurations and so on to all the different peers. The design of WireGuard is to not get into that part of the story, in other words the distribution of keys, the distribution of access control: it does not do that, and just keeps things very simple and to the point, letting other people add stuff on top of it. That's what I'm hoping to show you in this talk in a short while. As of Linux kernel version 5.10, WireGuard is built into the kernel already, so today you have the ability to do built-in VPN setups in the Linux kernel, and because it's such a good idea, a lot of people have started building on top of it; there are now userspace implementations of WireGuard for all the other operating systems: for the Mac, for the Windows of the world, and so on, even including mobile phones. So on an Android phone, you can actually easily set up and connect to another WireGuard network. You can also, of course, run it on a Raspberry Pi; you can run your server on a
Raspberry Pi and then use that as a way to dish out different connections to everybody that you want to let in. So what I'll do now is a quick demo, and I know demos are a bad thing to do, but I'm not actually doing a demo per se; I'm going to show you something regarding how I set up a system that's already running. Let me check the right one here, as obviously I'm getting the wrong one... okay, never mind. What I will do now is just... let me just get to that. Alright, so I am going to log in to a system, and I will explain what the system is. Let me just... here. Okay, this is a virtual machine that is running Debian, Debian 11, the Bullseye version, and what I have done is to set it up using the PiVPN scripts. The reason I wanted to show a Debian installation of the PiVPN scripts is because PiVPN is targeted at creating an OpenVPN plus a WireGuard VPN on a Raspberry Pi, but it's a bunch of scripts and it works pretty much anywhere, so I wanted to show that you can do the same thing with a Debian installation. So this Debian that is running on this computer is in a virtual machine sitting on CentOS; as I note here, it's a CentOS Stream 8 system, which is running on an Intel NUC server. So the first VM is a Debian 11, and it's running PiVPN, which is the set of scripts from pivpn.io. It's actually very straightforward; I was very impressed at how quickly I could get the thing moving and set up. So let me just show what is currently running on that system. The command that you would use is a command-line tool, so bear with me on this: pivpn. It lists all the things you can do with this particular tool. I have already set it up previously, so I'm just going to say I want to list all the users of this particular VPN, and it just tells me: these are all the various user names, the client names, the public key of each of the clients, when each was created, and so on. And also, I can, for example,
right now provision a brand-new user. I can add a new user with pivpn add, and let's call it fossasia, and now it's generated a key. That's it. What's the next thing I have to do? I have to provide the configuration information to whoever it is that I'm going to be sharing this out to. One way to do it, and you are going to be able to do it right now if you want to, though I hope that you won't abuse the system: let me just do this, pivpn -qr; I'm going to generate the QR code for this particular client that I've just set up. So typing pivpn -qr, and it's number 12, fossasia, and a QR code shows up; let's make it a bit smaller. So this is the client config: if you scan this in using a tool on your mobile phone, you should be able to connect to my server, and from there get out to the internet. In this particular setup, what I've done is that incoming requests from the internet come into this virtual machine that is sitting on a server, and traffic is then automatically sent out to the internet itself. So this way, anyone anywhere who connects to this VPN server will seem to be coming out of a Singapore IP. There you go, job done; see how easily I can just do this? This is to me phenomenal; it just blew my mind when I saw that this could be done. The other one that I want to show... I think in the interest of time I'm not going to show the Fedora 35 one, which is on the second VM, but what I would like to do is to show how I have connected to this particular PiVPN server. So let me just get into... (We might want to bring the next guy online; you might want to address questions instead.) Okay, that's fine, I will skip this part, because this is just to show how I can use the phone to do the same thing, but suffice to say, I will be more than happy to show that later. Let me just move on from here. One more thing that I would just want to talk
about is that there are a few considerations for why I chose to go down this path. Every WireGuard node is a peer, and a peer-to-peer network is what I'm interested in, but it's sometimes difficult to set all of these things up on a peer-to-peer basis, so you need some kind of a coordination server. The coordination server, I would prefer to run it myself instead of having it run by somebody else, and PiVPN helped me to achieve that. There is another tool, a service provider called Tailscale, which does a phenomenal job; they've done a fantastic job. The challenge I have with it is that while their end-client code is open source, their coordination server, the server that does all this handshaking, is proprietary; it's not open, not yet. I'm one of those people who is prepared to pay for a service if I know that, if at some point the service disappears, I can at least run the service myself. The last one, number five, is ZeroTier, which is another solution that works, but it doesn't implement WireGuard; it does it in a slightly different way. It's actually a very clever solution, and I do use it as well for some of my installations. But these are some of the considerations you have, and with that I say thank you very much, and I'll field the questions after this. Thank you. Cool, thank you Harish. We're now a bit tight for time, so if you don't mind I'll get the questions started. You should be able to see a button at the bottom of your screen; it looks like a pair of headphones. If you click on it, there should be an option to enable your microphone. While he's doing that, hopefully... cool, questions. So I think you actually answered the first one, from Nico, which is: what VPN solutions are there that are easy to set up and open source, apart from SSH? And aha, the answer is WireGuard, so let's move on. I have a particular question about access brokers versus VPNs, rather than about WireGuard specifically. But when you first field one from somebody
else, from Vyve: is WireGuard better than other VPNs? I feel you sort of answered that. Well, you know, it's not a matter of better or worse; they are all good enough. My point here is, like I said, it's a matter of the administrator's perspective, from a setup point of view, not the consumer endpoint perspective; from the consumer perspective, what can you do easily? Easy for me to set up: I feel WireGuard removes all the barriers, like how Let's Encrypt does for setting up HTTPS. There's another thing you need to think about as well: when a business says, hey, connect with me, I don't have logs of the VPN connections, unless you can verify it for yourself, you have to take their word for it. It's as basic as that. So I would say that, even when they say today that nobody can look at the logs of your VPN connections coming from you as a customer, yeah, you don't take it at face value. So I think it's more a matter of... I had the impression, there's reason to believe, that a large number of the VPN providers you'll find are in fact funded and operated by organizations with nefarious intent. I think it's beyond just risk. Yeah, we won't know those things until something happens; that's the thing. But there's certainly evidence floating around that a couple of dozen of the more popular VPN providers are in fact run by one particular government. That's true. And so it's not just taking a bit of a risk; in fact, you are almost certainly shooting yourself in the foot with most of them. Yeah. So my concern... Vittorio, if you can hear me, do please turn your camera on... my concern with the concept of VPNs in the first place... aha, success. Now, Vittorio, your other problem is going to be the microphone: towards the bottom of the window you should see a headphone button; if you click on it you'll get an option to enable your microphone. Oh, I think you might have succeeded. Alright, keep going. Meanwhile I'll ask
Harish my question, though. It's this: there's a critique, largely from the Zero Trust networking crowd, that VPNs expose too much attack surface; they're really indiscriminate. So yes, they are default-closed on the public side, but once you're inside, you're in this sort of hard shell, soft delicious innards security model. I know that the access brokers are not yet anywhere near as mature, but have you given this any thought, is it something you've encountered? Yeah, I think the problem is not easy; there's no easy solution to any of this, really, honestly, there isn't one that I'm aware of. I mean, hats off to you for raising it, because it's a question everybody has to ask, as many times as possible, to get a sense of where your exposures are. Sometimes you are willing to take a certain exposure, and other times, you know, this is not acceptable and then you have to mitigate for it. And so really, not knowing and going in blindly is the problem. Absolutely, always worth being aware. Harish, thank you. I will now move on, and I'll introduce Vittorio briefly and then ask him to carry on. WireGuard over packet radio? Why not, why not, let's do that, but not tonight. Vittorio: an engineer who deals with the internet in all of its aspects: technical, business, social, political; entrepreneur, writer, activist and developer. He currently works for Open-Xchange, makers of Dovecot and PowerDNS and a global leader in free software applications. I'm sure there's one other highlight I wanted to mention; anyway, he coordinates community activity, showing up in the European Parliament and the IETF to argue for an open, safe and decentralized internet. That's sort of why you're here: to talk about digital sovereignty and platform regulation in Europe. But the key relevance to FOSSASIA, which of course is not in Europe, is that legislators and regulators around the world tend to take a steer from European legislators and regulators, and so I'm
interested to hear sort of where you're at. I know that there's been recent progress on the DMA, but the primary interest for this audience is probably the tail end: what it means, or might mean, for Asia. So that's what I'll be paying keen attention to; in the meantime, thank you for joining us, please take it away. Thank you. Sorry, normally I'm based in Italy, but today I am in Washington DC, so I hope the connection is fine. Can you see my slides? I think so, thank you. My apologies, I have to make a layout change so that you have slides in the usual place, one moment, I just uploaded them, sorry. Okay, there we go. So thank you for the introduction. Yes, this is a presentation that I have been giving sometimes in Europe, especially at FOSDEM, but I really wanted to share it with an Asian audience, because of course this could be the example of what could happen in several Asian countries and other countries of the world in terms of regulation of the digital platforms. So I will briefly recap: first of all, why is Europe acting, what's the problem that Europe is trying to address; then which items are coming; and then maybe we can have a discussion and see if we have questions. So, well, who am I? You already introduced me. I'm working for an open source company; possibly people know Dovecot, which is our best-known product and the most widely used IMAP server in the world, or PowerDNS. So let's get into what I call Hotel California. Hotel California is basically the current situation in most online digital services and devices, in which, at least, with the exception of China of course, but in the rest of the world and especially in Europe, we are mostly stuck with the services offered by very few companies. So this is the scenario in financial terms; I have now updated this, but basically it never happened before in the history of the world that we had five tech companies being the five biggest
ones. Now there's been some change, I mean Facebook has become Meta and has lost some value, but the biggest of these companies now is Apple: their market cap is almost three trillion dollars, which is basically France's GDP. So even in European terms, some of these companies are becoming as big as the biggest European countries, and this is a concern, because it implies a lot of power. So for example in the smartphone market there's been a lot of consolidation, and nowadays if you get a smartphone, basically the operating system is made by one of two companies; you can just choose between two. And even the apps, as you see, for example the social and messaging apps, are very consolidated into the hands of basically a single company. And the same with the cloud: the move to the cloud has also been consolidating the internet and the market for internet services a lot. We have some Chinese companies in this space, but again the biggest ones are the American ones, and together they have most of the market, and the market is getting more and more consolidated. And the same for revenues. One of the issues that Europe is trying to address is taxation: there is the perception that these companies do not pay enough taxes in the places where they do business in European countries. This was an analysis that was made on Google revenues, but you could do the same with other companies: these companies tend to invoice their revenues from a different country and then move basically the biggest chunk of the profits to other places, sometimes tax havens, sometimes simply their home country, and so what gets taxed in local countries is really a limited amount, and all governments in Europe are absolutely not happy about this. And then there's more: you do see that this has an effect in terms of wealth transfer
so there is a general concern that the digital market is now basically a way to transfer wealth out of Europe into, well, in this case the US, but it could be China, in general countries that are outside of Europe. And so there is a concern about how we can keep this wealth here, keep the wealth local, keep the growth and the jobs local, and make it so that as Europeans we can use more and more European services, not because we dislike Americans, but because we need to grow our own internet market and services. And when you see this wealth transfer, this is one possible anecdotal indicator, but you see how house values went up maybe 2.5 times in seven years in San Francisco; it's really a sign of movement of wealth across the planet. So this is what I call Hotel California, because these are services where you can check out any time you like, but you can never leave. These are services in which you get in, you start using whatever, WhatsApp, one of the Google services, whatever, and then it's very hard to switch to another one and you are locked in. And there are a number of devices that are used, and basically the assessment in Europe is that these are devices that thwart competition, that hamper competition and impede a fair chance for European companies to compete in these markets. There are several ways to show this, but the example I like to make is the comparison between the original internet services, like email or the web, and the most current services. Email, for example, is a fully compatible, interoperable service, which means that you can get your email address and your email service from any provider, and you will be able to exchange email with any other user of any other provider anywhere in the world. So there's just one email service; you can get it from different providers, but it's just one global service. And this makes it easy for people to offer email services: the standards are open
there are many software implementations, and you can immediately be connected to the rest of the planet, with some issues sometimes, but in general it's very easy to compete on email services if you want to. Well, on the other hand, the newer services were not made this way, so we have what we call siloed services, and instant messaging is a good example, because in theory it works exactly like email; there's not a lot of difference, conceptually and even technically, between email and instant messaging. But the way the market developed is very different, because here, in more recent times, venture capitalists and dreams of conquering the world were involved. These services were developed as walled gardens, so if you get a WhatsApp account you can only exchange messages with other WhatsApp users; you cannot exchange messages with Telegram users, or with Skype users, with Line users, with whatever. And so you are locked in: as a user you have to install each and every application and have an account on each and every service if you need to communicate with people on different services, which is an inconvenience; you have to know which application you are using to talk to whom, and you cannot really move, because if you move you lose your contacts. If someone comes up with a new brilliant application, including an open source one, and maybe it's better than the previous ones, if you move to it then you don't have anyone to talk with. And so this makes it very difficult to switch to a different instant messaging application; it's a big barrier to competition. And also, in general, these are closed deployments, so there's no openness; sometimes the standards are open but the deployment is closed, you are not allowed to interoperate. And there are other tactics used by these companies to establish this domination, and they are now addressed by European regulation. One is bundling. This is very typical: you get one service, you get the mobile operating
system, and you already get the apps from the same maker pre-installed, and they work better than the others because they have better access to the operating system, and they are selected by default and so on. And if you want to use one service you have to use the others, like in some cases the commerce platforms; with Amazon it used to be like that, or in general with Apple: if you want to distribute an app on Apple you have to use their app store, you have to use their payment system and pay a 30% commission. So basically the result is a system that gives control to these companies, and sometimes not just to these companies but to the authorities that control these companies, because of where they are based, as we saw in the Google-Huawei case. And so this is a geopolitical concern for Europe, and in the last few years we've seen activism by the European Commission to address this situation. The last thing that I wanted to mention is that now these companies are building what I call the internet of other people's things, because we are filling our homes with devices: phones, but then also IoT devices, smart TVs, whatever. All these devices now encrypt everything, which is a good thing for privacy of course, but it means that you as a user don't know what's going on; you do not have any way to control the flow of traffic which starts from one of these devices, or from one of the apps on your mobile phone, and goes directly to the cloud, to the server of the maker. And so you have to trust these companies: they could gather any data on you, spy on you if they wanted, and there's no way you can check, evaluate or stop it. Encryption is being used as a form of control, to move the traditional points of control away from the ISPs, the governments that used to block stuff; maybe it's better if they don't do it, but on the other hand now control is fully in the hands of the makers of these devices and services. And so this is also a big
concern now. So in general this is not a problem just about money and competition; there's more than that. The concern about the big platforms is also about other things which are not related to making money: it's really about surveillance and privacy, as I was saying; it's about political power; it's about national security. I have some examples here. For example, when President Trump was censored by Twitter, Merkel, the chancellor of Germany, was concerned; she made a concerned declaration, not because she liked Trump, but because she was concerned about a sitting US president being censored by a private company. So this kind of power struggle is what also concerns the Europeans. And there's all the discussion, for example, about the US Cloud Act, which allows the US to access the information that US companies hold on private citizens anywhere in the world. So this is how Europe has been responding: this concept of digital sovereignty has been developing in the last few years, the last two or three years; it is now generally accepted, but we are not really completely sure what it means, it's still under development. There are two main meanings under the idea of digital sovereignty. One is more like digital autonomy: it's about becoming self-sufficient so that you do not depend on foreign products, whatever may happen in the future, any kind of geopolitical tension, even just one of these companies deciding that they don't want to do business in Europe anymore for whatever reason; you must be able to survive. So even if tomorrow Google became unavailable, the European economy cannot stop, society cannot stop, so you need to have alternatives. This is mostly about developing local know-how, a local industry and economy, and being able to be autonomous. Another part of this concept is really about sovereignty; the autonomy one is more German, the other one is more French. The sovereignty part is about enforcing rules for internet services, so being
able to basically tell these companies what they have to do on a number of public-policy-relevant issues, like content moderation or whatever. Of course there is also taxation, but in general it's about ensuring that you as a country have control of the use of the internet by your citizens and of your internet market. Together these two sides form the European digital sovereignty concept. So where does open source fit into this? Open source is actually the subject of a lot of promotion, at least in words, not always in facts, but in Europe in the last few years we've been multiplying calls for adopting open source, promoting open source in public procurement and so on. And this is really interesting, I think also for several parts of Asia: the open source model is much better and much more fit for Europe than the American or Chinese model of a few very big companies. Europe does not produce very big companies, because Europe is an archipelago of countries; we have 27 countries in the European Union, 26 languages and societies and markets. Yes, this is about the European Union, because we're talking about European Union regulation. So basically, if you want to have this kind of services in Europe, you will get alliances of small companies, mostly open source companies; often in each country you will have a company, and they will exchange services and help. And even relatively bigger companies: my company is German and has 250 people, which for Europe is a pretty big internet company, at least in software, but when we have to go to other European states we employ local partners, and so we have cooperation with local companies. And so in the end this is why open source is also useful to this concept of digital sovereignty. So what do we need to get out of this? It's what I call regulated openness: we're trying to establish a mechanism of cooperation between the European institutions and the open source world and industry, so that the
open source and the open standards provide the technical building blocks that are necessary for services competing with these ones, and we need the regulation to force openness onto them, because the problem is that the dominant platforms, which have dominant market positions, will not open up if they are not forced. And so this is why we need regulation: the regulation in Europe will be aimed at forcing these companies to open up their services to competition, so not to disadvantage them, but to prevent them from taking undue advantage. So this is the list of the regulations, many of the regulations that Europe is working on, because there are really many. The first one is the Digital Services Act, which is about content; I will come to this in a minute. And maybe the most important one is the Digital Markets Act; I will also come to this a little later, but this is about competition. But there are more: now there's discussion starting on a Data Act, which means setting rules so that the data that these companies gather through their platforms can be shared, can be accessed for public policy objectives, and so this will come. There's now a Chips Act, because somehow Europe has decided we need at least a few computer chip factories in Europe; of course, as we know, most of the chips are now produced in three Asian countries, but in any case we want to be able to produce chips locally, because during the Covid crisis the car factories stopped because there were no chips, and this is not something we want to see again. There's a minimum corporate tax: there's an effort to establish a minimum tax everywhere in Europe, so that companies are not tempted into moving to specific states that will let them pay fewer taxes. There's eIDAS, the European identity project, where we're trying to offer open identities to European citizens. There's GAIA-X, which is a funny project, hard to describe; basically it's a sort of private consortium blessed by the
European Commission that is trying to establish common data models, for example common ontologies: if you want to cooperate among different companies in the car sector, for example, you need to understand each other's data, so this is the work they are doing. So there's a lot of work going on, and as I was saying I will focus on the Digital Services Act, which basically updates the old e-commerce directive. All of the world has this kind of law, establishing that intermediaries have limited liability: they are not fully responsible for the content, for example, that they get from users and publish, because otherwise it would be impossible to have user-generated content. This concept is being a little eroded, especially for the so-called very large online platforms, like Facebook of course. There will be strong transparency and accountability requirements, attempts to ensure that they do not close accounts unduly, or that they do close accounts when it's necessary. So these kinds of guarantees are on content, which is now very important in Europe, because there is a lot of sensitivity about political propaganda, especially by foreign countries, now that there's a war; and so there are attempts to make sure that there's a way to control the content which is flowing, which is maybe not the best for free expression, but it is necessary in this environment. So let's come to the Digital Markets Act, which is the most recent one; actually the text is not final yet, it's still to be approved, but the final text was basically agreed at the political level two weeks ago, and as the open source industry we took great pains to get in especially the interoperability requirements. So the idea is that Europe will define gatekeeper platforms, for example those that have a market value over 75 billion euros, so basically the names we've seen before and very few others; it's going to be like 6, 7, 8 companies, mostly American, maybe one European. And the idea is to establish a sort of traditional antitrust instrument to ensure that there can be competition
with these mega services. And so there will be a number of services, this is the list, so basically only these services are covered; if you're doing other services you're not covered, but as you see there are the biggest, the most common online services, including the operating systems, and then social media, marketplaces, search engines and so on; browsers are now covered too. And so whoever is a gatekeeper in these services will be subject to these provisions, and this is the kind of stuff that will be prohibited once this comes into force, which will happen possibly in a couple of years, when all the process is completed. So for example there will be the attempt to stop forced data integration: they will not be able to combine data between different services; they will not be able to require people to buy a second service from them if they have one of the dominant ones, so that's against bundling; for example, they will be required to let users use different identity systems, not just their own, and so on. But the most interesting provisions are the ones on bundling and interoperability, which are the key concerns that I mentioned before. So the idea is, in terms of bundling, and we still have to see the final text, but this more or less should be there, that there will be a prohibition on forcing users to use multiple services: business users have to be able to use, for example, Amazon's marketplace without being forced to use Amazon's delivery service, and so on; this kind of bundling will be prohibited. And in terms of software, when you set up a mobile phone you will have to be asked which search engine you want to use, which email application and so on; the idea is that you favor the choice of different apps by not allowing a default. And then there are the interoperability clauses: unfortunately we lost the interoperability for social media, exactly for the concerns in terms of content control that I mentioned before, but
we got the interoperability for instant messaging. So WhatsApp and the other dominant messengers will be required to open up interfaces and allow third parties to at least interact with an API and exchange messages and so on. And then there's the same for auxiliary services, stuff like identification and payments: they will be required to let you use different login systems, payment systems and so on. This was a great victory; it took a lot of effort by the European open source community, but apparently we got there, though we still have to see the final text. So why is interoperability so important? Because interoperability allows competition. The original internet services are competitive because they are based on interoperability and modularity, so you can separate different services and you can replace individual blocks, and so as a company, a startup, an individual, an open source project, you can make a new block, for example of identity, and plug it in in place of the dominant one, and this enables competition. So this is why we are aiming to get to a system of interoperable apps, in which you just have one; you pick the one which is best for you, so there's competition, maybe one which is more privacy-friendly and is not spying on you, and you're able to communicate with all the other users. So thank you, this is the end of the presentation, and I'm happy to take comments and whatever.
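[Editor's note: the modularity argument in the talk, that interoperable services let you swap out any individual block, can be sketched in code. This is a purely hypothetical illustration; the `Message`, `Messenger` and `Network` names are invented for this sketch and do not come from any real interop API or from the DMA text. Two independent providers agree only on a minimal common contract, so their users can exchange messages and either provider can be replaced.]

```python
from dataclasses import dataclass

# A minimal common message format that all providers agree on.
# Hypothetical, to illustrate interoperability through a shared
# interface, not any real messaging API.
@dataclass
class Message:
    sender: str      # e.g. "alice@provider-a.example"
    recipient: str   # e.g. "bob@provider-b.example"
    body: str

class Messenger:
    """The agreed-on contract: any provider implementing deliver() interoperates."""
    def __init__(self, domain: str):
        self.domain = domain
        self.inboxes: dict[str, list[Message]] = {}

    def deliver(self, msg: Message) -> None:
        self.inboxes.setdefault(msg.recipient, []).append(msg)

class Network:
    """Routes each message to whichever provider serves the recipient's
    domain, the way MX records route email between independent servers."""
    def __init__(self):
        self.providers: dict[str, Messenger] = {}

    def register(self, provider: Messenger) -> None:
        self.providers[provider.domain] = provider

    def send(self, msg: Message) -> None:
        domain = msg.recipient.split("@", 1)[1]
        self.providers[domain].deliver(msg)

# Two independent implementations, analogous to two competing messengers.
net = Network()
net.register(Messenger("provider-a.example"))
net.register(Messenger("provider-b.example"))

net.send(Message("alice@provider-a.example", "bob@provider-b.example", "hi"))
inbox = net.providers["provider-b.example"].inboxes["bob@provider-b.example"]
print(inbox[0].body)  # hi
```

Because both providers depend only on the shared contract, either one can be replaced by a new, perhaps open source, implementation without users on the other side noticing; that replaceability is exactly the competition lever the speaker describes.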
Well, thank you. We have a few minutes and we have a few questions; I have some that I'd like to add, but I'll start with those that Nico has put up. First: how far are decentralized services and platforms like Mastodon affected and considered by the current legislation? Are they just trying to extinguish the fires created by the big players, and making it hard for the next act of innovation? I'm not sure I entirely follow that point, so let me perhaps ask a related one, because I think it's the right question: what are the likely burdens on operators wanting to take advantage of the compelled interop? For example, is it going to be the case that, as with email, a small organization or individual can fling up a server and expect to interoperate with Facebook Messenger? Okay, let's start from the second one. That we don't know, because the technical standards are yet to be defined. There was a lot of discussion; we tried to get into the law the mention of open standards. The ideal scenario would be one where all these dominant services adopt the same standard, so that you can just have a free software implementation of it and talk with everyone. We got a lot of pushback, and these companies especially started to say that if we enable full interoperability we will lose end-to-end encryption, so this will be bad for privacy; so basically the idea behind WhatsApp and the others is that they have to be dominant because they ensure privacy. And this is funny, because we know that we can have full end-to-end encryption, like we have encryption with HTTPS, in an interoperable world. So in the end we will have to see where we get; it's likely that there will be a first stage in which there will just be an API, and this will have some consequences on encryption. By the way, some of the key people that I worked with behind this campaign were the Matrix and Element people, so we know what we are talking about. And the same for the other question, the mention of Mastodon and so on: yes, of course. On Mastodon, I actually had to try to explain what
Mastodon is to the European Commission; it was not particularly easy, but they are smart people, so in the end they understand, though they are not technical of course. So part of this work was actually trying to explain the mechanics of Mastodon to policymakers, including members of the European Parliament. And in the end, of course, that's the alternative we would hope for: if we had got interoperable social media, we would have hoped that ActivityPub would be the standard. We didn't get it, for political reasons, as I said; we hope we will get it, we got a written promise in a clause that in three or five years there will be a review of the law, and maybe then we will get it, but this still has to be defined. We'll skip the second question, which looks to me to be entirely an intra-EU question, interesting as it is, considering the time, and go to the third one: did you see Olaf's talk earlier about considerations for mandating open interfaces? Well, this was part of it. I think the result will be that there will be open interfaces and not open standards; it's unclear how the implementation phase will work. We hope that it will be open, so that there will be space for the community to propose even technical measures to the Commission, but we don't know yet. I think the question was more: how do you evaluate the current awareness of policymakers on the opening up of services? You mentioned trying to explain to politicians what Mastodon was; how terrible is the disconnect between the political objectives that are quite properly being pursued and the technical realities? Do we have the technology that we need, if we wanted to do this? I mean, there is the need to develop, for example, the identity layer, which is the hardest part, because we do not have a global identity layer: how do I identify people with certainty across multiple interoperating instant messaging systems? Maybe using mobile phone numbers. But the identity part I think is in general one of the weakest
parts of the internet architecture, especially the free, open, federated internet architecture. As I said, no, I think the technology is there. The difficult part is that, even with the dominant platforms, it happens that sometimes they have policy objectives and they try to force them through the technology. It should be the opposite: you should have the policy objectives agreed on, and then the technology should follow, to meet the rights of the users and the objectives and the regulation and so on. Give or take reality limitations, of course; you cannot mandate the impossible. I'm having a discussion on a different thing in Italy, in which the government wants to mandate blocking all types of encryption, so you have to explain that sometimes this is not possible. But I'm worried about this, because I do see these dominant companies using it as an excuse: basically, whenever the politicians try to address their dominant positions, they say this is not technically possible. In fact it is; you just need to put in the effort and develop a standard. Maybe it's complex, but we can do it: we went to the moon, so we can develop interoperable instant messaging, we can fix this problem, because we went to the moon. I think it's a fair claim; if anyone thinks that it is impossible to have interoperable instant messaging, let me know, I'd like to continue this. Hear, hear. Thank you very much, I do appreciate the insight into what's happening upstream from the rest of the world, but we're at time, so I'll now introduce our next speaker. Thank you. I know you're in the room, and I've unlocked you and made you presenter; can you please turn on your camera, and then we'll get to work on your microphone. While you are doing battle with that I will briefly introduce you: you're the general manager of Collabora's office division, leading the Collabora Online and Collabora Office projects, and you support customers and partners alongside the talented
team. You also serve as a director of the Document Foundation and have been involved in both ODF and OOXML standardization, with a long history of contribution to open source. Michael will be speaking to us about secure private document collaboration. You've successfully turned on your camera, fantastic, but you're still in headphones-only mode; there's a button down the bottom, there is a headphone... yep, aha, speak to me. You're unmuted; we can hear you now, fantastic. I'll share my screen and we can let rip. Perfect, you can see yourself possibly; can you hear me? Fantastic, Michael, take it away. Fantastic, well, thank you so much for coming. So I'm going to talk a little bit about Collabora Online, just to show you what it is, and then, well, let's see. So what is it? Well, a wonderful, digitally sovereign, on-premise, powerful WYSIWYG document editing suite. You can see some of the stacked-up beauty on the right-hand side there, allowing you to view and collaboratively edit all kinds of documents, pretty much everything that LibreOffice will allow you to edit, brought into your browser, with great interoperability filters for all sorts of crazy things that you might want to see. And our architecture is a bet, I guess, on CPU threads and networking getting cheaper: you know, with AMD Ryzen, threads are getting cheaper and cheaper and there are more and more of them, and that's really encouraging, and we can use those to provide a beautiful experience that's consistent everywhere; and of course the network continues to get faster. So how does it look? Well, we've been doing a whole lot of work on user experience, so these are the latest shots showing you, I guess, one UI option; the UI is heavily configurable. You can see our notebook bar across the top there; hopefully it's got a palette of tools people like. There's also the sidebar, which works better on a 16-by-9 screen, to fit functionality in and give you more space to see a document. So obviously word processing, change tracking,
beautiful stuff; spreadsheets, with all of the formulae and the core engine that you've come to know and love, and what a powerful core engine it is in terms of efficiency of representation, threading, performance of calculation and so on, with some spreadsheet stuff behind it. Then the drawing application, which allows you to do diagramming and drawing and Fontwork and all sorts of fun things, and then presentations as well. One of the interesting, well, unique features here is being able to edit master pages, so that you can actually create a slide deck and make it look like you want it to, put pieces in your slides, rather than being kind of locked out of changing things in your masters; great transition support and so on, and you can see the sidebar there. This is new, I'll talk about this in a minute: the new stuff in 21.11. We released this last year, it's now really shipping, and we'll be shipping a new version, I guess in a month or two, based on another year of interoperability work under the hood in the LibreOffice technology. But anyway, 21.11 brings all sorts of good things. Particularly interesting is the right-to-left support: if you use an Indic or Arabic or Hebrew type language, we can flip your spreadsheet around, your UI around, and hopefully everything feels much more familiar and easy to use and understand. We've done a lot to improve PowerPoint, PPTX filters particularly, and various core functionality there for cropping and mirroring images, adding shadows and interoperable shadow effects; it's really important that your slides look as you expect them to, obviously, and match what the competition has, so glow effects as well. Multi-column text, a silly feature, but it's now there so you don't see broken slides that were using it; bottom-to-top, left-to-right vertical writing for text frames, very very nice. And there's another key feature here, which is this JS sidebar, which is really cool: essentially, instead of rendering the
sidebar as pixels, and in fact this is now true of many dialogs in our latest release, we render all of that on the browser side, so it's all done with CSS and theming and styling, and we can make it look really gorgeous. Actually this is a terribly old picture of a prototype of this, but perhaps you saw it earlier in a more polished form in the screenshot. And this brings several interesting things. It builds on some work by Red Hat to do GTK theming on Linux, so actually on Linux you would be using native GTK widgets for almost everything in the UI now, and I believe there's some Qt stuff as well; but we essentially convert that to JSON and shove it over the wire, so that when things change we send very small deltas and changes. So it's quicker, it's more beautiful, and it allows you to theme it and make it look gorgeous. Some other features we dumped in recently: anchoring of images, so you can very precisely affect the layout of your documents and get that exactly right, which is kind of good; there are lots of different anchor modes, and you can see there again the palette on the side and some of the good things you can do with images. If you select an image we can be recoloring it, adding contours to change how the text wraps around it, and so on; lots and lots of rich, powerful text layout functionality. Another piece that's new is welcoming people to the app, so that when you get there you can see what's new, and explore what you should be playing with and looking at. Another thing that's been asked for by some of our larger government customers is an accessibility checker. This was originally written for the Dutch accessibility standards organization, I believe, and we brought it recently to Online for a German government organization which we're working with, as part of the Phoenix project, which is an interesting bundle of functionality shipping in German government; but anyway, perhaps that'll be announced
The key is, if you're creating documents and handing them to people or publishing them online, it's really important that they're accessible to all your users, so it's important that we can check these things, jump to the issues and fix them up, so we can produce beautiful, accessible outbound documents. Another big thing we're doing is taking that UI, particularly that sidebar UI, and adapting it to a one-handed mobile experience. There are some pretty cool things happening there; you can see some pictures of that in our new release, which is coming next week (it should have been this week, but we like to get these things working). We have this cool new functionality of a one-handed, context-based quick toolbar. Often you want to use the whole screen of your phone, as you can see on the right-hand side there, to see as much of the document as you can, so we provide this fast tool palette at the bottom for the things you would most want to do, and it's now context-sensitive: you see different things for text than for images or charts or whatever you're selecting, which is kind of nice. So in addition to all of the advanced palettes you can access by pressing the properties button, to get a deep dive into the whole wealth of properties you have on things, you're able to do things quickly with one hand as well. One thing I wanted to talk about, which I think is really important to get right and have a good design for, is how we contain things. I'll do it like this, it's not as cool, but anyway. Can you hear me again? Can you see the picture? We can hear you, but we can't see your screen. You can't see my screen? I've lost the button to share it. Oh, hello; let's come back again, stop being presenter somehow. Yeah, I think there might be pixies helping me. Can you see it now? Yes, I suspect someone is clicking a button somewhere. Anyway, here we go. So the key is to secure
this 8 million lines of C++ in the center, and we do lots of things there. We have a great Coverity scan score and other static checking. We have a chroot for each document, so we stick each document in its own container, with a very sparse file system in there: there's no shell, there's nothing, and it's very hard to do anything in there even if you can break out. We also filter system calls, because some system calls have nothing to do with our software and shouldn't be run, so we try to knock out calls that have been historically problematic. Then of course you can put that in virtual machines. But I think one of the key things from a security perspective is that you can put this on premise, so you can control your own networking, your firewall, your VPNs; you can fully lock this thing down and make sure nothing leaves your site without you knowing about it. That multi-layer onion is really important. We actually have another feature that is really quite interesting, called Secure View. One of the problems many businesses have is that they come up with a new product or a new design that they want to get feedback on, but it's really market-sensitive: you don't want your competitors to see it, you don't want them to get wind of what you're doing, and particularly you don't want them to see exactly what it looks like. We have an automotive company as a customer that has this problem, and many other highly secure organizations. So the question really is: how can you keep the stuff secret and still tell people about it so you can get feedback? The answer is Secure View. I don't know if you can see this watermark on the screen; it should say "highly confidential" and so on, and we can put your name into it as well, so that you can share those designs and get feedback whilst being sure that the document never leaves
your server. We send pixels, and they're all watermarked. We do, of course, have to send pixels: realistically it's important that people can see something on the screen, otherwise, well, what are you going to do? Now, there are various other options that try to achieve a similar level of protection. One of these, which is surprisingly popular (our competition does this), is essentially to send the whole document to everyone, even those with the lowest view access: just send the whole document model to the clients. If you render your documents in the client, you have the problem that you can't render the document unless you have it, so F12 can defeat whatever silly policy enforcement or watermarking is put on top. The Microsoft approach is much more rigorous: they encrypt the document, so they send encrypted documents everywhere, and they send the AES keys, or whatever, to the Azure cloud; before those are handed out, your operating system's policy will be enforced and checked. Everything is signed from the bottom up, and they check that you've got the very latest Microsoft Office that will enforce that policy, like not allowing copy and paste, or not allowing print or save-as, this sort of thing. And this works, it's a model that works. The only problem with it, of course, is that you have this massive single point of failure: you put all of the keys for your documents into the Azure cloud, and Microsoft has to sign everything from the bottom up. Not cool, I would argue. Our approach, which allows you to federate and collaborate with other people whilst ensuring the document doesn't leave your premises, is just so much more beautiful: open standards, clean browsers, just a little WebSocket protocol, and you still get that ultra-secure (ah, sorry, my Thunderbird is pinging me) ultra secure
view. So one of the things I'm encouraged about, and I'll just look at some education references here, as I told you I would point out some other people using Collabora Online, is working left and right with some of our partners, because Collabora Online really doesn't work by itself: it has to be integrated into ownCloud, Nextcloud, EGroupware, HiDrive or something else. It's really encouraging to see lots of educational institutions actually deploying this: 66,000 users here, growing quickly; French public sector here; health care and public sector deployments all over the place; some school examples coming up, 33,000 people there doing great stuff; Lille, 67,000. So lots of people around the place using Collabora Online very actively, and we'd love to have you work with us to make it all better for all of those people. So how do you get involved? This is of course the million-dollar question. All of the code is open source; you can go and grab it there and compile it for yourself, for Android or iOS, and of course we run on ChromeOS as well using the Android version. There are all sorts of community resources. We actually have weekly meetings; you can find all of the minutes in the forum, and everyone's welcome to come and get involved: development, asking questions. It's really encouraging to see a number of people in the community interested in doing design and user-experience work, tweaking it and making it more and more beautiful. You can help with websites, and there are lots of SDK examples, so if you're interested in taking this and embedding it into your tool, maybe you just want some thumbnailing of documents so you can get pretty images, Collabora Online provides a really simple way to deploy that, really smoothly. There are almost no dependencies for Collabora Online apart from the LibreOffice kit, or Collabora Office
underneath it, which we bundle into Docker images for you. Actually, one of the even more fun things we have is this tool called Gitpod, which is fantastic. Gitpod allows you, just in your browser, without setting up a development environment or having to do anything horrible, to load the code at the point where it's all pre-built and ready. So you can change something and see your change very, very quickly, without needing a large build tree or lots of effort. Hopefully the barrier to entry there is really, really low, and you can play with it in these various harnesses. There are all sorts of things you can do to configure the UI: hiding menus and toolbars, changing things, fiddling around, adding your own pieces to it. Integration is very simple; it's just a WOPI-like protocol. It's just a GET, where we download the document from your URL, and then we do a POST to send it back again; that's pretty much it. There's one more GET which gives us your username, whether you have access to the file, the avatar links, the permissions you have and so on, but it's kind of trivial, and I believe what comes back shows what you can and can't do. So really it's just three methods to make it work; it's really easy, have a play. And of course the mobile piece is really just reusing the existing mobile UI, our responsive UI for the collaborative editor, natively on the device: we run the server and the client both on the same device, so you can work offline, which is kind of cool. And if you're looking at a really full-featured suite on a tablet, for example, your tablet can behave almost essentially like the desktop app if you like; obviously it has a better touch interface, but you can plug a keyboard in and you have the full application there, and of course it can behave like a mobile phone as well, if you like. It's very, very powerful.
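The three calls just described (one GET for the document, one POST to save it back, plus one more GET for user info and permissions) map onto the WOPI-style host interface. Here is a hedged, in-memory sketch; the operation names follow WOPI's CheckFileInfo/GetFile/PutFile convention, but the exact fields a real integration needs differ:

```python
class ToyWopiHost:
    """In-memory stand-in for the three WOPI-style calls the editor makes."""

    def __init__(self):
        self.files = {"doc1": b"hello world"}

    def check_file_info(self, file_id: str, user: str) -> dict:
        # The extra GET: who the user is and what they may do.
        return {"UserFriendlyName": user, "UserCanWrite": True}

    def get_file(self, file_id: str) -> bytes:
        # The GET: the editor downloads the document from your URL.
        return self.files[file_id]

    def put_file(self, file_id: str, data: bytes) -> None:
        # The POST: the editor sends the edited document back.
        self.files[file_id] = data

host = ToyWopiHost()
doc = host.get_file("doc1")
host.put_file("doc1", doc + b"!")
print(host.get_file("doc1"))  # b'hello world!'
```

A real host would serve these over HTTP from the file-sync product (Nextcloud, ownCloud and so on), but the shape really is just these three methods.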
So what can I say? Here we are: secure, of course; private, so you can control it; and, inevitably, it's pretty, which is important, and we're making it incrementally prettier. That's the other thing we'd love your help with: perhaps you saw something that annoyed you today, a pixel you wanted to move, and you can get involved and do that, submit a change to the CSS or get involved in some JavaScript. It's improving very rapidly; we're doing a huge amount of work and there's much more to come, and I'll say a little bit about what we're working on now. And we have that LibreOffice technology, which is awesome too; let me encourage you to contribute there. There's a lot of cool stuff happening in LibreOffice, it's really easy to get involved in the code and make a difference, and we'd love to have you there. That's the community side and the product side, but really, taking back control of your data is probably a moral imperative. We have an interesting convergence in Europe, and it was really interesting to hear the talk before about data traps: digital sovereignty is the positive framing and the data trap is the negative one. We see MI6 in the UK, which is our spy agency, talking about the dangers of surrendering all of your data, this data trap, and how we can't accept losing our sovereignty as a country; but then you also see the Pirate Party on the other side, in the European Union, talking about the importance of digital sovereignty. They're basically all singing from the same hymn sheet. So my hope is that this is an idea whose time has well and truly come: get out there, get ahead of the curve, deploy it and see what you can get. And there's this really unique feature with Secure View, which I think is great, not just unique: you can have it on premise, control it and be confident in it, but also share, collaborate and federate with other
people's instances, and that's a really useful business tool: being able to share secret information and get feedback and interaction. We're very widely deployed, 80-plus million Docker image downloads, millions of paying users out there, and we're thrilled to be able to turn your support as customers into new features, new functions and great new open source. So thank you very much; that was my talk. Do we have any questions? In fact we do, fantastic. I do like the observation that the spooks, the Pirates and the FOSS communities are now singing from the same hymn book; that probably means we're doing something right, that or catastrophe is imminent, but let's go with the former. So, two questions, and I have a comment, time permitting. From Harish: how interoperable are Collabora, LibreOffice and OpenOffice? He assumes it's very high; what are the possible gotchas? Yeah, that's a good question. Very interoperable. I think one of the tricks with interoperability is to have the same code everywhere, so we have a common code base. Microsoft is quite good at this: they have the same Microsoft Office on lots of machines, and you can share your document and that's great, although it's interesting that their online office is a different code base, and you see that very quickly, because Microsoft Office online can't fully load its own document format. You can't put charts in Word documents, you can't do WYSIWYG editing; there are many features where it just won't load the document, protected forms or this kind of thing. And usually you can export OpenOffice or LibreOffice documents well, so hopefully ODF is everywhere and we should use OpenDocument formats. But I guess one of the great things about having an online, in-browser experience is that you can run it in any browser on any platform, get access to your document, and share that document with other people. So of course an open format is still really important for extracting data and processing
it and migrating it, but from a user perspective you never see that format issue, because, hey, it just works, right? So I think Harish's point was that, as the internal structure is different, you end up with, well, we do see inconsistencies between Microsoft desktop and browser versions, and I think Harish's question was whether there is any of that between Collabora and OpenOffice. So between LibreOffice and OpenOffice: OpenOffice is, I guess, not something I'm an expert on, but it is missing a huge amount, because OpenOffice has barely been evolving. For example, Sparklines: we're implementing Sparklines, it's a great feature, it'll be shipping in the next LibreOffice and the next Collabora Office, it's cool; but OpenOffice doesn't have it, so if you have a Sparkline there's no way it's going to render in OpenOffice. So, OpenOffice to one side, interoperability is a big problem there, and it's mostly down to functionality: you obviously need to interpret the formats that Microsoft has, and LibreOffice and Collabora absolutely do. When it comes down to the code base, Collabora Office is built on LibreOffice; we're huge contributors to LibreOffice, we're out there as the top contributor, and all of our code changes go upstream, so in the limit everything is fully interoperable. We love LibreOffice. That's cool. The other question, a more promotional one: someone who just wants to write a document from time to time finds Google Docs very convenient; how would you promote something like Collabora to a user like this? Of course, if you run a Nextcloud or an ownCloud or this kind of thing, you can get that functionality, and it's very similar. And there are sets of people, I think Open-Xchange is involved in the Phoenix project, providing a big bundle of email and document editing and file sharing, so there are lots of solutions for that. But I think one of the things is that if you want to control your data and be digitally sovereign, you need to have someone in
your country that hosts it, or you do it yourself, and that's quite a high barrier for a casual user. So what I suggest is that, when your good lady asks you to set up a server for her, you can do that, or maybe you go to a Nextcloud provider and buy one of those. But I think the other thing to recognize is the contract with these big majors: you surrender your privacy, you give them all of your documents and all of your email, everything about you and your social graph, and in return you get free document editing. That's not a contract we're interested in. So I think you then need to think about how it's funded, and about properly paying for a service from someone. Thank you very much; this is a good reminder that we're doing pretty well. I spent many years of my life fighting a never-ending feature battle against Microsoft, and you have to have all the features; it's distressing when something is tendered and the tender mentions the latest, most obscure, not-at-all-useful Microsoft feature, just so they get Microsoft Office and not anything else. But actually, when you look at the online experience, we are kicking some backside here in terms of feature function: we do better-fidelity rendering, and we interoperate better with their file formats than they do in their own online suite. So I'm pretty excited about what we have here, and I think we should not be ashamed of it in any way; we should be driving it out into the market as quickly as we can. So you do Microsoft better than Microsoft? Well, it is like this too, isn't it: you have to be a bit humble, and I'm sure there are drop-offs, but there are many features we have that they don't. Many thanks, Michael. I know you're in the room and I've unlocked you; you should be able to turn on your camera. Hooray. Now we have to do the microphone dance: towards the bottom of the screen you should see a blue button with a sort of microphone. Oh, share your screen.
Oh, I haven't made you a presenter, I'm sorry; now you're the presenter, you should be able to share your screen. Hit the plus button at the bottom left. I definitely think you're the presenter; you really should be able to press the plus button and share your screen. Not again. The good news is that, like, 9 out of 10 people have trouble getting in. Oh, hello. Now unmute; yes, now it works. You can hear me? Yes, I can hear you. Okay, so I have to leave the audio like that, and you can make my video full screen, because I can then control everything on my side. Yes, okay, let me do that, one moment. So I will do a brief intro and leave you to it. Kushal is a public interest technologist working at SUNET in Sweden on various open source projects related to security and privacy. He's a CPython core developer and part of the Tor Project core team, a regular speaker at various (is this me, or Roland, or your colleague?) hi Kushal, I think Roland dropped out for a moment; take over, but I can continue, that's not a problem. Yes, not sure if I'm on; nice seeing you. So how are you these days? How's the family doing in corona times? Are you doing better, generally? Yes, we are; I've just not been giving talks for a long time, so I'm slowly getting back into the practice, and just missing the whole physical thing, back in Singapore of course. Maybe next year; well, we'll see, maybe here, maybe there. I hope we have opportunities. Right, we hope for the best, but yeah, there are a lot of challenges in the world, and there's war in Ukraine, and things like that happen. It's bad, but I hope we can set an example by showing how the open source community collaborates and keeps people secure, and this is what you work on a lot: security. So Kushal, may I ask you to introduce yourself? Then I leave the session to you. Okay, we'll do that as part of the talk. Okay, perfect, thank you very much, then I'll let you go ahead. Thank you; I hope I'm ready for the talk. So, a little bit about myself: I'm a public interest technologist at SUNET,
where we provide secure and stable infrastructure to universities and other organizations working in research or higher education, to collaborate nationally and globally. I'm also part of various other free and open source software projects and organizations; for example, I'm a director at the Python Software Foundation and a core developer there, and also part of the Tor project, happily wearing my Tor t-shirt during the talk. For the talk I will go back into history a little bit, for the people who are not into the history of GnuPG, OpenPGP or GPG. Things were started by Phil Zimmermann back in 1991; it's called PGP, Pretty Good Privacy. Then in 1997 we got the OpenPGP working group under the IETF, and there is RFC 4880; you can go and search for it, have a read, and you'll find the corresponding later RFCs building on that part. Then, as part of the GNU project, we have GnuPG, which is, I think, the most common way we all use OpenPGP in daily life: the gpg command, which is shipped in almost every Linux distribution and used in various parts of the security and privacy ecosystem, the whole ecosystem I should say. Now, as I said, I'm a developer; I write a lot of Python, and my projects have often required access to GPG, in other words access to the features provided by GPG, from my Python code. To do so there are a couple of existing modules: one is python-gnupg, the other is GPGME, which also provides libraries you can call from Python. But all of these modules have a way of calling GPG that is actually calling the gpg command: not as a library, but inside these modules they call the executable binary and pass along hundreds of different command-line options, which get used by the gpg binary, and you see the output if you are encrypting or doing whatever kind of operations via these modules. And as GPG itself continuously improves, the command-line options change and new versions come out, and all these modules have to make sure that everything keeps working and catches up; from time to time we saw trouble in that part.
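The wrapper pattern being criticized here, building a gpg command line and running the binary via a subprocess rather than calling a library, looks roughly like this. The flags shown are standard gpg options, but the helper itself is an illustration, not python-gnupg's actual code:

```python
def build_gpg_encrypt_cmd(recipient, infile, outfile):
    """Assemble the argv that a wrapper module would hand to the gpg binary."""
    return ["gpg", "--batch", "--yes", "--encrypt",
            "--recipient", recipient, "--output", outfile, infile]

cmd = build_gpg_encrypt_cmd("alice@example.org", "report.txt", "report.txt.gpg")
print(cmd)
# A wrapper would then run it, e.g. subprocess.run(cmd, check=True),
# and has to parse gpg's textual output; if a future gpg release changes
# its options or output format, the wrapper silently breaks.
```

Calling an OpenPGP library directly, which is what Sequoia enables, avoids both the argv assembly and the fragile output parsing.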
So I was hoping, for a long time, for a better way of accessing OpenPGP from my Python applications. A couple of years back I noticed, thanks to my friend Micah Lee, a project called Sequoia PGP (sequoia-pgp.org). It's a new OpenPGP implementation as a library. This is one very important point: it's an OpenPGP implementation written from scratch as a library, which means anyone else who wants to incorporate OpenPGP can do it. And it's written in a much newer and safer language, Rust. Combining these two opened up a complete new world for developers, to make sure they can use OpenPGP the way they want. I should also mention that the project and the community are very welcoming: when I started talking to the people in Sequoia, I had a lot of questions, both at the Rust level and at the OpenPGP level, and those were answered at every level, from beginner questions to the advanced questions I had. But there was still another part of the problem left, and that is Rust: as I mentioned, I wanted to use OpenPGP not from Rust but from Python. Here comes another project to the rescue: PyO3. PyO3 is written in Rust to let you write Python extensions or Python modules in Rust. This enables us to use all the features written by the Sequoia upstream and then export them as a Python module, so that we can just import a module name and use it the way we want. And this is exactly what my module is all about: johnnycanencrypt. I'll get back to that name later on. As I said, it's a Python module, which means you should be able to install it via pip, but because of the way it gets built (I do not publish any wheels), when you do pip install johnnycanencrypt it will fetch the source tarball from PyPI and build it on your system, and because it's
written in Rust, you will need Rust and the other dependencies to build the Python module; this is standard behavior for many projects where wheels are not provided. Hopefully in future we'll have wheels; we'll see. Now we'll see a couple of code examples on the slides. A question I can see in the chat: is this font size big enough, or should I make it bigger? That question still remains, but I'll just go ahead. At line one we are importing the module johnnycanencrypt, and Python allows us to import a module under a different name; in this case I'm using jce, a short form, because johnnycanencrypt is a pretty long name to type. Then we create a key store, and I can create a new key with a default password (please don't use such small, simple passwords; this is just a demo for the talk), and we ask the module to create Cv25519, that is, a Curve25519 key. The johnnycanencrypt module also enables you to create RSA keys, RSA 2k, and then you can save the public key, and you can save the fingerprint, or print it, just like any normal Python object. Because we are talking about OpenPGP keys, in many cases we want the keys to have an expiry date; in this case we select a date and time three years ahead and specify that all the subkeys will expire on that given date, and this uses the standard Python datetime module to create that expiration. Now, because in many cases we will have existing keys, the module enables us to import any available key. (The module is still in development; I'm not saying it's in an early phase, it's been there for more than a year now, but maybe we can improve the API names.) This particular import method will do the same for public keys or private keys. Now the interesting part: encryption and decryption. We can get any key out of the store using the get_key method, then use it to encrypt some bytes; we're just doing an assertion, and then, using the secret key, we can decrypt the value as well; for the decryption we need the passphrase or password.
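The "three years ahead" expiry mentioned above uses only the standard datetime module; a minimal sketch of just that date arithmetic (the exact johnnycanencrypt API for attaching the expiry to subkeys may differ):

```python
from datetime import datetime, timedelta

def expiry_three_years_from(now: datetime) -> datetime:
    """Pick a subkey expiration date roughly three years ahead
    (365-day years; leap days are ignored for simplicity)."""
    return now + timedelta(days=3 * 365)

print(expiry_three_years_from(datetime(2022, 1, 1)))  # 2024-12-31 00:00:00
```

The resulting datetime is then handed to the key-creation call so all subkeys expire on that date.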
Because this is just one single example out of the documentation (I literally copy-pasted it), you'll see that the get_key call uses the same fingerprint and the same store; I will do a live demo at the end, and there we will use two key stores, to make sure that we encrypt with the public key and then decrypt with the secret key, as separate operations. In the same way we can encrypt and decrypt files too: we provide a particular file and say where the encrypted output will be, and in the same way you can decrypt a file, or a file handler; that's available in the API for the same encryption and decryption operations, just with different method names, decrypt_file and decrypt_filehandler. The third major operation, which I think all of us doing open source development should follow, is providing a way for our users to verify that a release is what we intended, so we generally sign our release tarballs with a key. This is just one example where I am signing a file, producing the signature as a completely separate, detached file. The opposite can also be done, which is verification, and this piece of code is actually taken from the introduction page of the documentation, where I am importing the cert, that is, the key for the Tor project, and then verifying that we have downloaded the Tor Browser tarball. I believe this is one of our major real-life use cases: we download various pieces of software and then verify that each is the correct download using the signature they provide. So in this case we downloaded the key, the actual Tor Browser tarball, and the signature, and at the last line we can call the verify method with a detached signature, and it says true, verified. At this moment, when I managed to reach this part of the project, I thought: okay, I'm done, I can start using it in all of my projects and personal projects, everywhere.
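The sign-then-verify workflow just described (sign a release tarball, ship a detached signature alongside it, verify after download) has the following shape. The sketch uses HMAC from the standard library purely as a stand-in for a real OpenPGP signature: entirely different cryptography, but the same detached-signature workflow:

```python
import hashlib
import hmac

SIGNING_KEY = b"release-signing-key"  # placeholder secret, not an OpenPGP key

def sign_detached(data: bytes) -> bytes:
    """Produce a 'signature' that is kept in a separate file from the data."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).digest()

def verify_detached(data: bytes, sig: bytes) -> bool:
    """Recompute and compare, as a downloader would after fetching both files."""
    return hmac.compare_digest(sign_detached(data), sig)

tarball = b"pretend tor-browser tarball contents"
sig = sign_detached(tarball)
print(verify_detached(tarball, sig))       # True: intact download
print(verify_detached(b"tampered", sig))   # False: altered file fails
```

With real OpenPGP the verifier needs only the signer's public key, not a shared secret, which is exactly why detached signatures work for public software releases.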
But then one other thing struck me: I don't have my secret keys on my computer. The secret keys are on hardware devices, in my case particularly, and that created the next big problem for me. I don't know how many of you know about YubiKeys; in the slide I have an image of various different YubiKeys, different versions, I think from the 5 series. You can use these for multi-factor authentication on different websites, and you can also store your secret OpenPGP keys on these devices. So, as I said in the beginning, we'll do some live coding, and we'll do it using YubiKeys, so that you can see that this Python module will enable you to access the smart card, in this case a YubiKey, from within your project. Let me quickly go back to the camera; I hope all of you can see that I have a YubiKey 5, USB-C I think. I will attach it to my system, and then I'll try to go to the Jupyter notebook. By the way, if you have any queries at this moment, since in the next few minutes I will try to talk about code, feel free to ask in the channel so that we can follow it up, and again I'll try to make the font a little bit bigger, just in case. So in the first line I'm importing the module and creating a new key store at ./store, that is, a directory; if I go to the terminal you can see that in the directory I have two actually empty directories, store and store2, for our demo purposes, and the IPython notebook. (Can you read the terminal here? Just about, but the terminal is very small; we have the information that some people are watching on phone screens, so whatever is possible is great, I guess.) Okay, so I will try not to go to the terminal much; I'll just issue one command at a time so that you can follow. I hope this font size is good. So in the first step we are again importing the module and creating a key store; I'll run it, done. Next I'm going to create a new
key with a password, and this has the name, our name, and the email address, and I'm saying we're going to create a Curve25519 key. I'm going to take the public key out and write it to a file, and also print the fingerprint. So I will execute this; you can see that it took some time, because it had to generate the key, save it in the database, everything, and then it prints the key. johnnycanencrypt actually uses SQLite internally, so the store directory where we created the key store contains a SQLite db with the encrypted secret key. Next I'm also saving the secret key out: I'm exporting the secret key in encrypted form. It is not decrypted, as you can see; I'm not passing any password or anything, I'm just writing it directly to a file. Done. As I said, johnnycanencrypt is written in Rust, and the module itself right now provides a way to call those functions written in Rust directly from Python; that inner module actually lives inside johnnycanencrypt under the same name, and I'm importing it as rjce; the r helps me remember this is the Rust part. And here are some interesting APIs for smart card access. The first function call resets the YubiKey; it took some time, and we did the reset on my YubiKey. So now, if I get the card details, I will see that the key is empty, with the default values: it contains the serial number of the card, the name, the URL, and all the signature slots, and everything is empty at this moment (the signatures are actually returned as the bytes directly). Now I can set a particular name (this is the format we have to provide it in), and by default the YubiKey's admin PIN is 12345678, so I'm going to use that. I'm also going to set a URL, to some random file which does not exist in this case, just for future reference if we want it. That goes through, and then I can say: the key we created, the secret key, please upload it to the smart card. We'll notice that this takes some time; it's doing the work, and it's done. So if I do
the card details now, it contains the same serial number, and it contains the name, the URL and the signature details. And I can use the same key to encrypt a file. Here you can see I'm encrypting a file called m.txt, and the output file name is m.asc; before I do that I need to create that m.txt, so I'll just run the date command into m.txt. (If you can't see the screen that well, it's okay, because it's just the output of the date command.) Coming back to the slides: okay, let's encrypt the file first; it says true, and the file is there. Now, to verify that I can actually decrypt using the YubiKey, I'm going to create a new key store, store2, and I'm going to import only the public key; you can see here that the part I imported is actually a public key. Then I tell this key store to sync with the smart card, that is, to make sure the key store knows which secret keys are available on this smart card, and now we know it's the same fingerprint. Again I'm just taking out the public key, the same one we encrypted with before, just to show that this is the public key, and then I can call decrypt-file-on-card using this public key, with the output going to a decrypted text file. By default the YubiKey will have a normal user PIN of 123456, so I'm going to use that to decrypt, and it's done. So if I go back to the terminal and cat the decrypted file, it's the same timestamp as the encrypted value. That's the major part of my live demo. Now, if I go back to the original slides: why johnnycanencrypt? I talked about one part; the other part I believe it will help to enable is better UX in secured applications, better usability, helping developers to write better code with sane defaults. And that's not something my Python module is doing itself; it is provided by the Sequoia upstream, which has much saner
defaults for everything we do in the key operations. Using this Johnny Can Encrypt module, together with a friend I created a project called Tumpa. It's a GUI application — right now this live GIF is on Tails, but it also works on Mac — and it allows you to create keys using the GUI and then just click, click, click and upload them to a smart card. Anyone watching this talk who has had to go through the steps — or I should say the pain — of creating keys and then uploading them to a smart card properly, and making sure that it works, knows what I am talking about. And for the people who never saw someone trying to upload keys to a smart card: you should just ask around what people think about it — how much time it takes, or how much of a pain point it is to upload keys to smart cards. So this particular GUI application will hopefully enable everyone to access their smart cards in a much easier way, and all the things I showed during the demo, you can do similar things using the UI — that is, setting names, URLs, what kind of cards, etc. Now, the name part: again, if you've been in this community long enough, you will hear a lot of things about encryption and OpenPGP, and there was this famous paper, "Why Johnny Can't Encrypt". The name of my project started from there — it began just as a fun project name, and it stayed. You can read — you should read — the paper, because it talks a lot about the troubles people run into when using OpenPGP, and I believe Sequoia and all the related projects will help to overcome this issue. With this, I think I am coming to the end of my talk. That's the link to the documentation, and I also write regular blog posts on kushaldas.in. You can check the documentation and the source code on GitHub, and maybe, if you have any queries, come and talk to us. So we have a couple of questions in the shared notes and a
couple of minutes to grab them, unless Mario wishes to step in and start the next session. They are all three from Niko, I think. Do you build Python wheels for Johnny Can Encrypt, so users do not need a Rust compiler to install it? I don't think that makes sense. Do key stores need to be directories — can you also store keys in other places or objects, and what are the advantages? Right now it's using the file system, so it's a directory, and inside of it we just store a SQLite file. And can you provide the Jupyter notebook link in the shared notes, so we can experiment with it? I will upload it now and then you can try it out — it's not uploaded yet, thanks for the idea — and I'll also add a link to my slides so that you can find it easily. Fantastic. Mario, are you stepping in now, or shall I? Yeah, I'm here, so let's move forward with the moderation. Roland, you've moderated for a very long time, right, so thank you very much for that. I'm going ahead with the slides for my presentation, so I'll add the notebook link there, it will be here. Cool — maybe you can also publish it here in the chat on the event page, so people find it wherever they are. Thank you very much. We have the next speaker lining up already — good to see you, hope to catch up soon again, thank you very much. And Roland, thank you to you too — it's been a fantastic time listening to you, to the questions and comments, and how you interacted with the speakers. Really enjoyed it, thank you very much. Security and privacy is my favourite part of the conference — apart from the drinking, which we'll do next year. Thanks Mario, good night all, see you tomorrow. Okay, and we are getting started now with the next session, so let's set up the speaker: it's Nico Kunzmann. Nico, are you there? I probably have to give you additional rights here in the session — let me — so you should be able now to switch on your microphone and your camera, I'll give you presenter rights. Did you also unlock? Yes, I unlocked Nico. So Nico, you should be
able to activate your webcam and activate your sound — there's a headphones symbol icon. Okay, I see he's coming on, and Nico will talk about calendar events in Python, an open source library. So Kushal just talked about Python, and Nico is a fan of Python apparently, too. About Nico Kunzmann: Nico studied IT systems engineering, he likes open source work, he founded a CoderDojo in Potsdam in Germany, and Nico is living in an eco village. Nico, are you able to talk to us? Please activate your microphone. Hello, can you hear me? Yes, perfect. So, wow, I didn't know that you are living in an eco village. You have been at the FOSSASIA Summit quite a few times, and you trained a lot of students in the community. So how is life in the eco village? Yeah, it's challenging and quite nice — can you understand me well when I'm standing here? Yes, we can still understand you well. Very good. I grow my own food, I'm closer to nature, and that's quite nice — I'll show a picture. Perfect. So yeah, bringing tech and ecology together, and a lot of other things. I think we'll learn a little bit about your work here in this session — it's half an hour — and I hope to see you sometime soon face to face to learn more about your work. Anyway, it's great to have you here, we're really looking forward to your session, so I hand over to you. Here we go — thank you, Nico, for joining. Thank you very much. Yeah, hello and welcome to my talk. So why give this talk: basically, I wrote an open source library which I thought only I would need, and then, a few years later, I looked and it has 120,000 downloads per month. So I would like to take you on a journey with me, to explore a bit how that could have happened. This is a library specifically for working in Python with online calendars. For the structure of the talk: whenever you have a question, there are the show notes and you can write your question in the notes — maybe I can address them while I'm talking, maybe I can do this at the end when we have time
for questions and answers. The slides are also online — you can find the link in the show notes — so you can click through the slides, go up and down, click on some links, and basically follow your own rhythm. I'll start by talking about the history of the library and then go into the different factors that I see as success factors for your open source project. And to name it: I'm just an open source developer who does hobby projects. I'm not part of any big company or organization, but still I could build something that people use — and so can you. Yeah, a little bit about myself: I've got a website, and I'm living in the eco village Tipi Valley — right now I live with my family in the United Kingdom, in Wales. When I'm coding, it looks like this: I sit in front of my burner and get my knees warm while I make some porridge and some tea, and basically I'm offline. Maybe once or twice a week I push my code and read some emails from issues, and that's basically it. So you can become a full developer of libraries and everything — you don't need to have much of a computer, or a stable internet connection, and that helps. Now, about the library. A few years back, in Brandenburg, we had just created a network of different hacker and maker spaces in this rural area of Germany, where not much is going on in the digital world, and we were faced with this question: how can we tell people when something happens somewhere else, so that they can go from one village to another, from one place to another? We wanted to have a joint calendar, and all these maker spaces already had calendars where they put in the few events they ran, and they shared them in an open format, the iCalendar format. WordPress at that time provided a solution, so that we could join different calendars and put up a website with one calendar — but I wanted to go a bit further and make that easier for everyone, without hosting, and also highly
configurable, so that other places can show calendars in different ways without needing to set up WordPress again — and I created the Open Web Calendar project. So you could have a static site: find an ICS calendar link online somewhere, and display that calendar on your page. There I already started facing the issue: having the online calendars, the ICS files, how do I display these events? The code for that became increasingly complex, so I decided to carve that little thing out and put it into its own library. And when I look at it now, it has like 120,000 downloads per month, and I don't even know who uses it — only that it's useful for people — and yeah, I want to explore with you how this could have happened. So I think one of the factors in the usefulness of the library is that I have the use case myself. When I started, I just had these calendar files: take one calendar, get all the events out of it, and put them up on the website. Problem solved for me, everything's fine. I carved this little question out — between a start and an end, which events take place — and put it into a library with an easy interface, just one function, and I put that out into the world. I added a license to it: I wanted to make it open source, and I chose the LGPL license. That is because, again, I'm just a hobby developer, and I do like it if I share my code and get something back that I can just click and merge — so I like the copyleft aspect of the GPL. I do not need any contributor license agreements, and people who use the library can be sure, if there's a merge request and they use it, that they also have the right to use it without needing to check anything else. And I didn't use the GPL itself because I just wanted to try out the LGPL — that's my reason behind that. With that, I had a use case — a very small one, a very rough one — and I had a license on it. It suddenly became a common use case, and now people can use it, and there are two functions
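The core query can be illustrated with a toy stand-in — plain dicts and datetimes, nothing from the actual library — to show that a between-style lookup is essentially interval overlap:

```python
from datetime import datetime

def between(events, start, end):
    """Toy between-style query: keep every event whose time span
    overlaps the half-open window [start, end)."""
    return [e for e in events if e["start"] < end and e["end"] > start]

events = [
    {"name": "standup", "start": datetime(2021, 3, 1, 9), "end": datetime(2021, 3, 1, 10)},
    {"name": "retro", "start": datetime(2021, 3, 5, 14), "end": datetime(2021, 3, 5, 15)},
]

# Events on the 1st of March only:
print([e["name"] for e in between(events, datetime(2021, 3, 1), datetime(2021, 3, 2))])
# → ['standup']
```

The real library answers the same question for recurring events, where the hard part is expanding repetition rules over the requested range rather than comparing fixed start and end times.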
so: at, for getting the events of a date — a month, a day maybe — and the between function for a time range in which you receive events. Another important thing is the interoperable base. You host your iCalendar, your online calendar, somewhere, and you download this file — and the icalendar module has been around for a long time already to read these files and use them in Python as an object structure. These functions basically take this object structure and give back this object structure, so there is nothing new that people need to learn: basically everything is known, plus one little function you need to read up on. I think this makes it quite easy for people to actually use the library. Another factor is documentation first. I do like documentation; a while ago I read this blog post about writing your README file first, and I want to stress that this is quite a nice thing, because when I write my README file first, I think about how I would feel comfortable using my library, what I would like it to be. On the other hand, if I write the code first and then the README, it's more like all these technical specialities, all these technical little things, come creeping through as parameters, or as heavy thoughts in my head about how I need to pass them on to the user — but actually I don't have to. And with that it also becomes clear to other people what my library should do and what it shouldn't. When the README file says this is what the library should do, people come back and tell you: hey, it doesn't do what it should here, here's the example. But if I just have a sparse README, no real documentation, then people will be left wondering: is there a mistake? Is it my mistake? Is it a mistake in the library? Should I maybe write a wrapper around the library so I can fix this use case? — something like that. But if the README just states boldly, yeah, this is what it can do — and it doesn't do it, it
actually is a good thing, because people come back and tell you. What I also did with the documentation was just leave a few answers on Stack Overflow — hey, you could use that library, by the way — and now people start to find it and use it. Another factor that I think contributed, at least for me, to this library is the feedback. Before, I had this Open Web Calendar with my library in it, and I would get feedback only from the users of the Open Web Calendar, which isn't much — maybe "this event was displayed a bit wrongly". But now this library is used in many, many other places, and I get weird calendar files that people think should work, and we can smooth the edges of this library — which also improves the main software that I wrote. So actually sharing these little pieces of my code, where other people then improve my main software — this is one of the learnings that I had here, and you get a much higher quality in the end. And then there are the tests, which I do like. I call it test second, because I do README first — other people call it test first — and I do write this library in a test-driven way, because I like green dots. No, it also has advantages that are not just visual: for example, the knowledge isn't lost over time. This library has been around for a few years now, and I do not want to sit in front of it and ask myself: this line of code, should I change it, or should I maybe leave it for later? I know that when the tests are green, I did a good job — and when the tests are green, it also allows me to say I did a good refactoring. There was a time when my initial way of dealing with the calendar standard was not enough to reflect the complexity of what is going on, so I needed to do a major refactoring of all the code, and the tests helped me tremendously — without them I would probably have broken half of what other people expected. So with the tests you get a
long-term maintainability also. And that's probably the place where you choose: if you have your software project, you may have this really small function that's useful for you and might be useful for other people. If you cut that out and put it into a library like this one, you may find that the input and the output are quite well known — it's like a unit from the unit tests. We have the calendar files, we have the times at which we want to know which events take place, and, the third thing, we have the events: we can look at another calendar application that works properly, and with that the whole function is 100% testable, and we can get a really, really high test coverage for our code. That enables something else which follows from it: because when we have these tests and people start to create pull requests after a year or two, we can actually deal with them fast — we can get the example, we can write a test, and so on. And when people look around for solutions, they look at a library, and if they see there has been no release and no commit in the last five years, they'll think — like I do at times — if I use that library and I run into a problem, I guess nobody will fix my problem and I'll be left alone; so they'll look around for other solutions. But if the last commit is from one month ago, or maybe half a year ago for quite stable ones, then you'd think: I can use that one. So having these tests in place also allows you to have faster feedback and more feedback. And I'm really excited about pytest, and I want to show you why. If you look at the way we deal with issues in this library — because it can be tested pretty well — we find an issue, we get the example file, we write the test, and then we implement the change. And the test may look really easy, like these three lines: basically it's a function that takes the list of the calendars that we have, gets all the events from the calendar named issue-1 at a certain date, and then it makes
sure, in line three, that there's only one event. And you may think that this function only runs once — no, it doesn't: pytest allows us to run this function several times in several different environments. For example, we want to test it on the calendar that we got in issue 1, but I also test it on the calendar with all the events in a different order, so that I make sure the result does not depend on the order of the events — because they can be ordered however they want, at least per the standard. And also a third time, because Python has an old time zone implementation called pytz, and this is quite old now and shouldn't be used anymore — you should migrate to the newer one, which is called zoneinfo and is available from Python 3.9 upwards. So pytest allows us to run all the tests on the old time zone implementation and on the new time zone implementation — and with this parametrization of tests, which pytest makes so easy, you can make sure the module works on the latest versions and on the older ones. So that's about the tests: I like green dots, and I can multiply the tests by 3 or 4 if I want to, so I get more tests. And one point that I would like to make is: I like to put the complexity last. We have this open specification for the iCalendar format, which is called RFC 5545, and it includes events and time zones; it includes journal entries, alarms, to-dos and free/busy times, and many, many more features. But if we can hide some of that complexity from people, they are quite grateful. Just looking at the events, here we have an example: if you open such a file, it begins with an event; then we have a start time in line 3; it is in a certain time zone, which you can see at this place in line 4; in line 5 we say this event repeats each Monday; then we say, but the second one doesn't happen, by the way; in line 6 we say we reschedule that one to the 26th of March; and then the event ends. So if you can hide this complexity away and
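Reconstructed from that description, such a file looks roughly as follows — the dates, names and time zone here are illustrative, not the speaker's actual example. The RRULE repeats the event each Monday, the EXDATE cancels the second occurrence, and a second VEVENT with a RECURRENCE-ID reschedules one occurrence to the 26th of March:

```
BEGIN:VCALENDAR
BEGIN:VEVENT
UID:weekly-meeting
DTSTART;TZID=Europe/Berlin:20210301T140000
RRULE:FREQ=WEEKLY;BYDAY=MO
EXDATE;TZID=Europe/Berlin:20210308T140000
END:VEVENT
BEGIN:VEVENT
UID:weekly-meeting
RECURRENCE-ID;TZID=Europe/Berlin:20210315T140000
DTSTART;TZID=Europe/Berlin:20210326T140000
END:VEVENT
END:VCALENDAR
```

A library that expands this correctly has to combine all three mechanisms before it can answer which events happen between two times — which is exactly the complexity being hidden behind the one query function.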
we just say: give us the parsed calendar, and the output is the list of events that take place when you want to know them — then that's quite a contribution, and we already reduce some complexity: we don't need to import much, much more, like alarms and so on. Another thing is, I guess, the relationship of the ecosystems. One is, for example, that we have icalendar as a base — that's already quite well known — and we just plug something on top and basically give the same structures back, so people don't need to learn so much; and that's easily achieved, I think, by shipping your small functions over to other people. The second is the icalevents library: it does about the same as the recurring-ical-events library, and can also download the files — again, either I didn't find it at the right time or I didn't like its documentation, and that's why I wrote my own stuff. There's the ics.py library: if you just want to write events out, but you don't want to read them, you can do that easily with that one. There's the Open Web Calendar, which is, again, where this library was created from — one example of an application that sits on top. And there's the x-wr-timezone library. That was created because major players like Google are so big that they say: I don't need to work with the standard so much, I'll add this little attribute called X-WR-TIMEZONE into the calendar — and that kind of completely changes how the events are to be evaluated, and then other people are left guessing what that could have meant, or not. And this library follows the same ideas as I just showed in my talk, the factors: it has a use case — to take such a non-standard calendar and produce a standard calendar, so that we can query the events at certain times; it has an open license, that is, it's open source; and we wrote the documentation first, and we wrote a lot of tests. The complexity, again, is hidden — or rather reduced — because now people know: I just need to deal with standard calendars. And it's again a carved-out little piece that could have been
in the recurring-ical-events library, but instead there's another piece here — and we already got the feedback, like one pull request that went to this library to fix one broken evaluation. The ecosystem relationships are the same. With this I would like to end my talk. I hope I have inspired you: if you're writing your own source code, also in your hobby time, you can think about this little function — maybe I can just ship it off as a separate library on some platform, and then people can use it independently of what I do. You may get a lot of engagement with it, you may get improvements, because it was used in places you couldn't have thought of — and that's basically my motivation. So even if you are just on a smartphone, you can do something that people want to use and want to develop with you. Thank you for your attention, and I'm happy to hear some questions. I also want to thank the FOSSASIA Summit and the presentation engine — this is a website — and the many different ways in which this was made possible, also just developing on a smartphone. I'll leave the factors slide open so we have an overview of what has happened in the talk, and I'm happy to hear some questions. Yeah, so we have a few questions. First: how important is marketing for your library? It would be really nice to get user feedback. So, for me — I do not use the internet so often, so I don't really do marketing. I just put it on Stack Overflow a few years ago, and basically it's used now. I think it's more that people Google, hey, how can I do this with Python, and then they go through the list of solutions and find there is one, and if it works for them, they use it. So I don't really do any marketing. Okay, then one question that I have here: the challenge with online calendars is that some things work well in one calendar app, some in others. For example, there is an issue with links and rich text in the event descriptions in calendars: they display well on
some calendars, and they display as HTML on others. It seems like some aspects of calendars aren't fully standardized — or none of the calendar apps follows the standard. Do you have some insights here? Yeah, I also encountered that: if you look at it when you subscribe to the FOSSASIA calendar from the event, you see that there is HTML code in there. One of the things, I think, is that there may be recommendations in the standard for how to parse it, or there might be a non-standard attribute that specifically targets HTML — and sometimes it's just easy to copy and paste the thing in without putting it through something that removes the HTML code. So that is still there. The standard is quite expressive in many ways, and as an end user you will probably just notice when the description is a bit weird, whereas it's, I think, mostly up to the apps to then display the lines at the right time — which is, I guess, the thing that would really be wrong or bad if it weren't right. Yeah. So, yeah, I also wonder — for example, especially here with the eventyay calendar and the descriptions — it seems to work, for example, on Google Calendar; it doesn't work on Thunderbird, but Thunderbird has an issue that people have been working on for over a year already, because Google Calendar on Thunderbird has the same issue. And then Outlook calendars — a lot of businesses still use Outlook. So, yeah, I wish it would be easier and they would all just follow one standard that everyone else can also stick to, right? Yes, yes. So then there's a question here from Mark. Mark is wondering where exactly in Germany that eco village is — though I thought I heard that it isn't — and does it hinder you working on tech? I could say it would hinder me to work on tech in the way that people in normal society work on tech, because when you live in an eco village, or in this place, your rhythm is more in tune with nature, and you slow down
much more. And now I'm here in Germany, in a house, and since I've become more sensitive through living in nature — more than I was before — I now recognize that I sometimes get anxiety, or just stress, out of nothing. So there's a different way of living, and with it a different way of working with tech. Another thing is what we used to say: we don't use the internet, but we use the human net — that is something we have forgotten how to use. It's like, when you calm down and ask the universe for something, then answers will arrive — and you don't need to use WhatsApp and do it now; you can kind of trust that a solution will arrive at a time that is convenient to everyone, and not just in the haste of needing to do it now. So — we were talking calendars, right? Because I'm quite detached from that at the moment; I don't even know which weekday it is. So yeah, it's not a hindrance, it's just a different way of doing it. And I think it is not considered in many places to have software run offline — but, for example, the apps on F-Droid are open source, and generally people publishing open source software also keep in mind that they don't need to keep people bound to them — like, put their claws into them — but are more freely giving; and with that they also allow people to work offline, without the dependency on, and oversight of, a big company behind it. So that's also something I value in open source work. Okay, nice to hear that — you always have some interesting things to share from your life, always some exciting things happening. What is left for me is to say thank you very much for this presentation; it's a useful application for many users. I also wonder how we can use it more in the FOSSASIA community, so I hope a lot of people are inspired and will also look at the project and check how they can potentially integrate it. So I think in that way you're
quite good at marketing. Yeah — so please send regards from myself, and I think from the whole team, to your family and to the people around you, and I wish you a very nice evening. Thank you — I wish you a nice rest of the conference, I'll be checking in and listening. See you. Okay, thank you. Okay, and we have the next talk coming up soon, and we are moving now from Python topics to other topics: the next talk will be about Kotlin sealed classes versus Java sealed classes — hashtag slideless — from Lothar Schultz. Yeah, please give us a couple of minutes to set Lothar up here, and then we will start with this session. So Lothar, I just unlocked you here, so you should now be able to switch on your cam, and you also need to activate your microphone — there's an option at the bottom of the screen for you, where you can click the headphone symbol; there's a pop-up then, and you need to choose the microphone option from the pop-up. You still have the headphones — at the bottom of the screen there are a few symbols, in blue; one of those symbols is the headphones, please click that one. Again — no, no, you need to click again, two times. Click two times — and now you have the option to activate your microphone. Yes, perfect, thank you. Easy. Well done. And Lothar, I don't know — will you be sharing any slides? Do you have a PDF or a web share? No slides, but an IDE, so I need to share my screen. Yes — you should already have that option. Still disabled? Yeah, for screen share you have the presenter rights already; there are also four buttons at the bottom, and the one on the right is the screen share. Yeah, so I need to change my preferences, so potentially I have to restart my browser — but let's see. Okay, yeah, no problem, we are here, we still have a couple of minutes, don't worry, take your time. Okay, so I need to restart my browser, I'll be back soon. Cool. We are here, and yeah, I just want to have some conversation with the audience in the meantime. So if you're here at the live event — I know a lot of people are
watching this on YouTube, or in China, or elsewhere — but if you're here in this session, I'd be very interested in where people are tuning in from. If you would like to share where you are right now, that would be interesting, so please put your city or region and country in the chat, to give us a bit of a feeling for where the people in this session are coming from. I see a lot of different names — yeah, we also just heard that Nico is now always between the UK and Germany, so we have a few people here in the European region, but I also see Japanese names, I see Arabic names, or maybe Indian, I don't know. So if you'd like to share where you are from, please add it here in the chat — you can also add it later, whenever you have time — just so we get a bit of a feeling for who we are reaching with this event. And I already see some people posting here right now: Saeed, for example, from China, and India, or here we have somebody from Can Tho in Vietnam — that's really nice. Yeah, please, guys, interact. We can't be together here, we can't be in the same space like at the usual events, but we are connected, and we can try to connect with each other. I see Lothar is back now, and I will unlock Lothar again. So Lothar, you need to do the same procedure that you just did — please do it again. Yeah, okay, I see you already coming up, and the same thing with the microphone again. Perfect. Yeah — because in the past we had some people who switched things on ("let me try out the video", and so on) in the middle of a session, so we decided, okay, let's lock people down, so they don't do something wrong by accident. Can you test the microphone, please? Can you hear me? Yes, perfect — crystal clear, fantastic. And we also see your slides — there's still this hide button; maybe hide the bar showing that you are sharing the slides, so we have them nice and can focus on your content. Yes, perfect. And yeah, we are all set then for this session, and I would like to
introduce you, Lothar. So Lothar Schultz is the head of engineering at Miro — "Myro" or "Miro", I think, in English. Lothar is a communicative, enthusiastic and dedicated manager with extensive experience in software development and software life cycle management across the mobility, retail, internet and mobile telecommunications industries. He's able to initiate projects and give new impulses, with proven leadership skills, involving and motivating teams to achieve their objectives; he has first-class analytical and problem-solving skills, is able to adapt very quickly to fast-moving international environments, copes with pressure and sets priorities while keeping track of the full picture, therefore knowing thoroughly what needs to be done in all fields. And then, Lothar, I found an interesting fact: you are fluent in English and love to code — well, I think otherwise you would not be here, and we are very pleased to read that. Yeah, excited to have you — thank you very much for joining, and the floor is yours. Thank you very much. Thanks, Mario, for those nice intro words. Today I want to talk to you about Java sealed classes and Kotlin sealed classes, and I promised this to be slide-less — what you see right now is not a slide, it's a Miro board. I work for Miro, we make boards, so that's of course why I use Miro boards. In case you want to follow along with that very small Miro board, please scan the QR code and then you can have that information. Mario gave a longer intro than I had planned to give for myself — so just in case you want to follow up with the social media stuff I do, please follow the links. I do all of that in my spare time, as Mario correctly said. I'm a head of engineering; the teams that I serve own the back-end representation of all Miro boards, which is a lot. Let's go into the topic real quick. I'll start with Java sealed classes and then Kotlin sealed classes; I want to show you some code, do some live coding with you, and at the end I'm happy to get your questions.
Java sealed classes: what was actually the point that made me propose this talk? With JDK 17, which is LTS, I came across them. I had used Kotlin sealed classes before, so I was interested in how it looks in Java, and once I dove deeper into it, I found out it had already existed as a preview with JDK 15. The main point of sealed classes in Java — by the way, sealed interfaces exist in Java too — is to define their permitted subtypes, and it can look like this very simplistic example: we use the keyword sealed, we define our class, and with the keyword permits we define the subtypes of that sealed class — roughly, sealed class Shape permits Circle, Square. In the definition of the subtypes, we have to extend the parent class, and we have to use modifiers like final, or others, to make that sound for the compiler — something like final class Circle extends Shape. Java comes with some constraints around that setup which are important to keep in mind: those subclasses must belong to the same module as the sealed class; they must explicitly extend the sealed class, with the extends keyword; and, if you use a class, you'll have to define modifiers — I showed you final before, and sealed or non-sealed are also options. Now let's look at what Kotlin sealed classes look like. In my mind, those are super-enums — this is not an official definition or description, but still, this is how I think of them. The reason is that with Kotlin sealed classes you define a lot of subclasses, often as data classes, and those data classes often carry state; and they are "super" because you are very, very free in how you define those subclasses — I hope I can show that to you later, too. A more official way of describing Kotlin sealed classes is that they allow developers to fix type hierarchies and also have a handle on who is allowed to create new subclasses and who isn't. All right, that was it for the introduction — now let's look at some code. This is IntelliJ, this is some Kotlin code, and this overall project does the following: it reads
a GitHub organization and prints out the list of its repositories. I hope my code works right now, so let's check it out: the build is successful, and this looks good too. In this specific organization I have two repositories, and they are read correctly. That's the organization, in case you want to find it on GitHub; I will also share the link with you later. Now, I have one function, and that function verifies the GitHub organization. For this I'm using a library, the GitHub API library for Java. It was initiated by Kohsuke Kawaguchi, who was also the creator of the Jenkins CI server. That library provides a GitHub object, and that object has a method that gets me the organization. Unfortunately, that method can throw an IOException, and in this implementation, if an IOException is thrown, it returns null. In Kotlin, as in many other languages, we don't like to return null, so with sealed classes I'll try to show you how this code can be improved. Before we can improve the function itself, I need to start a new sealed class that holds the data that I want. For that — is it possible to increase the font size a bit? Let me do that... oh wait, I thought it was possible, maybe in the View menu up there... let's assume I'm going to use this. Would that help? Yes, perfect, looks very good. I now need to switch back and forth from time to time, because I need to create some classes, but whenever possible I'll stay in the presentation mode. First I need to create a sealed class, and I'm going to call this class GithubOrganization. Now I have the sealed class, and I need something in it. I'm lazy — as any other developer, I'm lazy too — so I'm going to take another sealed class that I wrote before and use it as a source of inspiration. Okay, very good, that works. I use the same mechanics as in that sealed class: I want
to be able to react to the success case — the GitHub organization exists — or to the failure case, where the GitHub organization does not exist, for example because the provided string references an organization that doesn't exist. For the success case I'm going to define a data class, and in that data class I want to hold the handle to the actual GitHub organization from the library; that is a GHOrganization for me. The IDE is smart enough to give me a hint what type that could be, so that's the type I'm going to use, and I keep the name short. Sealed classes in Kotlin require you to invoke the parent's implicit constructor, and this is how I do that. Now I have a first data class for my sealed class, which is for the success case. I'm going to do the same for the failure case: there will be some error message, something that tells me whatever went wrong — at least I hope so. So I define an error, and that's just a String, because that's the error message, and as before I invoke the implicit constructor to make the compiler happy. Now I have this sealed class with two data classes, and you may already spot the "super" notion of the super enums: this type is completely different from that type. In Kotlin you can do almost anything you want and give those data classes any types you want, and that's what really makes it kind of super to me — that's why I like the "super" part in "super enums" so much. The compiler is not happy with this leftover, but we don't need that data class anyway. So let's look at how we can use this. I need to exit the presentation mode for a minute and go back to my original code. Here we have the function, and I want to use this data class, so — wait, where is my class — I'm going to put it on the side so we can see it, and now I go into the presentation mode again. Here's the function that I want to change. In the success case, I want the success data class to be invoked. Once I do this, I need as a
parameter this GHOrganization type, and an instance of it is provided to me by the getOrganization call. So now I have a usage of my data class, and now the compiler is unhappy, because the return value is different from what I defined above: see, I declared here that the return value is of type GHOrganization — it can be null or it can be an instance — and now the return value is my data class instead. With that, the compiler is happy for the success case but unhappy for the failure case. Luckily, I prepared the failure case: I can do something similar to the success case and use the failure data class as a container, and I'm going to use the exception's localizedMessage for the failure case, in case an exception happens. From my perspective, the code of the function now looks much better, because no null is returned anymore, and failure and success can be handled with the sealed class that I called GithubOrganization. You may notice that the signature changed, since the return value is different, so let's look at how this function is called and what we can do about that. Here we go, and here the compiler gives me yet another error. Earlier, the code was written so that the GitHub organization in line 43 could be null, so I had to deal with the fact that it could be null: I used a let statement to make sure the list of repositories is only generated and printed out if it is not null. Now this value is of a different type, so we need to deal with the situation differently, and Kotlin has a nice way of dealing with those data classes, which is pattern matching — what is used here is the when statement. When that value has a certain type, we can ask for the type, and we start with the GithubOrganization success case: in case of success we do something, and let's do this something — I'm going to reuse some of the code I have down here. Now I need to provide the right value, because the compiler is
not happy with it — that value doesn't exist anymore. Here, the right value is the org that I defined in the data class. Let me show you this one more time: see, this is the org of type GHOrganization, just as the library provides it, and that is exactly what listOrgRepos expects. Now we are almost ready to go, but I forgot one thing: whenever you use pattern matching with when, it needs to be exhaustive, and you can also see the hint that the IDE provides. There are two options. One option is to say else, and I encourage you not to do that, because it is a catch-all: in case you later define additional data classes on your sealed class, you may forget to handle those cases differently. So I'm going to be explicit and say: in case it's a failure, do something else — in this case, just print the error message that describes the failure on the command line. Again, I believe the code is much improved: instead of handling nulls with a let statement, we are very explicit, because we have a value that refers to a sealed class, that sealed class can be pattern matched, and you have the cases that you defined as data classes, where you pattern match and decide what the actual logic is that you want — this is logic number one and this is logic number two. Hopefully that whole code works; for that I need to exit the presentation mode, start the terminal one more time, and run the code one more time. Hopefully that works... the build is successful, which looks good, and the repositories are also as expected. Now let's see how that looks in Java. By the way, Mario, how much more time do I have — around 7 minutes?
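Before the Java live-coding that follows, the shape just built in Kotlin can be sketched compactly in Java 17 terms. This is not the speaker's demo code: all names (OrgResult, GithubOrganization, Success, Failure) are illustrative stand-ins, and a plain String takes the place of the library's GHOrganization handle, so the sketch is self-contained.

```java
// Java 17 analog of the Kotlin sealed-class demo (illustrative names only).
public class OrgResult {

    // The sealed interface fixes the type hierarchy; permits lists the only
    // allowed implementations, which here are two records.
    sealed interface GithubOrganization permits Success, Failure {}

    // A plain String stands in for the library's GHOrganization handle.
    record Success(String org) implements GithubOrganization {}
    record Failure(String error) implements GithubOrganization {}

    // Instead of returning null on failure, return one of the two cases.
    static GithubOrganization verify(String name) {
        if (name == null || name.isBlank()) {
            return new Failure("no organization name given");
        }
        return new Success(name);
    }

    // Handle each case explicitly rather than with a catch-all else, so a
    // later third case forces this dispatch to be revisited. The instanceof
    // pattern variables (s, f) test the type and bind the typed instance
    // in one step, with no explicit cast.
    static String report(GithubOrganization result) {
        if (result instanceof Success s) {
            return "found: " + s.org();
        }
        if (result instanceof Failure f) {
            return "error: " + f.error();
        }
        throw new IllegalStateException("unreachable: the hierarchy is sealed");
    }
}
```

In Kotlin the dispatch in report would be an exhaustive when over the sealed class; Java later adds the same compiler-checked exhaustiveness to switch pattern matching (standard in Java 21), but on plain Java 17 an instanceof chain like the one above is the non-preview option.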
Okay, good. I won't be able to finish that in 7 minutes, at least not what I planned, but anyway, let me try to give you a hint of how it works in Java. In Java I prepared a similar situation: this method lists repositories, and again there is an IOException that can be thrown, and if that happens, the method returns null. That's not the way I expect the code to be, and that's why I want to change it. I do that with a new package — the package is only there to keep the code a little bit organized. I actually wanted to show you sealed classes in Java, but now I'm going to take a shortcut and start with sealed interfaces; in the code on GitHub you will also find the examples for sealed classes. So that's an interface, and what I want here is a GithubRepository. Again, I'm lazy, so I'm going to use other code that I wrote before — and let me enter the presentation mode one more time. I'm going to use the sealed keyword on the interface, and then I define the subtypes with permits; in this case those are not subclasses but records. That defines what is allowed, and one of them is the success case — I'm pretty sure you know what the next one is. Now I need to define those record classes, and for that I need to exit the presentation mode. I'm sorry for this jumping back and forth, and I'm going to do yet another bit of cheating. I need a new Java class, which is a record, and it's the success — oops, sorry, this was wrong, it needs to go here — record, and another record, which is the failure. Now I need to define what the parent type is, and I use the implements keyword for that; that's the parent interface, and I do the same for the success case. Similar to the Kotlin case, I define an error String that holds the error for me, and I define a GHOrganization object that contains the GitHub organization. That looks promising, and now I need to go back to my method to change the
code. Let me enter the presentation mode one more time. Now I'm going to return a new GithubRepository success record in case listRepositories returns me the list that I want — and what's wrong with this one? Ah, it's a list, it's a list; let me fix that later. What would also have been an issue if I hadn't changed it: I'm going to change the return value to my own type, in this specific case my sealed interface. For the failure case I'm going to do what I did in the Kotlin example and return a record that gets the localizedMessage as the error. Now, listRepositories returns a list, but my GithubRepository record does not hold a list, only one organization, so that must be a list of GHRepository instead. That needs to be imported — come on, import, please — yes, import class, okay, good, that looks good — and this needs to be imported as well. And it still complains about this GHRepository... was I wrong with the definition that I did here? Yes, it was wrong, so the record must hold a list of GHRepository. Okay, good, now the function works, at least from a compile perspective. There is still one problem, because the return value is different. Mario, do you have three more minutes? — I think we can extend by a couple of minutes, sure. — Thank you. Now I need to react to the success and to the failure case, and what is expected so far is a GHRepository item from the actual library, but what I provide is my own sealed interface — or rather the specific records that implement that interface. In Java you would handle those cases with an if statement: you ask with the instanceof operator what type this result is, and that instance would be the GithubRepository success record. If it's a success, then let's do something with it, and in case it's not a success and the instance is of a different type, then handle the failure. Oops, sorry, I clicked on the wrong thing, sorry for that — so this is where I
wanted to click. I need to go through the failure case, and now let's double-check what the error is here — ah, the type is wrong, so something doesn't fit. In the success case I can use the retrieved object — wait, this is strange — this was wrongly named, I'm sorry, this is really confusing: it should have been called repository, that was my bad. Nevertheless, what happens here is that I can iterate over this and execute the forEach that I wanted. And in the failure case — the GithubRepository failure — there is this error message, and similar to the Kotlin case, I just print it out in case an error happens. Last but not least, I want to tell you one thing about this Java case. What is really nice about the instanceof operator is that you don't only check for the type — in this specific case for the record type — you can also give it a label, and that label is a handle to the instance, so I can use that instance and call the methods that are defined on it. To wrap it up: if you compare Java and Kotlin and you come across sealed classes, from my perspective there is no major difference anymore — with JDK 17, Java really caught up. Especially this sealed-interfaces-plus-records approach that Java now offers is, from my perspective, on par with the Kotlin approach. Personally, I still like the Kotlin approach a little bit better, because the sealed class plus its data classes all fit nicely into one file, whereas in Java — at least in my version — you will have the interfaces and records in different files. But that's such a small difference that you can't really say one approach is better than the other. If you need to decide between Kotlin and Java for a given project, other things, like how good the developers you work with are in one or the other language, are way more important. The sealed class approach is, at least from a conceptual point of view, almost the same, and I hope that code helps you to remember that. And in case you
can't remember, please scan the QR code one more time; you will find down there the GitHub repository with the code, including the sealed classes in Java that I couldn't live-code with you. And now I'm happy to take questions, if there is time for questions. — Okay, thank you very much. Yes, a question: one thing about the repository — I think the QR code shows the correct link, but in the link below, the user account is probably missing; I tried that link. — This is an organization, I created an extra organization, but let me check afterwards whether that organization is publicly available; maybe I made a mistake. — Yes, I think that's the issue; just make it public, then the link probably works. — Perfect. And there's a question from Miko: in OOP one would not put the dispatch into a case statement but into the object, for example using the null object pattern. How would you evaluate these two approaches, with pros and cons? For your personal coding style and the code bases you find useful working with, which one is more practical or convenient? — Sorry, the line broke. — The question on the left-hand side is also in the shared notes, so you can read it there, and Miko has another question in the chat, but maybe we can first get your take on this one. — Miko is probably right; I can't verify that right now. Whenever I do Kotlin, my personal style of Kotlin is functional programming, in the sense of pure programming: specific return values, no side effects, and that's why I use this when — in my mind, this fits better into that functional style of programming. That is a personal pro; I can't really come up with objective pros and cons right now, but I hope my personal points help you. — Okay, perfect. Maybe you can also check out the chat afterwards, there are some other comments. We are now at the end of this session. I think we should definitely go deeper into this topic in the future — Kotlin was promised as the future, and now we find that Java is catching
up. Very interesting to learn about it; I'm looking forward to seeing the next steps with these programming languages. Luta, thank you very much for joining us and giving us those insights — live, too: other people always pre-record, so that if there's an issue they can record again, but you do it live, chapeau for that. Thank you very much for joining, have a good evening, and I hope we stay in touch. — Thank you very much, thanks for having me. — Okay, and we now have the next session coming up in a moment, which is about Aether, the first open source 5G/LTE connected edge cloud platform, with Aris. I'm giving you the rights to join us in this session; I'll make you a presenter and unlock you, so Aris, you should be able to activate your microphone now by clicking the headphones symbol and then clicking it again — and I see you're already putting on the webcam, awesome. Perfect, okay — does the microphone work for you? — Yeah, yeah. — Awesome, Aris, nice to have you with us. Aris, may I ask where you are from, and how do we pronounce your name correctly? Because it feels a bit like a Spanish influence, a bit like an Asian influence — a lot of influences — and I think in English we pronounce it one way. How do you say your full name? — My name is Aris, and then Ristianto. — Cahyadi Ristianto? — Cahyadi Ristianto, okay. And where do you come from, where is your name from? — I'm from Indonesia, but currently I'm in Singapore. — Okay, great. So your name is Indonesian, or are there also some European influences? There are so many influences in Indonesia, right? — Yeah, yeah. — Okay. — I think I have some problem here, maybe I need to restart the browser. — Ah, okay, no problem, you already know how to get in; we'll wait for you. — Thank you very much, I'll just come back in a moment. — Okay, I will restart my browser. — Sure. We have these different screen resolutions, we have the browser permissions... actually, right now — a lot of people who are into security know about this — there have been a lot of
browser updates recently, and these browser updates change security permissions. Some people said to me, oh, you shouldn't do online events this month of all months, because at the moment a lot of updates are happening, people need to grant additional permissions, and so on. If anyone listening knows how we can avoid such issues, please let us know — we're always open to it. These are the challenges of this time; however, we made it all happen nevertheless, and I see Aris is already coming back in. We are getting to the end of the event, but there are still a few sessions, so guys, stay tuned, we have some other interesting sessions coming up. We had different topics today — security with Python and so on — and now we are coming back to the topic of Cloud and DevOps; the upcoming sessions will focus on this. I'll unlock Aris again and make you presenter, Aris, so you should be able to activate yourself again. And I see quite some exchange here in the chat — this is the advantage: if you are watching the stream, all is good, but if you want to get in touch with the speakers, I recommend you come to the event website, sign up, get a ticket, and join the session here live. The advantage is that you always have the possibility to interact directly with the speakers in the chat or to post questions in the shared notes. But anyway, we love you guys, we are happy if you join us, and I see more and more people dropping in, especially for your talk — very nice, you are perfectly prepared. Let's check your microphone again before we start the session; you still need to unmute. How about now?
Perfect, so cool. So let's start. Welcome, everyone, to the session "Aether: the first open source 5G/LTE connected edge cloud platform". The session is given by Aris — I don't think I pronounced it perfectly, so Aris, if you can tell us your name again in a moment, that would be nice. There are some interesting facts about you: Aris is interested in future internet architecture and open source networking, including SDN (software-defined networking), network function virtualization, and cloud computing. Aris was selected as an ambassador for the Open Networking Foundation in 2018 and joined the ambassador steering team in 2019. He is currently working as a research fellow, or senior researcher, at the National University of Singapore, also known as NUS, and as assistant program director in technology at the National Cybersecurity R&D Laboratory (NCL) Singapore, building future internet, cloud computing, and cybersecurity testbeds. So, an international guest here at the session — Aris, great having you here, and we are really excited to hear more from you. Here we go, thank you very much for joining. — Okay, thank you for the opportunity. So I repeat: my name is Aris Cahyadi Ristianto, you can call me Aris. Officially I'm a research fellow at NUS, the National University of Singapore, but today I represent ONF, the Open Networking Foundation, to talk about the Aether project, which is the first open source 5G/LTE connected edge cloud platform. Before we go into the details, let me give you some highlights, or maybe an introduction, to what hybrid cloud services are. This was my presentation at one of the earlier events, and here I want to emphasize that we have a slight change in our internet architecture. As you can see here, the core of the internet is basically owned by big companies like Google, Facebook, Amazon, and other OTTs, and only a very small part of the network is actually run by the internet service providers.
Okay. This is already a well-known and very big trend, but the good thing is that it gives you different ways to provide networking: it can be global network interconnect, virtual private cloud, hybrid cloud networking, and so on. If we look more closely at the cloud side, it's very obvious that you have the public clouds, like Amazon, Google Cloud, or Azure, and then you have the telco cloud, which is usually owned by the providers, and now everybody is shifting a little bit toward the edge cloud, closer to the users, to give a very good user experience. The issue is that some people say this is complicated, that it adds complexity, but other people say it gives you another opportunity, where we can deploy what we call hybrid cloud services. Now, one example of a hybrid cloud service is the 5G connected edge cloud, and it relates to the current transformation in the digital era — they call it enabling digital transformation with 5G technology. So let me give you a little bit of background on why 5G enables enterprise transformation: enterprises can use 5G connections to tie together their IT, sensors, networking, and things like AI/ML and AR/VR to support what we call Industry 4.0. Everybody is trying to compete in this transformation and to build the infrastructure based on 5G. You can see some numbers here showing that this transformation actually requires connectivity, software, hardware, and services together, and the problem is how we can provide a platform that supports these transformations. Of course, we can always come back to history, and history may repeat itself. In the old days, we all know Symbian OS, and at that time I think it was very clear that
it could provide very good solutions for different types of services and different types of innovation, but the issue with that solution was that it was purpose-built: it only worked on some specific hardware and could not be shared with other vendors — what we call closed and proprietary. After that, Linux and Android came, and that began the transformation of the operating system, and it became the transformation of the mobile operating system. Why did these become such powerful solutions for that transformation? Because both of them are general-purpose, and of course open source, so you can always use them, enhance them, and give back to the community. Imagine if, in the 5G connected edge platform, we could have a similar solution to Android: one that can run on many different kinds of hardware — CPUs, GPUs, different processors, and all those things — plus some additional hardware libraries; a platform similar to Android for phones, on which you can also add specific purpose-built platforms for IoT, AI, or whatever; and on top of this you can build many different applications that are not really dependent on the hardware itself. And then, most importantly, this platform can be connected to the cloud. Imagine your Android phone connected to the cloud, doing updates seamlessly, storing some data automatically, and so on — sometimes you can even push configurations to your phone — so in the same way you might be able to push configurations from the central public cloud into the enterprise edge cloud. This is the main idea that Aether wants to achieve, and that's why ONF, as one of
the foundations for open source networking, tried to build the platform called Aether, specifically as a 5G connected edge cloud platform. The main reason, as mentioned, is to use this platform to enable the 5G-driven Industry 4.0 initiative. The main points are these: it can provide connectivity services — 5G, LTE, or any other type of connectivity — and it should be as robust as what the service providers offer. This connectivity is connected to the cloud, so you can do things there that you cannot do at the edge; you can offload some services, like AI or machine learning. And last but not least, it can be used for all kinds of applications — whether an application requires high bandwidth, low bandwidth, low latency, and so on, the platform should be able to manage it by doing end-to-end slicing. All of this is managed by a cloud-managed open source platform. So what is actually unique about this platform that is being offered to the community? Basically, there are many innovations that were already done previously or are still ongoing, but in this specific platform we try to combine them all together: 5G, 4G, edge cloud, a cloud-native edge platform, end-to-end slicing, SDN network programming and verification, and so on. And last but not least — this is the most important part — the platform can run on open source software and on open hardware that you can select yourself. The benefits are in line with the main purposes Aether was built for: economics, of course, because we want a platform that is efficient in terms of the cost to build it; and security, of course — you need to have some
hardware that can be programmed specifically for your needs, so you can remove the functions that you don't require, and similarly you can control the traffic of your network. And then, of course, hybrid cloud, as I mentioned: you are able to deploy some of the components in the edge cloud and some in the central cloud. These are some of the benefits that Aether wants to deliver. In another view, you can see the Aether platform in the middle, providing the three components we discussed before and running on top of white boxes — these can be CPUs, GPUs, TPUs, or any other type of easily programmable hardware. On top, you may also have third-party additional platforms, special-purpose ones for AI or IoT or anything else, and the applications should be easy for users to develop, similar to how you develop applications on Android: you almost don't have to care about the hardware, because as long as an application runs on the platform, it will run on any of the hardware. And it can be used in many different industries — in the office for surveillance, with sensors in the factory, for IoT, and so on. So let's talk in a little more detail about the components of Aether. Because we are talking about a hybrid cloud platform, there are two different parts to Aether. One is what we call the Aether Connected Edge, or ACE: all the physical connections that connect to the users are here, including programmable switches controlled by what we call ONOS, one of the SDN controllers; and on the radio side we have what we call the RIC, the RAN Intelligent Controller, which specifically handles the radio communication between all the users and devices. And then, on top of these two different types of
controllers, you need applications — different applications for different purposes — and of course some additional applications can be built on top of these platform components. Now imagine you have many different sites, many ACE components: you need a centralized management system, and that's why we control these distributed ACEs with two functions — one is the Aether Connectivity Control and the other is the Aether Management Platform. Basically, what the AMP does is prepare the infrastructure based on SDN, software-defined networking, provide software-defined radio access, and run closed-loop control: for example, if there is an attack on the system, the system will be able to react, and so on. Let's look at the management platform in more detail. The AMP is placed in a cloud environment — it can be a public cloud like Google, Amazon, or Azure — so it is fully software, and basically everything we use there is open source software, each piece for a different purpose. We use it for runtime operation control, meaning all the controllers are used here; for logging and monitoring — I think everybody knows Prometheus, Grafana, and the ELK stack; and for lifecycle management, meaning how you push any change to the platform, a software patch or even a new version. For that you have different types of tools like Jenkins, Terraform, or Docker containers. Because it already lives in the cloud, any cloud-based solution can be used to deploy this Aether management platform. Of course, a platform remains just a platform if we never deploy it in the real world, so in the simple case we need a central public cloud for the management, and then we have many sites here, and then of course some
controllers to control the network, and some applications need to be in place. This is the current situation of the Aether network deployed worldwide and operated by ONF — imagine ONF as one of the operators of a 5G/4G network across the globe. There are collaborators who voluntarily deploy Aether and connect to the ONF platform, the management platform is hosted in a public cloud like Google, and more sites are coming in to become connected edges. It is all maintained exactly like cloud-native applications: you have CI/CD pipelines, a lifecycle, and so on. Here is an example of an application that can be deployed on the Aether platform: it does AI processing in the centralized cloud using sensors at the connected edge, in almost real time. In terms of management, because all the components are built in software, we should of course be able to monitor the performance of all components — connectivity, load, utilization, and so on — using web-based open source tools; here, for example, we use Grafana, and we also use Rancher to deploy the Kubernetes cluster. So, what's next? Here are some examples of how organizations are responding to this initiative. The US Department of Defense funded a project to build what they call closed-loop control of the network. There is a new company, initiated by ONF, that focuses on creating solutions and applications of 5G for enterprises. And recently there was an announcement that Aether is released as open source software under the permissive Apache 2.0 open source license — some of the members already have the capability to build the platform, and they want to share it with the bigger community so that others can try the same thing. So we
are not alone: many, you can say, hardware vendors and operators try to help in these efforts, either building the testbed or even building the platform itself. And then, most importantly, we invite many different types of communities to join the efforts: whether you are an enterprise, an ecosystem partner, a university campus, or maybe just an open source developer, to join and contribute to this open source. So if you want to test, don't worry: ONF gives an example of a customized box that can run the full set of Aether, okay. And then, to learn how Aether works, how Aether is being developed, how Aether is being operated and so on, you may try the GitHub code and see whether you have any interest in what could be, as they say, a contribution to the project. Thank you, I will be happy to take some questions.

Thank you very much for this talk, and I see we have questions, for example from Nico. Nico, would you like to ask your question directly, personally? Because I know your microphone is working, go ahead.

I personally have heard about edge clouds for the first time; it sounds very interesting. Just from the perspective of somebody wanting to understand this virtual thing that happens: right now I imagine it like the server code is split up and running in many, many different components, and kind of, when I walk through the town from one mast to another, the server code kind of moves with me from one to another; is that the idea, can I understand it like this?

Yes, basically imagine you are an enterprise, a company, right? You have many branches, and in some of your branches you have a factory, and you want to build a network for each of your branches, right? And all the branches will be connected; so that's why you deploy the edge component in each branch, and then you maintain it centrally from your headquarters. So, is it clear?
Yeah, that makes sense, thank you very much. I have another question too, if I may. My question is, again, the first time I hear about this: who has an interest in developing this, and where would I encounter it? Like, maybe as a person who just has a smartphone on the street, how does it benefit me, or which companies or which services will most likely be going towards edge cloud?

Okay, I think because of the penetration of the internet everywhere, the access part has become very small and everybody can go in and deploy a network; so now all the providers, or even some enterprises, are battling to deploy this kind of access network. The idea is to give a platform that everybody can deploy by themselves, whether it is a service provider or an enterprise: you have the hardware, the minimal hardware, you put the platform onto it, and you can deploy it. So the target can basically be service providers, or private enterprises that want to build a high-performance, you can say, connected cloud network.

Yeah, and I see some more questions, and maybe you would like to discuss more details here in the chat for a few more minutes; so definitely nice and interesting, and we have something new for some people, and new things are always interesting. Aris, thank you very much for sharing insights; you were already here last year and we see you here again, so it's very nice to keep in touch, and yeah, we continue to stay in touch and hope to see you around. Okay, thank you, good evening, bye bye, thank you, bye, thank you. And yeah, next is a talk from Ilja Wierbicki; please give us a couple of minutes to set everything up and share the details here. The session will be on building scalable WordPress sites on AWS, so we'll be here in a moment and just set it up quickly. So Ilja, I see you are here already, and I will unlock you and give you the presenter rights; so you need to click the headphones symbol twice in
order to activate your microphone, and then we can check how things are working and start the session in a moment. Hi Mario. Okay, perfect, we can hear you; how about your webcam? Okay, good, try. Hello, perfect, okay, very nice. Hello. And the slides: do you have a PDF, will you share a PDF? Yes, a PDF. Please go to the plus icon on the left-hand side, and there's an option to share media; please upload the PDF there. Okay, loading. Okay, perfect.

Yeah, so Ilja, you're in Singapore, or where are you right now? In Singapore, yes; it was a short move here, just travelling around Asia, and I've been here four-plus years, yes, exactly, right, so long. And so, how are things for you, how did you go through the pandemic, did things work out? Let's see; actually in Singapore it was quite smooth, you know, it was a safe environment and not that many sick people, and yeah, it was okay, and since I was working from home anyway, I haven't seen much change. Yeah, okay, excellent, so you made it. Okay, yes. And you are going to talk about building scalable WordPress sites; is that something you are doing for customers, or is it something... Yes, exactly; so you know, I have my small consultancy in the Czech Republic and it's still running, we still have some clients, and one of the services we provide right now is cloud migration and cloud consultancy; we work mostly in e-commerce fields, and multiple clients asked for WordPress, so yeah, I can probably start the presentation.

Let me introduce you quickly, because a lot of people in the FOSSASIA community know you, but not everyone who is watching here today, and I would like everyone to know a little bit about you before we dive in too much. So, for everyone: Ilja is the co-founder of Webstoting, you might have read it already on the slide, and Ilja is a solution architect in the finance and e-commerce sectors with over 15 years of experience developing complex software in teams large and small, and before starting his own
business he worked in Europe, Southeast Asia and North America for several multinational companies, and now he has Webstoting; Webstoting is an agency helping companies to create a successful online business. And outside of work he also has a life, of course: he's a husband, he's a father, he enjoys sports and games and learning everything about everything. So we have a curious mind who will share insights about a curious topic, and something that a lot of people need. Ilja, thank you very much, I leave you to it.

Thank you, thank you. Hello everyone, and welcome to my presentation; today we'll talk about WordPress and how we can scale it up. Believe me or not, WordPress is still probably the number one platform if you want to build a small or medium-size business website on the internet, and at the end of last year it powered up to 40% of websites. That's a lot; it's more than every third website you visit that probably uses WordPress. And WordPress itself is a framework, because if you install it out of the box you have nothing but a blog; the power of WordPress comes from themes and plugins. It's an open source ecosystem, and the community builds thousands and thousands of plugins and themes; of course they vary in quality, some of them are quite good and some of them are pretty bad rubbish that can even break your website, that's sad, but we have 50-plus thousand plugins, so there's a lot of choice. It's also available in multiple languages, and it's open source, one of the oldest open source products in the web world, I believe. But you know, nothing comes for free. With WordPress it's easy to get your website up and running: usually, what I've seen from my customers, they just go to ThemeForest or any other marketplace, buy a theme they like, install a one-click WordPress onto their website, change some text and upload images, and they're done; you can tick the boxes and get the website up and running in a couple of hours if you
build it from existing components. But once your business starts growing and maturing, once traffic comes to your website, you may see some particular challenges; here on the screen you can see the most common problems. The first is an inconsistent backend: for example, if you build a normal application with React on the frontend and some Django or Python on the backend, usually you come up with an architecture, you design a system, and you connect the components to work smoothly. In WordPress it's a different story: in WordPress you have a framework, and it's very basic; it's almost like a user-friendly interface over a database, plus a way to upload plugins, and that's it. Then you start installing plugins; usually the major plugins are frameworks themselves, so their developers come with their own architectures and somehow build them on top of WordPress. And what comes next: other developers build plugins for those plugins, or for themes, following their architectures, so from the code point of view your website might look like a big giant mess; but on the other side, it works. Second, it needs customization: when you have 50-plus thousand plugins you definitely need to choose the correct ones, and you need to install a theme, install some anti-spam filters, install some other plugins; it's a lot of work and you need to know the ecosystem. Third, security issues; it's a good-and-bad thing. WordPress is well known for being hacked quite often; that's the bad news. The good news is that most of the security issues do not come from the platform itself: WordPress as of today is quite a secure system, because it was reviewed by hundreds of hackers and security research companies. The problems come from plugins: when you install a random plugin, you never know if it has malware, if it has any security vulnerabilities, if it follows any security best practices and designs; it's a black box. That's why I always suggest doing a code review: never try to
install a random plugin from the internet; it may have malware inside. Fourth, updates are difficult to keep up with; a WordPress update is a major problem. Unfortunately, since plugins don't share a common architecture (there is a plugin architecture, but it changes from version to version of the platform), things usually stay smooth until the moment WordPress decides to change something under the hood and change the way things work. Because many plugins and themes hook into the WordPress code itself and use hacks to work around the platform, one day you update your platform to a newer version and then magically some plugin stops working; unfortunately this happens quite often. That's why I always suggest using the well-known, major plugins; usually they are paid, because you pay for support, but at least you have some guarantee that a WordPress upgrade will probably not break the ecosystem. The fifth problem is page speed: WordPress is slow, especially if you deploy it to some random cheap hosting; it can be quite slow because it's a database-read-heavy system. It also comes with a learning curve: if you want to use WordPress to publish a few posts, it's easy, but if you want to build a solution on top of it, like an e-commerce system, a web store, a forum, or even an intranet (I've seen such projects in the past), then you need to learn it. There is a manual from WordPress called the Codex, but from my own experience it's not enough; you will find pieces of information here and there, some quite good and deep, some high-level, so in case you want to really learn WordPress, I highly recommend going and reading the source code; it's a lot of reading, honestly. It has no built-in backups: unfortunately, it's just a PHP website with a MySQL database, nothing shiny out of the box, and you have to take care of your backups yourself. And there are frequent error messages: in case you don't
configure your PHP error output, you might end up in a situation where random PHP errors come up and your users see them. It especially happens when a plugin was built against an older PHP version, so the plugin doesn't have good clean code, and now we deploy it to PHP 8, which is a much stricter language, and once you run the website you see random errors happening. Okay, so let's see how the cloud can help us here, because we're talking about cloud, we're talking about deployments. Unfortunately, the cloud (AWS, for example) can't build a consistent backend for you; that's your job. And it will not help you with customization. But it may help you reduce the risk of security issues, if you go with some application firewall options; it may help you with updates, with the way you can test updates and provision them to production systems; it definitely can help you with page speed; and it also helps you with backups quite a lot. What AWS can't help you with is your WordPress code and your PHP coding style: if you have a problem in your code, you'll have to fix it. So let me show you a journey: this is the journey of one of the customers I had in the past, and it shows how a small company transitions into a medium-to-larger organization, growing its WordPress along the way; this is an evolution you may see yourself: for example, if you have a small store and then suddenly you're booming, you may take the same route. Okay, so let's say you want to open a small shop that needs a landing page and a contact-us form where customers can go and order something, like home delivery, for example. Usually, if you want to deploy it on AWS, you come up with a really, really simple solution: you need just one EC2 instance, you install everything on that EC2 instance, put it in your public subnet and make it publicly available to whoever wants to see it. It's a great approach, and for a
small website it works perfectly; it's a bit more troublesome than hosting on, I don't know, shared hosting, but on the other side you get better performance. From my experience, I would suggest using Nginx and PHP 8 if you deploy the website today; and among the AWS instance types you can use, from my experience, memory-optimized instances work much better when you deploy a LAMP stack, where you have your database server, your Nginx server and your PHP on the same machine. I suggest memory-optimized instances because the database likes memory; more memory helps the database. Now, even with this approach you can do some optimization on the backups, because you can use EBS snapshots and the lifecycle manager in AWS: EBS is like a little drive attached to your instance containing all the data, and you can have it backed up, say, every 12 hours or even every hour; EBS will create a snapshot copy and store it to S3, and next time, if your website, say, was hacked or something happened, you can just go and restore the data from the backup; it's two minutes of work and it saves you a lot of time. In this architecture your main bottleneck is the database, in most cases, when you're running WordPress; and if you're not happy with this architecture, yes, you can go for a larger instance with more memory for your database to consume, and you'll get better database performance, but the database remains the problem. One of the workarounds is that there are plugins for WordPress that do static page pre-rendering: when you request a URL the first time, it goes to the database and generates HTML for you, then stores this HTML on disk; when subsequent requests come, there is no call to the database, and the data comes from the static, pre-rendered HTML cache. It's quite useful, and for such a simple setup it reduces the load quite a lot.
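The pre-rendering idea those plugins implement can be sketched roughly like this. This is an illustrative Python stand-in, not the actual plugin code; `render_from_database` is a hypothetical placeholder for the expensive WordPress PHP-plus-MySQL render path:

```python
import hashlib
import os
import tempfile

CACHE_DIR = tempfile.mkdtemp()  # stand-in for the plugin's on-disk cache folder

def render_from_database(url):
    # Placeholder for the expensive render path (database queries + templating).
    return f"<html><body>Rendered page for {url}</body></html>"

def get_page(url):
    """Serve pre-rendered HTML from disk; render and store only on the first hit."""
    key = hashlib.sha256(url.encode()).hexdigest()
    path = os.path.join(CACHE_DIR, key + ".html")
    if os.path.exists(path):            # cache hit: no database call at all
        with open(path) as f:
            return f.read(), "hit"
    html = render_from_database(url)    # cache miss: render once, keep the HTML
    with open(path, "w") as f:
        f.write(html)
    return html, "miss"
```

Only the first request per URL touches the database; every later request is a plain file read, which is why the speaker sees such a large load reduction on a single-box setup.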
Still, the database is the problem, and when our machine crashes we lose our database; that's life. So the next iteration of this design will be decoupling: we move the database out of the WordPress instance and use AWS RDS. AWS RDS is a managed database service where you offload all DB tasks to Amazon; you just say, hey Amazon, I want my DB, and here you go. If you want to go down this road, which means your business has grown and you generate some revenue (because it will definitely be more expensive than the previous solution), I usually recommend going with an RDS Multi-AZ installation, because in that case you're not deploying just one database server, you're deploying two: one primary and one standby, with only one active. In case your active database server goes down, and in Amazon it happens, Amazon doesn't guarantee that everything runs at a 100% SLA, then the secondary instance becomes primary, and your database comes back online in a couple of minutes or maybe even seconds. So yeah, if you can, I would suggest going this route. Again, you will use your EBS snapshots, and you can also use RDS backups, because RDS comes with its own backup for the database system; so now you have more granular backups: you can do, I don't know, a once-a-day or once-a-week backup for your virtual machine, because your pages are probably not changing that often, but more frequent backups for your database. But again, yes, here our database is still a bottleneck, because no matter what, you use the database to render HTML. Also, recently, well, not that recently, it's probably about a year old now, AWS came out with Graviton 2 instances; those are ARM processors, and I was running some tests of a LAMP stack, Nginx with PHP 8, on Graviton, and I saw a performance boost of around 20-30%; so if you're running such workloads, definitely give ARM a try and see if you get more performance for the same or even less money. The third iteration of the design, and this
time we're trying to solve the problem of the database, because since it's a bottleneck, we have to take care of it. Okay, so first of all, if you're living in the Amazon infrastructure, I suggest you migrate from MariaDB on RDS to Amazon Aurora. Amazon Aurora is a proprietary database; yes, unfortunately they don't share the source code, that's Amazon, and it supports Postgres and MySQL interfaces, so you can take your MariaDB database, restore it to Aurora, and it will work as before: Aurora will look like MariaDB to your PHP code. Aurora brings you somewhat better performance, it's more optimized for the AWS infrastructure, and it comes with the concept of read replicas; basically you can use read replicas even with other database systems, but with Aurora you get them more easily. A read replica is a read-only version of your database: normally there is only one master node where you can write your data, so all your insert and update operations go to that particular node; but when you need to read content, and that's what WordPress is doing, WordPress is a read-heavy system, not a write-heavy system, instead of going to your master node all the time, you can get the same data from a read replica. That reduces a lot of your database load and boosts your website performance. WordPress doesn't support this out of the box, there's no magic checkbox you can just tick, but there's a plugin called HyperDB, at least that's the one that I use, and HyperDB lets you separate your SQL operations: all your write operations go to your master node at a particular IP address, and all read operations go to your read replicas; you just configure the IP addresses and you're done. I haven't seen any issues in production WordPress with that plugin. So that's the database, but we also have to introduce caching.
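The read/write split that HyperDB performs can be sketched as a toy router. This is illustrative only: the real HyperDB is a PHP drop-in, not Python, and the node names here are made up:

```python
import itertools

class QueryRouter:
    """Toy read/write splitter in the spirit of HyperDB:
    write statements go to the master node, reads round-robin over replicas."""

    WRITE_VERBS = ("insert", "update", "delete", "replace", "create", "alter")

    def __init__(self, master, replicas):
        self.master = master
        self._replicas = itertools.cycle(replicas)  # simple round-robin rotation

    def route(self, sql):
        # Inspect the leading SQL verb to classify the statement.
        verb = sql.lstrip().split(None, 1)[0].lower()
        if verb in self.WRITE_VERBS:
            return self.master
        return next(self._replicas)
```

For example, `QueryRouter("master", ["replica1", "replica2"])` sends every `INSERT`/`UPDATE` to `"master"` and alternates `SELECT`s across the two replicas, which is exactly the load distribution a read-heavy system like WordPress benefits from.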
Because even if you reduce your database load, you still go to the database all the time, and that means sending data between your web server and your database server. One of the caching solutions on AWS is called Amazon ElastiCache; it's nothing more than managed Memcached or Redis that you can use with your WordPress, and WordPress, I think, supports both. Okay, WordPress doesn't support this caching out of the box, you need to install plugins, but there are Redis and Memcached plugins. In my experience you get better performance with Memcached, like a Memcached cluster, probably because Memcached is a really simple cache, nothing fancy: it's just a piece of memory where you can store data and read it back; but precisely because it's simple, it was a bit faster than Redis, which has some advanced operations that you don't need in our read-heavy content management system. And finally, you can start using AWS Backup: it's a managed service that helps you manage your backup schedules. Before, you had to go and configure your EBS volumes to take snapshots, and then go to RDS and configure how you want your database backed up; if you live in the AWS infrastructure, you can use this one service that wraps up all those backup operations across multiple services, and it doesn't cost that much, so I would definitely suggest taking a look to simplify your life just a little bit. So we more or less solved the problem of our data store, but now the bottleneck is our main server, our Nginx and PHP server, because we still have only one machine; for a small or medium website that's more than enough, but in case you run some crazy marketing campaign and expect like 100,000 users coming to your shop, then you need to think about scaling out your web server too, because on just one server you probably will not survive a crazy load. As with anything on AWS, there are multiple ways you can do it.
So first of all, as you can see on this diagram, in our public subnet we now put a load balancer. In AWS we use two main types of load balancers: the Network Load Balancer, used mostly for TCP/IP, like low-level traffic, and the Application Load Balancer, used for HTTP and HTTPS traffic; in our case we definitely have to use the ALB, because it gives you easy configuration and is optimized for the HTTP protocol. So we install the load balancer in our public subnet, and the load balancer points to an Auto Scaling Group. An Auto Scaling Group is a concept in AWS where you tell AWS: here's an image of my operating system, all pre-configured, and in case, say, CPU utilization stays above 80% for 5-10 minutes, create another server that is an exact copy of the original server. So that's what we have here, but since it's WordPress we have a slight problem, because in WordPress there's a magic folder called wp-content. In wp-content you store your theme and your plugins, which most of the time you don't change often, but when there's an upgrade you must upgrade, so all the code changes go into this folder; it also holds static assets, so whenever you upload images, pictures, media files, they go into that folder too. And once you manage content and images, you have to make sure that your wp-content folder is replicated to every single server you create automatically during your scale-up operations. The go-to solution for that, probably the easiest one, is to use EFS. EFS is a network file system provided by Amazon, so basically it's NFS, but managed by Amazon: you create it and you mount it. So once you have EFS, you mount a new NFS volume in your Linux operating system and point your wp-content to be stored there. This solution works perfectly fine, but if your website is popular... it happened to me: once I had a call early in the morning from my client, and he was like,
the website was up and running for almost six months, running smoothly, no issues, good performance; but they were running some marketing campaign, and accidentally it was successful, so a lot of traffic came. What happened was: since our content, especially the static assets, was stored on EFS, there were too many IO operations on EFS. EFS, when you use the general purpose version, is optimized for high throughput, and it works in such a way that when there is a lot of throughput, a lot of IO operations, it bursts to high throughput; but only for a short period of time, because you have something called the burst credit balance. So you have some credits, and you cannot burst at high throughput forever. Once you're in a situation where too many visitors are asking, hey, please give me this fancy image, you are burning your credits, and once your credits go to zero, EFS becomes absolutely slow; it's super slow, you run an ls command on that network folder and it takes seconds. That's what happened to me, and then I learned about this burst credit balance; you can pay Amazon to top it up, but it may be very expensive. EFS is not a cheap service, EFS is actually an expensive service, so be quite careful if you want to use it for things like a CMS. Also, on this diagram I added a web application firewall: since we need to address the problem of hackers who want to hack us, we can use AWS WAF; they give you a solution for that. Okay, so since we've been talking about this EFS problem, and it happens often when your website is popular, what's the solution? The solution I found for myself is: I always try to move my assets to S3, if it's AWS, of course. So on AWS, you can tell WordPress to store your static assets on S3, and moreover, AWS has its own CDN, called CloudFront; so you can configure your WordPress in such a way that all assets you upload inside your WordPress will go to S3.
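The burst-credit behaviour described a moment ago can be modelled very roughly. All numbers below are invented for illustration and are not real EFS parameters:

```python
def simulate_burst_credits(baseline_mibps, demand_mibps, credits_mib, seconds):
    """Toy model of EFS bursting: demand above the baseline throughput
    drains the burst credit balance; once credits hit zero, throughput is
    throttled down to the baseline. Quiet periods refill the credits."""
    served_history = []
    for _ in range(seconds):
        if demand_mibps > baseline_mibps:
            if credits_mib > 0:
                # Bursting: serve full demand, paying for the excess with credits.
                credits_mib = max(0.0, credits_mib - (demand_mibps - baseline_mibps))
                served_history.append(demand_mibps)
            else:
                # Credits exhausted: throttled to baseline (the "super slow" phase).
                served_history.append(baseline_mibps)
        else:
            # Below baseline: credits accrue while the file system is quiet.
            credits_mib += baseline_mibps - demand_mibps
            served_history.append(demand_mibps)
    return served_history, credits_mib
```

Running it with a sustained demand of 100 against a baseline of 50 and 200 credits shows full throughput for the first few seconds, then a collapse to the baseline: exactly the cliff the speaker hit during the marketing campaign.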
Then, whenever the assets are served to the outside world, the URL you get will be your CloudFront URL. When you configure CloudFront for such a use case, it sits in front, and CloudFront decides: okay, should I go to S3 and fetch an image file, or should I go to my ALB, execute some PHP script, and get HTML or something else? And when you configure your CloudFront, you must restrict access to your ALB in such a way that only CloudFront can request HTML from that load balancer. Because otherwise, what may happen: let's say someone wants to launch a DDoS attack against you; if they DDoS CloudFront, it probably won't be successful, because CloudFront can absorb a lot, but if they somehow figure out what your ALB is and attack the ALB directly, they might succeed. Of course, AWS offers the AWS Shield product that protects you from DDoS, but if you want to mitigate this risk even more, I recommend using CloudFront and setting up your ALB so that only CloudFront can reach it; normally you set up a special secret HTTP header that only CloudFront knows and sends to the ALB. And again, please apply your WAF to CloudFront to get even better protection. This is the final design: here we have our web application firewall, which mitigates the risk of SQL injections or cross-site scripting attacks, or does GeoIP filtering if you want; then traffic goes to CloudFront, which is responsible for traffic distribution, and in case it's an image, it goes to your images bucket, otherwise it goes to your ALB and runs WordPress. One comment here on the AWS WAF product: I've used different WAFs, and it's quite good and not too expensive, but it's a bit basic; for example, Cloudflare also has a web application firewall product with, I'd say, more rules, so maybe you can replace this one with Cloudflare.
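The CloudFront routing decision and the secret-header restriction just described can be sketched as follows. The header name, secret value and path prefixes are made-up examples; in a real deployment they live in the CloudFront distribution and the ALB listener-rule configuration:

```python
SECRET_HEADER = "x-origin-secret"   # hypothetical custom header CloudFront adds
SECRET_VALUE = "change-me"          # shared secret; rotate it in real life

# Hypothetical path prefixes that identify static assets stored in S3.
STATIC_PREFIXES = ("/wp-content/uploads/", "/images/")

def choose_origin(path):
    """Mimic the CDN's routing decision: static assets are fetched from the
    S3 origin, everything else goes to the ALB that runs WordPress/PHP."""
    if path.startswith(STATIC_PREFIXES):
        return "s3"
    return "alb"

def alb_accepts(headers):
    """The ALB only serves requests carrying the shared secret header, so
    traffic that bypasses the CDN and hits the ALB directly is rejected."""
    return headers.get(SECRET_HEADER) == SECRET_VALUE
```

So `/wp-content/uploads/logo.png` resolves to the S3 origin, `/index.php` to the ALB, and a request without the secret header never reaches WordPress even if an attacker discovers the ALB's address.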
Okay, so here you can see some basic configuration for CloudFront; you can get it from the internet or from my slides. And finally, the WordPress updates. As you saw, it takes some magic to keep all your plugins, the platform and the themes updated, and you always need a particular version, a particular image of your operating system, to use in your auto scaling group. So here is how I usually do it: we have a schedule, let's say every two weeks or every week, depending on the project, and we review all the plugins and changes, what needs to be upgraded. In the staging environment we do the upgrade, we upgrade all the plugins and themes, test that the website is stable, and then we can build an EC2 AMI image, that's a machine image in the AWS infrastructure, and we use EC2 Image Builder to automate the OS packaging and patching and apply the plugin upgrades. Then you have your image, your AMI, and next you do an automatic upgrade: on your load balancer you have one auto scaling group running your old version, and then you create another auto scaling group with the new version of your operating system, basically the new WordPress and new plugins, and then you just switch the traffic; it's quite simple in the cloud. And yeah, so here we go: here you can see some references to the plugins I've been talking about, and thank you; any questions?

Thank you very much. There is a short question, and I also noticed some of the questions came in early, and it seems to me you started answering them already during your session. So there is one question here, for example: how would you rate AWS compared to other cloud providers, why is it better than Google Cloud?
Oh my gosh, that's a holy-war question. I wouldn't rate any of them; in my opinion they just leapfrog each other. And again, I use three cloud providers: I use GCP, I use AWS and I use Azure. From my experience, if you look at enterprises, large organizations, they usually go with AWS, because it has been on the market like forever, though Azure is catching up because Azure is quite popular; GCP is still behind, but I see them more in the startup space or with small-to-medium-size companies, because they are cheaper, especially if you use only Kubernetes; if you build all your infra on Kubernetes, the Kubernetes offering from GCP may save you some money; I don't know the exact percentage, I didn't calculate it, sorry, but I see it's cheaper.

Any experience with European cloud providers, let's say PlusServer, 1&1, OVH? I'm not an infra guy, sorry; I just use DigitalOcean for my pet projects, because it's cheap and I always know how much I pay them. Because with all cloud providers, whenever you use the pricing calculator, you will pay X dollars, and that's your lower bound; you always pay more, unfortunately. But with DigitalOcean, when they say, hey, you're going to pay 10 dollars, 16 dollars, 20 dollars, that's what they charge me.

Yeah, so these are things one has to keep in mind. Well, thank you very much for sharing here today; great to see you again. We're also planning some social events, as things seem to be opening up more and more in Singapore, so we'll ping you and anyone else we have contact with in Singapore for a social event, a kind of follow-up to the speaker sessions, something like that, finally. Yeah, I look forward to that. Okay, great, so I wish you all the best, thanks for joining. Okay, very cool, and yeah, one more speaker coming up: Christian Adell. Chris, I give you the rights right now and unlock you; you can make yourself available here and join the session with the sound. So, Chris: these headphones that you see at the bottom, the blue button, you just
need to click on it, and click on it again, and then you should have the sound. I see you have it already; you just need to unmute yourself and also activate the screen. Okay, great. So I think you are seeing me now. Yes, we see you; where are you? I'm in Barcelona. Barcelona, Spain; it looks like the sun is shining. Yeah, we are at 22 degrees, Celsius degrees, yes. Okay, I have to say, in Singapore it's very hot too, but in Singapore there's not a beach like there is in Barcelona, in Catalonia, right. And yeah, I'm just getting the info that we still have around over 100 people watching, for example also on Chinese channels, and people are watching on YouTube; we'll get the numbers later. So actually it is late now in Singapore, but you know, we're streaming around the world, and I think people are really looking forward to your session.

Okay, so your session will be about open source network automation in 2022. You already had a session on the network automation topic at FOSDEM; I mean, it's a huge topic, right, and even two sessions can't be enough, but I'm looking forward to your follow-up here. And I would like to introduce you a bit to the audience, whoever hasn't seen your session, maybe at FOSDEM, so they can learn more about you now. So, about Christian: Christian is currently working as a principal architect in network automation at Network to Code; so a lot of network, everything is about networks with Christian. He has been working on improving network manageability and resiliency for more than 15 years up to now, serving in different roles: as a network reliability engineer, a DevOps engineer (if there's actually such a thing; I think everyone should be a DevOps engineer, we need things to work, right), and a network automation engineer. Christian loves developing software to improve network operations and build network services, and also contributing back to the community through open source projects and promoting knowledge
sharing, in Barcelona and now here also at the FOSSASIA Summit. Christian, thank you very much for joining us, and please take all the time here that you need for your session. Thank you very much. Thank you very much, Mario, for your introduction. To share the screen, can you tell me where the button is? Chris, I also made you presenter; you can upload a PDF or share your webcam. I can manage the presentation. The thing is, I think this is the screen, I have it now, and this is the one. It's loading. I guess that everyone can see it? It's still loading; if it's big it takes a moment, but afterwards it should be fine. We can see it: Open Source Network Automation in 2022. Welcome! Thank you very much. Excellent. So, thank you very much for having me today. Being the last session, I would like to make it easy to digest, because the topic is a really interesting one, at least for me, and I think for everyone around. But before jumping into the actual content of the session, I would like to present myself and also the company that I'm working at. As you said before, Mario, Network to Code is a relatively small solutions company, focused 100% on providing network automation solutions, and within this company there is the architecture team that I'm part of. Here you also see on the screen Marek and Michael; they work together with the different teams that are providing new solutions, bringing a holistic view of how the different components can work together. What makes the company different is that we do everything as open source projects: we try to create solutions, build solutions by composing the different pieces that are available. Obviously we can work with multiple vendor solutions, but if there is a gap, there is always the option to put in an open source piece, and everything that we do we bring back to the community; we also create and sponsor projects ourselves. Then let's try to narrow down the topic, because we
actually had a network-related session earlier, one before what we are going to talk about today. When I talk about network automation, this is a really open question; you can think about networking in a not very concrete way, because we are actually working in different planes. If we look at the bottom of the stack, we see the data plane, which is where the actual packets are forwarded: one packet comes in on one port and goes out another port. This has been pretty static for a long time, but since the rise of SDN with OpenFlow and other protocols, or just directly programming the ASICs, you can change that behavior. We are not focusing on that, though. On top of that we also have the control plane. The control plane is where traditionally the network routing protocols work; they work together, they build a distributed understanding of the network, and then this representation of the state is pushed into the ASICs to change, via tables, the behavior of the packets. But the topic for today is the upper plane, the management plane, where we as engineers, as architects, define how the control plane and the data plane are going to look. Simply said, it is the way that you connect to the devices, traditionally via a command line interface, and change their behavior. That said, I think it is good to give some context about one of the main reasons I'm here today. As I mentioned at the beginning, with the introduction, I have been working in networking for more than 15 years, and for the first 7 or 8 years my main focus was network architecture and network operations; I was working mostly as a CLI-focused engineer. I had knowledge of multiple vendors, multiple syntaxes, and I was able to translate my knowledge into the network's behavior. But there was a moment when I understood, through the DevOps ideas, that we could change the way that we operate the
infrastructure, not only the network but the infrastructure as a whole, and I changed my career. I went to a company where I started as a DevOps network engineer, and I remember perfectly the moment when my mind was blown and I discovered the beauty of open source. I was debugging a problem with an SNMP client for a monitoring solution that was open source. We were trying to understand what was going on; we understood that it was not behaving as expected, and my first reaction, as I was used to, was: okay, we detected the problem, we just call the vendor, and at some point the problem will be solved. That was my default. But luckily for me, side by side I had a guy smarter than me, with more experience, and he took the keyboard and went into a pdb session in Python, because it was a Python application. He was able to understand what was going on, he was able to fix the issue, so we had a working solution in the moment, and finally he contributed the fix back to the community, to the upstream project. As you can imagine, for myself, who was used to interacting with the devices and the operations through a predefined command line with really narrow options, this completely blew my mind, and this was the reason I jumped into network automation as a whole. The same journey that I did, a lot of people are doing these days; there are a lot of companies making this transition from traditional network operations. They are used to understanding how the network should work, but the way to communicate that, the way to enforce it, has been a really manual process up to now. This has been changing, and open source has a big role here. There is tooling, as we are going to see later, that helps you make this transition; you can use the Ansible framework for configuration management like you do for servers. This is not new, it has been around for a while. But we are seeing, working with multiple customers and multiple projects, that there is a big dilemma, a big decision that
you have to take. One option is to just focus on the tools, on a specific project, and move forward with that project; the other is to think about your network automation strategy, the way that you provide solutions to manage your network with a more overall understanding, so that you see the whole picture and not only how a tool solves a specific problem, because maybe it is not solving the whole problem. This decision is the one we are going to try to address today. The approach of the presentation is as simple as sharing the same framework, the same mindset, that we in the architecture team at Network to Code, and Network to Code as a whole, apply to all the projects we work on. We try to focus not on the tools; we focus on the different functionalities that compose a network automation solution, and then, once you have the functionalities, you can place the different components that solve the problem. So we have to start first with the network automation framework. This is not rocket science, but it really, really helps everyone understand what we are talking about. We define our network automation framework with these seven components. Remember that we are talking about network automation, so at the bottom layer there should be network infrastructure, for sure; this network infrastructure, as we are going to see, can be of different types. Then, if we go to the top, we have the user; the user is the one who is going to use all the automation. In the middle there will be different pieces: the source of truth, telemetry, orchestration; we are going to go through them one by one. And on the left side we have the CI/CD process. We must not forget that this is network automation, but it is also a software development project, so any tooling that we use for developing any kind of application these days can be applied here, and we try to keep this in mind always as we work on these projects. So let's start one by one. The user interactions: really important
ones, because everything starts here. You have multiple options, and you have to select the one that solves the problem for your user. Maybe there is a user who wants a user interface, a graphical user interface; there are options. Maybe there are other users who want a programmatic interface, an API; maybe they want a GraphQL API. Maybe another one wants a dashboard just to visualize the data. So it's the work of the architect, when thinking about the network automation solution, to understand which user interfaces have to be implemented, because maybe not all of them should be in the project. Once you have the user interfaces, you have to start focusing on defining how your network should look. Traditionally, the way network operations have worked is that the state of the network is the state of the network; there is no reference. The reference, if there is one, is in your head, so you remember how the network should look, or maybe it's in some diagram on some wall, but nothing more sophisticated. The idea is that to achieve the goal of automating your network, you have to properly define the intended state, what the network should look like. We call this the source of truth: it is where your variables are defined, how your data, your network, is going to be modeled, to eventually move into the desired state; how we want the network to behave, how we want the routing protocols to be controlled. All this information should be properly defined in the source of truth. Once we have the source of truth, because we are automating a process, you can imagine that there should be an orchestrator, a workflow orchestration, somewhere. The function of the orchestration is nothing else than coordinating the multiple steps of the execution; maybe there are multiple steps that have to be executed in order to fulfill a solution, and it has to sit in the middle of everything. We are going to see in an example how this fits together. Then we
move into what is actually more specific to network automation. The automation engine is the place where we put the different tooling that is going to help us move the intended state, which is a data model, a reference, into something that the network infrastructure understands. Because on the network infrastructure, what we have are routers and firewalls from multiple vendors, supporting different interfaces. Maybe one only supports CLI because it's a legacy device, but we also have to deal with newer interfaces like gNMI, or even with cloud network services where you have to interact with custom APIs. Whatever it is, the automation engine is going to take the reference, the model, and translate it, simply a translation, into something that we can actually activate. Then, the network infrastructure is really heterogeneous: we have multiple devices that are physical boxes, virtualized devices, cloud network services, everything. We should be adaptive to whatever we have and just use the available interfaces to be able to set the state and to get the operational state: first we push the configuration, and then we have to be able to observe and communicate the operational state. And this is what the telemetry and analytics block is going to do. It is going to collect information and enrich it, because information without metadata is sometimes useless. When we collect information from a device, because we know which device we are connecting to, we can enrich this data via the source of truth; beyond the raw device data we can add more information, so that when we visualize this data in a dashboard we can classify it in a more educated way. Finally, all this data, because it's a lot of data, has to be stored, as we are going to see when we talk about the related open source projects. The beauty we have here is that we are not reinventing the wheel when it's not needed. Obviously we have
to talk over whatever interfaces the devices have, but for the rest: for transferring the different messages we are going to use open source projects, and for storage we are going to use other open source projects. When I say open source projects, I mean general-purpose open source projects. Finally, as I said before, CI/CD helps us deploy the network automation logic while taking care of not breaking production; the same practices we apply to the software development lifecycle we are going to apply to network automation, obviously taking into account the specificities of network devices. And now we are going to do an overview, not exhaustive at all, of the different projects that belong to the different areas, taking into account only open source projects. We usually use these in a brownfield environment, where there are some vendor solutions that we integrate with, but today we only mention some relevant open source projects. First, an important piece: the source of truth, where we can define this data. Obviously the data has different particularities, different characteristics; depending on whether the data has to be changed very often, or by a user, it can be stored in different places, like in Git, or maybe in relational databases. Here we have two pretty common sources of truth that are complete solutions for data center infrastructure management and IPAM: NetBox and Nautobot. They are two of many available source-of-truth solutions that tackle different parts of the data, of the information that we have to store. When we have defined this information in the source of truth, what comes next? Obviously, someone has to change it, and here we can offer multiple solutions as open source projects: we have chat applications like Mattermost, we can build our own CLI applications, in Python with Click, in Go with Cobra, multiple of them. We also have complete IT service management like
GLPI or iTop, and to visualize we have Kibana or Grafana, which can help us get data out of the telemetry storage. Then, to orchestrate everything, just to name two of them: AWX and Rundeck are orchestrators that help to play with the different components that are part of this architecture. There are many of them, because any kind of tool that can orchestrate processes can be used in the network automation ecosystem. And, as I said before, what makes network automation a bit special are the different libraries that we have available for the automation engine. The automation engine first uses templating languages, which I have not pictured here, for instance Jinja, to render configuration templates from the data we have in the source of truth. And then we have multiple projects, like NAPALM, Netmiko, pyGNMI, Nornir, Scrapli: libraries in different languages that connect to the different interfaces of each network device and change its state. The same applications or libraries used to change the state can also be used later in the telemetry part; as I said before, you set the state and you also get the state, but I placed them here just to be more specific. Notice that there is also Ansible, there is SaltStack, as configuration management: you can configure servers, you can configure network devices. And also Terraform, because apart from configuration management there is also the provisioning of network services, if you are running in the cloud; this is also part of the features we have to fulfill, and the definition of the state is going to come from the source of truth. In the telemetry part, obviously, what makes this special is the interface, how we connect to the devices, because usually you cannot install an agent on a network device, so you have to go agentless. Telegraf helps a lot here, hosting multiple plugins to connect to the devices, get the information, and enrich it from the source of
truth, and finally consolidate it into a time series database like Prometheus or others. We may also use Kafka, because in some places we have a lot of data to transfer and distribute to multiple receivers. There are also tools specific to network automation, for instance Suzieq, which helps to get information from network devices. And when we talk about network monitoring, we are not only talking about metrics and logs; we are also talking about flows, flows of data. Tools like pmacct or GoFlow can help you get all this information together with the logs and the metrics. This is really important, because network monitoring here works together with application monitoring: all the information from the network, the servers and the applications can go to the same place, and you can imagine what this convergence of information makes possible. Last, among the relevant open source projects: as I said before, everything you use in the software development lifecycle can be used here, but you also have networking-specific tooling, like the simulation tool called Batfish, which helps you take a configuration and create an analytical model to represent how the network device is going to behave. Or you can even simply run the network operating system as a container or a virtual machine, with projects like Containerlab or KNE, the Kubernetes Network Emulator, multiple of them, that can help you on the way to deploying a change on the network infrastructure. We are talking about management: you can first spin up a network device, push the configuration, see that the configuration is good, and then happily deploy it to the real network. As I said before, this is a really, really quick summary of all the different open source projects, but the best way to understand this is to see an example, a really quick example. Imagine that you want to implement a change on the network using this architecture, this framework. We can understand that the first thing
that we have is a user, to whom we present an interface that is a chat application, Mattermost, so he can set the state that he wants to change on the network. What happens next is that this change in the intended state has to be pushed to the source of truth; in this case we receive the change through the ChatOps integration in Nautobot, which keeps the intended state, and it automatically triggers a workflow, a workflow that eventually will deploy this change to the network. So AWX takes care of connecting to Ansible, saying: okay, with this information that we want to change, get some templates from another source of truth, from Git, and render the configuration, the configuration that we have to push to the network. Then, thinking about making a reliable change, we can spin up a Batfish container just to test whether this configuration is syntactically correct and whether the outcome of the configuration is what we expect it to be. And when we are comfortable that we are not going to break anything, because it's Friday afternoon and we should not break anything, it's time to deploy, it's time to connect to the different networking devices; they could be physical devices or they could be in the cloud. But this is not over. When we change the state, what's next? We have to close the loop. In the telemetry and analytics part, we are going to collect the flows from these devices and check, via feedback to AWX: are the flows that I am seeing consistent with the change in the source of truth? I mean, if we have created a rule that enables certain flows, the flows that I should be seeing should be accepted. So this is the point: we change the state, observe the state, and validate it; if it does not match, the loop can continue until we get into a state that we are comfortable with. The same as I mentioned for firewall automation, here there is a quick summary of multiple real use cases that we have been solving in multiple places, from managing all the great firewall companies
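The rendering step in that walkthrough (the automation engine turning source-of-truth data into device configuration, typically via Jinja templates) can be sketched in miniature. This is an illustrative, stdlib-only stand-in, not code from any real project; `intended_state`, `INTERFACE_TEMPLATE` and the field names are assumptions, and in practice you would pull the data from a real source of truth like NetBox or Nautobot and render with Jinja2:

```python
from string import Template

# Hypothetical intended state, as it might be modeled in a source of
# truth (the structure and field names here are illustrative only).
intended_state = {
    "hostname": "edge-router-01",
    "interfaces": [
        {"name": "GigabitEthernet0/0", "ip": "192.0.2.1", "mask": "255.255.255.0"},
        {"name": "GigabitEthernet0/1", "ip": "198.51.100.1", "mask": "255.255.255.0"},
    ],
}

# A minimal template; real projects typically use Jinja2 instead of
# string.Template, but the translation idea is the same.
INTERFACE_TEMPLATE = Template(
    "interface $name\n ip address $ip $mask\n no shutdown\n"
)

def render_config(state: dict) -> str:
    """Translate the intended state into device configuration text."""
    lines = [f"hostname {state['hostname']}", ""]
    for iface in state["interfaces"]:
        lines.append(INTERFACE_TEMPLATE.substitute(iface))
    return "\n".join(lines)

print(render_config(intended_state))
```

The rendered text is what a library like NAPALM or Netmiko would then push to the device, and the same data model can later feed the validation side of the loop.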
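The closed loop described at the end (change the state, observe it via telemetry, validate against the source of truth, repeat) can be summarized as a small control loop. All of these functions are hypothetical stubs standing in for the real components (Ansible or NAPALM for deployment, pmacct or GoFlow for flow collection), a sketch of the feedback idea rather than a real integration:

```python
def deploy_change(intended: dict, device_state: dict) -> None:
    """Stand-in for the deployment step (Ansible/NAPALM in practice)."""
    device_state["allowed_flows"] = set(intended["allowed_flows"])

def collect_flows(device_state: dict) -> set:
    """Stand-in for flow telemetry collection (pmacct/GoFlow in practice)."""
    return device_state["allowed_flows"]

def validate(intended: dict, observed_flows: set) -> bool:
    """Are the observed flows consistent with the intended state?"""
    return set(intended["allowed_flows"]) <= observed_flows

# Intended state: one flow (src, dst, port) that should be accepted.
intended = {"allowed_flows": {("10.0.0.1", "10.0.0.2", 443)}}
device = {"allowed_flows": set()}

# Change, observe, validate; loop until intent and observation match.
while not validate(intended, collect_flows(device)):
    deploy_change(intended, device)

assert validate(intended, collect_flows(device))
```

The point of the sketch is only the shape of the loop: the observed state is continuously compared against the source of truth, and deployment repeats until they converge.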