All right, we're about to get started. The next session is by Dennis, from the Naturalis museum in Leiden. Is it Leiden? Yeah, in Leiden. And he will talk to us about the infrastructure they've built there. So enjoy. Yes, hello everyone. This talk is about something entirely different: dinosaurs. Let's start with an introduction. Behind me you see Trix, an actual Tyrannosaurus rex that lived about 66 million years ago. She's now one of the main attractions in our natural history museum in Leiden. Introducing myself a bit: I live and work in Leiden as well. I identify as a Homo sapiens, a meagre 38 years of age. I moved to Leiden to study political science and ended up as a member of the IT crowd at Naturalis; a couple of them are sitting over there as well. In my free time I volunteer in a social centre called Vrijplaats Leiden, so if you happen to be in Leiden, please come and visit us, or contact me if you want to organize an open source event, a hackathon or something. And talking about open source: I've been an open source enthusiast ever since installing Debian for the first time, back in 2004 already, and I've been coming to FOSDEM since 2012. This is my ninth edition and my first talk here, so that's quite nice. It has always been a really inspiring, sometimes overwhelming and for sure exhausting experience: two days of talks and talks and information. As a result, over the last couple of years we've applied all kinds of tools and practices at Naturalis that were inspired by talks at FOSDEM. And I think that's a big part of the job of an IT operator: to determine the right approach, the right tools, and how to use them best for your job. There's really a lot to choose from, with the 800 or so talks here at FOSDEM. So in this talk I want to present a kind of real-world use case.
I skimmed through the program, and a lot of developers are talking about what they made for someone like me, an operator. This is the other perspective, and I hope it's useful. I'd like to give you a glimpse of the way we dealt with the challenge of applying all those wonderful, powerful open source tools to a domain that has, until recently, been really fixated on proprietary solutions. The goal of this talk is not to pretend that we have the ideal, universal solution or something; I just hope it's interesting for you to see how we managed the problems of building a museum. We're actually quite proud, of course, of what we achieved with our implementation, but it's really far from perfect. After expanding a bit on the use case, I want to tell you more about the circumstances we had to work in, the approach we took, the things we achieved and the end result, and at the end I'll have some closing comments. So what was the use case? We were asked to deploy and manage an entirely new natural history museum: ten exhibitions and experiences with all kinds of technologies such as media players, projectors, microcontrollers and interactives, plus a campus network, because it was a new building, and all the management tooling around that. To give you a bit of an idea about the circumstances, first something about our institute. Naturalis Biodiversity Center is first and foremost the manager of the Dutch national natural history collection: we have 40 million specimens stored in a big tower and in a new part of the building, with lots of old artifacts, small insects, whole elephants, anything you can think of. Apart from that, we are a research institute as well, with at least a hundred researchers doing all kinds of research related to biodiversity, and we're a natural history museum that is really popular with families and kids.
But that poses a really fundamental challenge for us as a support organization, because biodiversity is the central theme of our institute, and basically anything people can think of doing related to biodiversity, they do, and we have to support it: building a museum, running a cloud for researchers doing their analyses, all that kind of stuff. It's difficult to do everything really well. At the start of the project we already had quite a bit of technical expertise in-house. We have an IT department of 30 to 35 people: operators, developers, support. And the operators were already relatively well versed in configuration management: we used Puppet and Foreman, specifically for deploying web services, we have infrastructure based on OpenStack and Ceph, we'd done some experiments, so to say, with Kubernetes, though we cancelled those, and we had monitoring and analytics based on Sensu, the ELK stack and Grafana. So that's not normal for a museum; a regular museum, in the Netherlands at least, doesn't have this kind of IT. As I mentioned, we built a new museum, and what you can see here is the museum actually being constructed. The part on the left is the new part, and that's also the museum part; it's completely new. When the project was actually running, this wasn't ready, so we had to start building stuff when we couldn't even access the building, or when it wasn't completely finished yet. Apart from that, we worked together with an internal museum department that was really used to working with suppliers who were fixated on proprietary solutions: media players, all kinds of show controllers. And in general the museum-building industry, if you can call it that, is, to put it positively, only just starting to be influenced by best practices from IT and DevOps. I put it there: cattle versus pets.
That's a big question mark for them, something they didn't even consider. Everything in a museum is treated as this special thing: they're used to just making that one thing, and then, well, here you have it, and you maintain it or something. So in a museum you also have to deal with a broad set of technologies; as I mentioned: audio, video players, Unity games in this instance, show controllers, KNX gateways, microcontrollers, all kinds of stuff. And of course with a tight schedule, also known as no proper time for testing. We used to have a testing phase, but it got squashed, so basically we didn't have one. In similar situations, other museums would hire an external company, or several companies, who would then build and deliver an infrastructure. If you're lucky, it's built according to requirements you could set as the internal support organization, and if you're really lucky, they deliver something that integrates with the other things you do. So in an effort to keep down the diversity of technologies we have to manage, our approach was to build on the existing infrastructure and know-how within the organization, and to get involved really early in the process. We had a bit of a struggle to get that message across internally; there was quite a bit of politics involved, but we held on to our ideas. As a publicly funded institute, I think that's also really important. And believing in the power of free and open source software, our aim was to use as many open source components as feasible and to combine this with infrastructure-as-code and DevOps practices. Our ideal was that every variable that actually determines the workings of the museum would be under version control and managed, to make every deployment repeatable.
So quite early in the process we made this architecture diagram to get a bit of an overview; I'll go over it quickly. At the top you see different groups. These are not the visitors, but the users of the management infrastructure, the technical infrastructure so to say, and they are a diverse group too. At the bottom you see an illustration of the diversity of all the equipment we had to manage. The blue line involves quite a few open source management tools; I won't go into too much detail about those. We use GitLab for version control, Mattermost for a kind of ChatOps, Hugo for documentation, Nextcloud for content management, Sensu and Prometheus for monitoring, and we also have to use TOPdesk for some reason. And in the middle of it all is Ansible AWX. The design is basically: put Ansible in the center and make it the lingua franca of our automation. This diagram didn't really change much during the project; it was mostly a matter of filling in the boxes. So it was a useful overview, also for explaining to others in the organization: okay, this is roughly what we're going to build. About the choice for Ansible: we didn't have much experience with it at the start of the project, but, and I'm going to admit something here, we had worked for another museum in Leiden, Museum Boerhaave, and that was a bit of our guinea pig, our testing ground. I'm not sure if they're watching, but basically they asked us to help out with their new exhibition, and there we got involved really late in the process. A supplier had already installed Ubuntu Linux on the computers by hand, and then we had to deploy the applications. So what we did was simply make an inventory of all those computers, and with some Ansible playbooks we could actually manage that part. And I think that's an important feature.
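A setup like the Boerhaave one can be sketched as a static inventory plus a small playbook. This is a minimal illustration, not our actual configuration: the hostnames, addresses, group names and role name are all made up.

```yaml
# inventory.yml -- static inventory of the hand-installed exhibition computers
all:
  children:
    mediaplayers:
      hosts:
        player-01: { ansible_host: 10.0.10.11 }
        player-02: { ansible_host: 10.0.10.12 }
    interactives:
      hosts:
        touchtable-01: { ansible_host: 10.0.10.21 }

# site.yml -- deploy the application on top of the pre-installed Ubuntu
- hosts: mediaplayers
  become: true
  roles:
    - mediaplayer   # hypothetical role that installs and configures the player
```

Run with `ansible-playbook -i inventory.yml site.yml`; because the tasks are idempotent, re-running only changes what has drifted, which is what makes managing just this one part of a larger, uncontrolled environment workable.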
Although we didn't have control over the whole situation, we could manage that specific part quite well, and Ansible allowed us to do that; it wasn't an all-or-nothing situation. Another reason for choosing Ansible was that it was really the most popular config management tool for network automation. So the promise of being able to use the same config management tool across basically the entire spectrum of the museum, and maybe even the rest of our infrastructure, was really appealing to us. I'm now going to try to give you an impression of how far we got with that, starting with deployment. For the network switches we use Cumulus Linux. The idea is that you have white-box switches, like servers, on which you can install Linux. Our process for deploying Cumulus Linux on those switches is based on the ONIE boot loader and ZTP, zero-touch provisioning. But as you remember, we didn't have access to the building, and we also didn't have the switches yet, so we started testing on virtual machines; since Cumulus Linux is basically just Debian, we could build the entire campus network in a virtual environment. For computers: we have small form-factor computers for all our interactives and media players, and we deploy those with Ubuntu MAAS, Metal as a Service, so that from AWX, based on Ansible playbooks, we can commission and deploy computers from scratch. Then config management. For the switches we have some base roles that are shared between all switches, and on top of that we use Ansible templates for the spine and core switches and for the leaf switches. A lot of vendors have their own specific Ansible modules; with Cumulus Linux we could just use templates. The same goes for our computers: we have several base roles, for example for all the interactive museum computers.
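Because Cumulus Linux is Debian underneath, switch configuration can be rendered with plain Ansible templates instead of vendor-specific modules. A minimal sketch of what such a leaf role might look like; the paths, variable names and port layout are invented for illustration:

```yaml
# roles/leaf/tasks/main.yml -- render interface config from per-switch vars
- name: Render /etc/network/interfaces from a Jinja2 template
  ansible.builtin.template:
    src: interfaces.j2
    dest: /etc/network/interfaces
  notify: reload networking

# roles/leaf/templates/interfaces.j2 (excerpt, shown as comments)
# {% for port in access_ports %}
# auto swp{{ port.id }}
# iface swp{{ port.id }}
#     bridge-access {{ port.vlan }}
# {% endfor %}

# roles/leaf/handlers/main.yml -- ifreload is Cumulus's ifupdown2 reload
- name: reload networking
  ansible.builtin.command: ifreload -a
```

The per-switch data (which ports carry which VLANs) then lives in host and group variables under version control, so spine, core and leaf switches share the same mechanism and differ only in data.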
And then on top of those we deploy the games, which are basically all Unity 3D games, because the suppliers don't make anything else, MPV for media players, and Chromium-based digital signage. We also do content provisioning, not on the switches but on the computers. We selected Nextcloud, so our content and software suppliers have a place to put the content, and with a small script we then get idempotent content updates on the computers. On top of that we built some orchestration workflows. For example, in one workflow we configure a network port for a specific device in the museum and then deploy the computer from scratch: Ubuntu Linux, the specific role of that computer, its content, and at the end of the process you have a functioning exhibit. And once we got started with automation, we thought: okay, we also have microcontrollers running Arduino. So we selected PlatformIO to deploy the firmware onto the microcontrollers as well. Typically we have a computer with an Arduino microcontroller connected to it, mostly over USB, and as part of the deployment we also deploy the specific firmware for that setup. We manage projectors too. Unfortunately, the projector suppliers don't have open firmware or anything like that, so we didn't go that far, but we did implement the basics: the network configuration, Sensu checks, and turning the projectors on and off as part of the workflows when starting an exhibition. We also have, for example, a KNX gateway to stop or start the power supply in an exhibition, and we did that by mapping KNX data points, or addresses, to Ansible hosts, so you can use the groupings in Ansible just like you're used to. As you can see, that's already quite a bit of scope.
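The KNX mapping can be pictured like this: each data point becomes an Ansible "host" with a local connection and its group address as a variable, so the usual group targeting works unchanged. This is only a sketch under assumptions: the addresses and group names are made up, and the `knxtool groupswrite` call (from the knxd project) stands in for whatever the real gateway integration uses.

```yaml
# inventory.yml -- KNX data points modelled as Ansible hosts
all:
  children:
    ice_age_power:
      hosts:
        projector-power: { knx_address: "1/0/1" }
        lighting-power:  { knx_address: "1/0/2" }
      vars:
        ansible_connection: local   # tasks run locally, talking to the gateway

# power-on.yml -- write "1" to every data point in the group
- hosts: ice_age_power
  gather_facts: false
  tasks:
    - name: Switch on this data point through the KNX gateway
      ansible.builtin.command: >
        knxtool groupswrite ip:{{ knx_gateway | default('knx-gw.example') }}
        {{ knx_address }} 1
```

The payoff is that powering an exhibition up or down is just `--limit ice_age_power` on an ordinary playbook run, the same interface as for computers and switches.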
And the nice thing is that we use AWX, the open source upstream of Ansible Tower, to delegate this whole package to personnel who aren't experienced with all this automation. For example, turning on the entire museum is done from AWX every morning by someone who works for security. We can also just schedule things: redeploy something entirely from scratch at six in the morning, before the museum opens; we can do that. And this is the end result. This is one of the exhibitions, about geology; we have an Ice Age exhibition with games and microcontrollers, projections based on MPV, and plenty more. So, to wrap things up. I think what we learned, at least, is that your specific local circumstances are really, really vital to the technical and organizational choices you make. For an organization like ours, which has to support a really wide range of services, a relatively simple and versatile tool is ideal. Ansible is also really suitable for, and forgiving of, imperfect environments: you don't have to fit your entire world into the paradigm of the tool; it's simple and forgiving enough to work in an imperfect environment as well. And of course, although we did not succeed in using open source tools for every aspect of the museum, I think we came pretty far. So to conclude: I think dinosaurs are definitely doomed, because even in these kinds of challenging circumstances, a tight schedule, a sector that isn't really eager to change its ways and is fixated on proprietary solutions, I think our case shows that it's possible to give the proverbial dinosaurs of your industry the boot. So if you want to know more, we can have a chat, and there are two more presentations about the museum: tomorrow I'll give a lightning talk about our usage of MPV, and on Monday I'll do a talk at Config Management Camp, kind of a sequel to this one.
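The morning "turn on the museum" job can be imagined as a playbook like the one below, exposed in AWX as a one-click job template with an attached schedule. The group name and per-host MAC variable are invented for the example, and `community.general.wake_on_lan` stands in for however the real job powers things on.

```yaml
# museum-on.yml -- wake all exhibit computers; in AWX this becomes a job
# template that security staff launch each morning, or that a schedule
# triggers before opening time
- hosts: museum_computers
  gather_facts: false
  tasks:
    - name: Send a wake-on-LAN packet to each computer
      community.general.wake_on_lan:
        mac: "{{ mac_address }}"   # per-host variable from the inventory
      delegate_to: localhost
```

Because AWX wraps this in a web UI with role-based access, the person launching it needs no Ansible knowledge at all; they see a single button, while the playbook itself stays under version control.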
And I'll go a bit more into detail about our workflows. You can come and visit us in the museum or check out our code. Thank you.