Hi, thank you very much. First of all, even though I have a mic, I don't think it's amplified very well — I have a rather weak voice, so if I need to speak up, just wave at me. Okay, that will be a challenge. Can you hear me? So there's no amplification through the mic; I hope at least the video stream can hear me. Hi there. I'd like to give a small talk about a small project I've been involved in since July 2015 called openATTIC. Let me dive right in — I probably have way too many slides for the time I have, so let's see how far we get. So basically, what does openATTIC do? What was the vision behind it? It started about six years ago, and like so many other open source projects it started with somebody scratching their own itch: there was a problem they needed a solution for, so they thought, well, we can do this ourselves. In this case, the company where openATTIC evolved from, it-novum, was a spin-off of another company. They were doing data center operations for them, and they needed storage. As you are probably aware, nowadays data grows faster than people can shove hard disks into their systems — data growth is everywhere. They needed to replace a number of proprietary storage systems and were quite surprised by the price tags on the quotes they received. So they thought: why can't we do this differently? And if you look at it, a Linux distribution nowadays gives you everything you need to set up a fully fledged storage system. You just buy cheap commodity hardware, shove in lots of hard disks, and you have a server that fulfills most of the common needs. So the idea was: okay, Linux by itself is good.
It has everything, but you need something on top that makes it a bit more approachable, easier to manage, and unified — because in many cases you have administrators who might be familiar with using a UI but are not that familiar with the command line. So openATTIC's vision really was to provide a friendlier user interface and a unified experience for managing all kinds of storage. Storage here means both what is usually called NAS storage — file-based, like Samba or NFS — but also block-based storage protocols; iSCSI in particular would be an example here. Later on, during the lifecycle of openATTIC, they also realized that single-server instances or even multi-node configurations can't keep up with the storage requirements. The developers looked around and figured out that Ceph might be quite a nice alternative here. Ceph is a distributed storage system in which you don't just have a single server where you add more disks — you simply throw in more servers, or even complete racks, if you need more storage. And Ceph pretty much organizes itself to make use of the storage, to ensure redundancy, and to make sure it scales along with the hardware that you give it. So it started as an in-house project and later became an open source product, I would call it. The idea behind it was that there was an enterprise version and a community version, and that the company would sell licenses with added support and other value on top to monetize the software. Interestingly, that didn't really work out. So when I joined the company in July 2015, we made a number of drastic changes to how openATTIC was governed, managed, and run as a project.
Before that, the developers basically all worked in-house at the company, and development took place, as with many proprietary products, very internally focused. Every once in a while they released their community version, but there wasn't really a community around it — there wasn't an infrastructure that invited users to come and work with the project. That's something we've changed drastically. We also got rid of the dual licensing that was in place. Back then the enterprise edition had a few additional bits on top that you would have to pay for. All of this was folded into a single codebase released under the GPL, and since then there's no distinction between enterprise and community editions — it's just openATTIC going forward. We also got rid of the requirement for contributors to sign a contributor license agreement. So, similar to Ceph, if you contribute to openATTIC all we require is that you add a Signed-off-by line to your commit message, similar to how the Linux kernel and many other open source projects do it nowadays. The bar for contributing code is much lower now, and that was really noticeable just by the growth of the community that we've seen since then. We also opened up a lot of other things that used to be internal — most notably, of course, the bug tracker. We are based on Atlassian JIRA, and we now have a publicly hosted JIRA instance that's fully open, so you can really see all the issues, all the roadmap planning; everything is transparent and open. You can leave comments, you can vote on issues, you can submit bug reports like you would expect from any other open source project. We also changed the way we work on the code: we now make much more use of different code branches, and we have established a process for submitting pull requests and commenting on them.
These were all things that were quite new to the openATTIC developers, so we learned as we went along, and by now there's no difference between being paid to work on openATTIC and being a community contributor — it all goes through the same procedures, the same requirements, and the same expectations. We also switched the release model: nowadays we try to put out a new openATTIC release at least once per month, roughly every four to five weeks, and we have nightly builds if you are curious — so if you want to test a new feature that has just been committed and you don't want to wait for the next release, just take a nightly build. With regard to feature development, we have kind of a train model: people work in parallel on features, and once a feature is ready and has passed review and all the tests, it is merged into the development branch that will eventually become the next release. If a developer doesn't make it in time, then since we are on a monthly cycle there isn't a really long wait before he has another opportunity to get his work merged. That really helped accelerate the whole development process and the way changes get into the project. Also, in the beginning many different components were managed in separate code repositories — the documentation was in one repo, tests were in another — and integrating them and keeping them aligned was always a bit of a challenge. So we simply lumped all of these repos together into one single repository, which means that you can now work on a feature, write the documentation, create the tests, have them all in a single branch, and commit and merge them at the same time. It's much easier to keep track and keep everything synchronized.
A few key aspects of openATTIC. We are well aware that we are not alone, especially when it comes to storage management — there are quite a number of projects out there that do similar things — so we tried to come up with a few cornerstones of what we would like to focus on. Primarily, the goal is storage management and storage management only. You see many projects that also start doing things like managing containers or plug-ins; they are sometimes aimed more at home users who want an appliance in the corner that isn't just a file server but maybe also an ownCloud instance, or provides a BitTorrent server, or what have you. This is currently fully out of scope: we really focus just on managing your storage and exposing it through various protocols, and that's it. Ceph support is something that we've added recently; that's quite notable. Of course we are fully GPLv2, with no arbitrary functional restrictions. There are a lot of free storage management systems that you can download and use, but they apply some form of limitation on you — for example on the amount of data you can store, or the number of concurrent users, or what have you — and once you reach that limit, all of a sudden you need to buy a license or pay to get over that barrier. That's not the case with openATTIC: you are free to do with it whatever you want, at whatever scale you want to use it. We're based on standard Linux tools — as I said, most distributions provide all the frameworks and tools you need to set up such a system by default; it's just a matter of orchestrating them and making them more accessible to the user, and that's the part we're taking on. We try hard to support multiple Linux distributions. Originally openATTIC came from the Debian corner, so we started with Debian and added Ubuntu later on. About two years ago we started adding RPMs for CentOS and Red Hat Enterprise Linux.
We added SUSE as well, and this gives us an advantage compared to some other storage management systems that are based on non-Linux operating systems. One key concern that sometimes comes up here is hardware support: most vendors have pretty solid support when it comes to providing Linux drivers in the server space, but if you're getting into non-Linux but Unix-like operating systems, the driver situation can sometimes be a bit more challenging. That usually helps us gain adoption. We don't force a choice of Linux distribution on you — you can basically use whatever you feel familiar with as the base platform and put openATTIC on top. Okay, what can we do so far? What does the functionality of openATTIC look like? Basically, the technology consists of two separate components. The most noticeable one is the web UI — that is what you see. With openATTIC version 2.0, which started about two and a half years ago, we switched from an ExtJS-based to an AngularJS-based web front end, so we use JavaScript libraries to make the UI visually appealing and easy to use. The back end is the other component, which has a RESTful API. That's also a new addition in version 2 — the former version 1.x was using XML-RPC. The RESTful API makes it a bit easier to talk to the back end, and the front end only uses the RESTful API. So everything that you can accomplish through the web interface can also be accomplished by making REST API calls. With regard to storage, we provide the usual suspects. In its simplest form — and this is where openATTIC comes from — you group hard disks with the logical volume manager, LVM, into a storage pool. We also support the ZFS file system, or the Btrfs file system if you prefer. So you have a basic storage unit, which is the storage pool, and openATTIC can then be used to carve volumes out of that pool based on your requirements. We support a number of file systems; as I said, ZFS is one of them.
Btrfs covers other use cases. So you can really choose how to configure storage for the workload at hand that you want to serve. We are in the process of adding support for DRBD, the distributed replicated block device. So in a multi-node setup where you have, let's say, two openATTIC instances, you can configure that a volume you create on one node will be replicated synchronously to the second node for redundancy purposes. The back end support has been in place for quite a while already, and we're now in the final stretch of finishing the UI part as well — that's the pull request that's really getting close to review now. We also do storage monitoring in the back end. Of course, as I said, you can just use Linux, set up a share, and create a small file server by yourself, but something that usually gets forgotten in the process is making sure that the storage is properly monitored — and then your users become your monitoring system, because they will scream once their disk runs full. openATTIC basically automates this process: each time you create a new volume, we also reconfigure the monitoring framework in the background to make sure that it's being tracked and you can see the utilization. And then, as I said, local storage is where openATTIC comes from; with the addition of Ceph, we now want to add functionality that makes it easy to manage a Ceph cluster — to create new storage objects like block devices or new Ceph pools, and also to do monitoring so you get insight into how your Ceph cluster is doing. This is the functionality that we are most actively working on at the moment. This, combined with the recent changes I've just talked about around opening up the project, was something that made SUSE curious, and we had a development partnership with SUSE for basically the entire last year, working together with SUSE developers on advancing the Ceph functionality.
And in November, SUSE agreed to acquire the team and the project from it-novum, so we've been part of SUSE since then. But this doesn't mean that we will now ditch support for the other distributions — there are no intentions to change how the project is being run and governed. So, components: as I said, we have the back end on the one hand. As you can see, we're using pretty boring technology here, bread-and-butter stuff. This is intentional, because since we need to support multiple distributions, we need to figure out what common toolset we can use; if you make choices that are not available on one of the distributions, it will be difficult to support it there. The openATTIC back end is written in Django — it's a Python application. Usually Django is used as an application server for, let's say, web shops or something like that, but it turns out that the whole way Django organizes data, and how it's structured with Django models, makes it a very suitable framework for something like a storage management system as well. Basically, Django is the abstraction layer, and underneath we are calling the regular Linux tools that an administrator would also use. For example, if you create a new volume, we are calling vgcreate or lvcreate, mkfs — all the steps that you as an administrator would have to perform step by step to reach the same goal are automated by openATTIC. For the monitoring, we are currently based on Nagios or Icinga, using PNP4Nagios for the graphs, which are stored in RRD files — there's a diagram about that later. When it comes to Ceph, the current functionality uses librados, the common API that is used to talk to a Ceph cluster to obtain information or to issue administrative commands. And we are now in the process of doing more than just talking to an existing Ceph cluster: we would like to be able to also set up, configure, and manage a cluster, and this is where Salt comes into play.
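To make the "Django on top, regular Linux tools underneath" idea concrete, here is a minimal sketch of the kind of command sequence the talk describes openATTIC automating for an LVM-backed volume. The helper function and its names are made up for illustration; the commands themselves (vgcreate, lvcreate, mkfs) are the standard tools mentioned in the talk.

```python
# Illustrative sketch only: build the list of shell commands an
# administrator would otherwise run by hand to create an LVM volume.
# openATTIC's actual abstraction layer is its Django models, not this.
def lvm_volume_commands(vg, lv, device, size, fstype="xfs"):
    """Return the manual steps as argv lists, in execution order."""
    return [
        ["vgcreate", vg, device],                # create the volume group
        ["lvcreate", "-n", lv, "-L", size, vg],  # carve out a logical volume
        [f"mkfs.{fstype}", f"/dev/{vg}/{lv}"],   # put a filesystem on it
    ]

for cmd in lvm_volume_commands("vg_data", "lv_share", "/dev/sdb", "100G"):
    print(" ".join(cmd))
```

The point of the sketch is simply that every UI action maps to a deterministic sequence of well-known CLI steps, which is what makes the automation tractable.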
Salt is a deployment automation framework, and SUSE is also working on Ceph-specific management functionality based on Salt — that's a project called DeepSea. There's a talk by Jan later in this room at 3 p.m. if you want to learn more about it. The web front end, as I said, is AngularJS and Bootstrap — in web development terms also pretty boring stuff by now, but it gets the job done, and we are working on improving the functionality and adding more basically every day. We also put a strong emphasis on testing: each commit or each piece of new functionality is supposed to be accompanied by a number of tests. We test on three different layers. We have Python unit tests, where we use the Django unit test framework. The entire application is tested through a test suite named Gatling that we developed ourselves, which calls the REST API directly and automates the testing on that level. And we also have automated tests for the full web UI based on Protractor and Jasmine, where you basically remote-control a web browser to simulate clicks on the UI and check whether the web UI gives you the expected results. This is the architecture from a single node's point of view: we have the Django application in the middle, and some data is persisted in a PostgreSQL database. If you want to set up a multi-node openATTIC system, the only thing that needs to be shared is the PostgreSQL database. So if you have a second node, you connect both nodes to the same PostgreSQL database, and then you can use one openATTIC web UI to manage your two nodes together. Since the Django application doesn't have root privileges, we have a separate process called the openATTIC systemd — which should not be confused with Lennart Poettering's systemd; the name is a coincidence.
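The multi-node setup described above — two nodes sharing one database — boils down to pointing both Django instances at the same PostgreSQL server. As a hypothetical illustration (host name, database name, and credentials are all made up, not openATTIC's actual defaults), a Django settings fragment for that would look roughly like:

```python
# Hypothetical Django settings excerpt: both openATTIC nodes would carry
# the same DATABASES entry, so the PostgreSQL host is the one shared
# component between them. All values here are illustrative.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "openattic",
        "USER": "openattic",
        "PASSWORD": "secret",
        "HOST": "db.example.com",  # shared by every openATTIC node
        "PORT": "5432",
    }
}
```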
But this is a root process that communicates with the Django application through D-Bus and performs the actual shell commands that get you to the required result, like creating a volume, creating a file system, or setting up a share. You can basically take a look at the command log of the openATTIC systemd to check all the commands we're issuing to get the job done. With regard to communicating with a Ceph cluster, as I said, this is currently mostly based on librados and librbd. Here's a quick overview of how monitoring takes place. Again, the openATTIC systemd doesn't only configure the storage itself; it also generates Nagios configuration files based on templates and then restarts Nagios, to make sure that the new storage objects are properly monitored. PNP4Nagios stores this information in round-robin databases, and we then use the back end to pull that information out and visualize it. Right now this is done with RRDtool, which creates PNG graphs. For Ceph, we are also using RRDtool, but to export JSON data, and the rendering takes place in the web UI instead of just displaying static PNGs. This is how it looks for Ceph; it's a bit more complicated here, since we are using the Django application to talk to the Ceph cluster, and we have a Nagios plug-in that sends its check queries through the Django application. But then again, it writes the data to RRD, and we use the JSON export for the visualization. So what are we working on at the moment — what's cooking? In particular, as I said, the DRBD work needs to get finished; this is something we've been working on for quite a while. And one thing we're currently missing is that we depend on the storage pools that we manage already existing: if you want to use ZFS, you have to manually create the zpool on the command line first before we can make use of it. Similar for LVM.
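The Ceph monitoring path above ends with RRD data exported as JSON and rendered in the browser. A small sketch of the kind of conversion involved, assuming the typical RRDtool export shape of a start timestamp, a step interval, and rows of samples with None for missing data (the function itself is illustrative, not openATTIC code):

```python
# Illustrative: turn RRD-style export data into (timestamp, value) pairs
# that a JavaScript charting library could plot. RRDtool emits None/NaN
# rows for gaps, so those samples are skipped rather than drawn as zero.
def rrd_rows_to_series(start, step, rows):
    """rows: list of single-value sample rows, one per step interval."""
    series = []
    for i, row in enumerate(rows):
        value = row[0]
        if value is not None:
            series.append((start + i * step, value))
    return series
```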
Once you have that storage pool configured, you can tell openATTIC to register it, and then creating the actual volumes on top of it can be done through the UI. But that's something, of course, that we would like to change, so that's work in progress. The iSCSI and Fibre Channel functionality needs to be expanded — there are quite a lot of things we haven't looked at yet. We track everything that's still open in JIRA, so we're not just tracking bugs there but all the ideas that we have, and we try to group them into bigger stories to have useful chunks of work that somebody can take a look at. When it comes to Ceph, we defined a few goals beforehand. We want to be able to both manage and monitor a Ceph cluster through the UI, and to provide a tool that you as a Ceph administrator actually want to use. Right now there are a few tools out there that give you sometimes a little bit of monitoring, sometimes a bit of management, but we are trying to come up with a solution that gives you a more rounded experience here — especially considering that a Ceph cluster can become quite large, with lots of objects, we want to visualize it in a way that doesn't overwhelm you, so you only see the information that's really relevant for you at that point in time. Because, well, ideally Ceph is supposed to manage itself and heal itself, but you still may want to know what's going on in the background. And, very importantly, you should still be able to use the command line tools to make changes to your cluster without openATTIC getting confused by it. That's one of the big challenges we had to face: for the local storage systems that we manage, we basically assume that openATTIC is in charge of the configuration.
Once you have started using openATTIC for storage management, well, you can make changes manually, but openATTIC will simply overwrite them the next time if you haven't made sure that openATTIC is aware of them. For Ceph, we are trying harder to make sure that this is possible: if you're using the Ceph command line tools to create, let's say, another Ceph pool or an RBD, openATTIC needs to become aware of that. That was a bit of a challenge, by the way, because of how Django works and how it persists data and information. I wish I had more time to talk about that, but if we have time at the end and you're interested, maybe I can share some of the ideas we have there. So what works when it comes to Ceph? We have a cluster status dashboard, where you can see the overall cluster health and some of the performance indicators, with graphs and everything. You can manage Ceph pools, and you can monitor them, including erasure-coded profiles for the pools. You are able to create RADOS block devices (RBDs) through the web UI; you can delete them again; and they are monitored. We have also started looking into the infrastructure: there's the OSD view — well, it's not management yet, but you can at least see all the OSDs that are in your cluster and what state they are in. When you're using DeepSea as the back end to configure a Ceph cluster, you also get an inventory list of all the nodes your cluster consists of and which roles they have. You can take a look at the Ceph CRUSH map, which is basically the algorithm that determines how data is distributed in your cluster, what kind of redundancy you have configured, and how the data should be distributed among the various availability levels, so to say. And we also want to make it possible to manage multiple Ceph clusters within one openATTIC instance. So let's say you have a production Ceph cluster and a staging or testing Ceph cluster — you can use one tool to manage them both.
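The requirement that openATTIC notice pools or RBDs created behind its back with the Ceph CLI is essentially a reconciliation problem: compare what the database knows against what the cluster reports, and adjust. A minimal sketch of that idea (function and names are hypothetical; the real implementation lives in openATTIC's Django layer):

```python
# Illustrative reconciliation step: given the pool names recorded in the
# openATTIC database and the pool names the Ceph cluster actually
# reports, compute what to register and what to forget.
def reconcile_pools(db_pools, cluster_pools):
    """Return (to_register, to_forget) so the DB matches the cluster."""
    db, cluster = set(db_pools), set(cluster_pools)
    to_register = sorted(cluster - db)  # created via the Ceph CLI
    to_forget = sorted(db - cluster)    # deleted via the Ceph CLI
    return to_register, to_forget
```

Running this kind of comparison before acting is what lets the UI and the command line coexist without one silently overwriting the other.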
Roadmap — well, that's just a small glimpse; there is a long, long list of things we still want to accomplish. The dashboard needs some more love, and we would like to make much more information about the Ceph cluster visible from the dashboard. We also noticed, given the nature of Ceph, that some tasks take time. You issue a command to trigger an action in the Ceph cluster, and it works, but it may take a while, and you have no way of knowing how long — and as a web application, your browser can't just stand still and wait, because you would run into a timeout. So one of the things we had to come up with is a queuing mechanism, where you can simply enqueue these longer-running jobs and then make sure you get notified once they're finished, so the web application doesn't hang and you don't run into timeouts. The whole part about deploying and remotely configuring nodes with Salt is something we are working on very closely with the DeepSea developers. As a next step, you should not only be able to see all the existing nodes; we would like to make it possible for you to simply boot up a new node that registers with Salt — you will see that a new node, basically a Salt minion, has joined — and you could then use openATTIC to assign a role to that node. Let's say this is going to be a new OSD: click, and then DeepSea does its job to configure the node accordingly. More monitoring, and iSCSI target management, is also something we are looking into. Basically, you define one node in your cluster as an iSCSI target host, on which RBD images from the Ceph cluster will be made available as iSCSI targets. openATTIC already supports that, but only on the local node that openATTIC is running on.
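The queuing mechanism mentioned above — enqueue a long-running Ceph task, return immediately, and let the UI poll for completion — can be sketched in a few lines. This is a deliberately minimal, hypothetical model of the pattern, not openATTIC's actual task-queue code:

```python
import itertools

class TaskQueue:
    """Sketch of the long-running-task idea: the HTTP request only
    enqueues the job and returns a task id; the browser polls the
    status instead of blocking until a timeout."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._tasks = {}

    def enqueue(self, description):
        task_id = next(self._ids)
        self._tasks[task_id] = {"description": description,
                                "status": "pending"}
        return task_id  # handed back to the UI right away

    def finish(self, task_id):
        # called by the worker once the Ceph operation completes
        self._tasks[task_id]["status"] = "finished"

    def status(self, task_id):
        return self._tasks[task_id]["status"]
```

The essential design choice is decoupling the HTTP request/response cycle from the Ceph operation's lifetime, so a pool rebalance or image creation can take minutes without the browser connection ever waiting on it.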
Usually, if you consider the openATTIC node a management node, it doesn't have the performance characteristics you would need for a full-fledged iSCSI target server — that should usually be a somewhat more powerful machine. To avoid having to install openATTIC on that node as well, we are now looking into using DeepSea and Salt for that instead. The RADOS Gateway is another big construction site. The thing is that a Ceph cluster consists of several components, and each has its own way of being managed — its own APIs that you need to use to talk to it. In the case of the RADOS Gateway, for example, there's the RADOS Gateway admin ops API, which you need to use to talk with the gateway for creating and managing the users, the buckets, and so on. So we need to develop the interface on our end to establish that communication path. And the existing functionality, like RBD management or pool management, still needs a lot more features that we're working on. Monitoring is also one of the things we need to expand. Right now the expectation is that openATTIC and the Nagios instance run on the same node. In a distributed cluster like Ceph, this is not going to scale, so we are looking for a more lightweight approach. The current plan is to use collectd for that: each Ceph node also runs collectd, configured so that it just forwards the monitoring data to a central collectd instance. That gives you a way to consolidate the monitoring data on one node, which will make it much easier to monitor and visualize the status of the whole cluster and its individual nodes. All right — I didn't dare challenge the demo gods at FOSDEM, because the network is usually something you can't rely on, so you'll have to live with a few screenshots. But we have a live demo that you can toy around with if you like; the links come later. This is our traditional storage management dashboard, so to say.
This is what you see when you're using openATTIC for managing traditional storage like Samba, NFS, and so on. You can create and define the volumes that are listed over here, and for each volume we also collect monitoring and performance data that you can take a look at. It's a bit hard to see here; if you go to the demo, you can toy around with this and see it in more detail. One of the things that is quite interesting and pretty unique — I haven't seen it in other applications — is what we call our API recorder. As I said, the web UI uses the REST API exclusively to talk to the openATTIC back end, and sometimes you don't want to use the UI but want to automate a certain task in a script through the openATTIC REST API. So instead of having to look up the documentation for the API, you basically enable the API recorder in the UI, click through the task you want to accomplish once, and then stop the API recorder. It will automatically create a small Python script snippet that includes all the REST API calls you performed, and you can use that as a snippet or template to embed in your application to repeat this particular task. This is the Ceph cluster dashboard. As you can see, we're using a different graphing engine here: we extract the data from the round-robin database as JSON and then use JavaScript libraries to visualize it, which makes it much easier and much more dynamic to work with the data in the UI. The dashboard is fully configurable, so you can resize and rearrange those widgets. You can have multiple dashboards, and they are stored with your user profile — so if another administrator logs in, he can set up a dashboard to his own liking and doesn't have to take over what you have configured.
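To illustrate the API recorder concept — capture the REST calls the UI makes, then emit a replayable Python snippet — here is a toy model. Everything in it is hypothetical: the class, the endpoint path, and the base URL are invented for illustration and do not reflect openATTIC's actual recorder output or API routes.

```python
class ApiRecorder:
    """Toy sketch of the recorder idea: log each REST call the UI
    performs, then render the log as a small Python script using the
    `requests` library."""

    def __init__(self):
        self.calls = []

    def record(self, method, path, payload=None):
        self.calls.append((method, path, payload))

    def to_script(self, base_url="http://oa.example.com/api"):
        lines = ["import requests", ""]
        for method, path, payload in self.calls:
            if payload is None:
                lines.append(
                    f"requests.{method.lower()}('{base_url}{path}')")
            else:
                lines.append(
                    f"requests.{method.lower()}("
                    f"'{base_url}{path}', json={payload!r})")
        return "\n".join(lines)

rec = ApiRecorder()
rec.record("POST", "/volumes", {"name": "vol1", "megs": 1024})
print(rec.to_script())
```

The appeal of the feature is exactly this: one click-through in the UI yields a script you can drop into your own automation.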
You can also mix UI elements from both the traditional side and the Ceph cluster side, and if you have multiple Ceph clusters you could create one dashboard that shows you the overall view of both clusters on one page — you can really tweak it to your liking. The Ceph pool list: as you see, we are always using the same UI elements, with the data table on top and the graphs underneath. One thing on my wish list is that these graphs, which currently belong to a certain Ceph pool, could also be pinned onto the front dashboard — so if there's a certain pool you want to monitor more closely, it should be possible to drag it onto the front dashboard and make it visible there. Ceph pool creation, some of the features we support here — boring, I'll skip all that. RBDs — this is the block device list. I think the pull request is almost done that will also show the utilization of the RBDs here. OSDs — yep, it's repetitive. As I said, screenshots are not as exciting as a live demo, but my past experience at FOSDEM is that the network usually starts working by the time you're about to head home. Oh, and that's the CRUSH map editor, as I said: you see a visualization of the topology, and you are able to drag nodes around, add new nodes, and change their properties here. And with that, I'm already at my link list. These are some of the resources you can take a look at. We have a Google group for discussion that serves as our mailing list slash forum if you want to get in touch. We are in #openattic on Freenode as well, so come over there if you have questions and suggestions. Most of the discussion really happens on Bitbucket in the form of pull requests — there's a lot of communication between the developers working on the code — and then of course on our bug tracker. So those are the key resources for getting in touch with us. And with that, I'm a bit ahead of my time.
Amazing. So if you have questions, we still have time for that — I know it's after lunch. Okay, there's a question: when is the software ready? Well, when is software ever ready? openATTIC 2.0 is out, and based on all the testing that we do, we are pretty confident that each release we publish is safe to use. The good thing about openATTIC, especially if you use it for traditional storage management, is that even if openATTIC crashes, the actual serving of data is performed by other subsystems of the operating system, like the Samba server or kernel NFS. We are not really in the path of serving the data to the user. So even if openATTIC has a problem and crashes — which doesn't really happen that often — we are not messing with your data directly, unless you really accidentally delete something. But we are of course still in the process of adding more functionality with each release. As I said, we have the train model. So what we have out right now is ready to use and can be used with confidence, but we still have a lot of gaps to fill, and of course we would like to encourage you to give it a try and help us gather guidance on where we should focus next. We think we have now reached a point where we provide a good set of useful functionality. We are aware we are not fully there yet compared to other projects, but we would like to get your feedback on what your use cases are, what you are looking for, and what we should be focusing on. There was another question here. So, the question was whether we have any plans to support Kerberos for authentication. The thing is — are you talking about using it for authenticating users to the web front end? The answer is: that should work. I haven't tested it personally, but since we are using Django, Django is capable of using external authentication mechanisms, so it's pretty pluggable.
As far as I know, you can, for example, use PAM, the Pluggable Authentication Modules that the Linux operating system supports. So if you configure Django to use PAM for your users, openATTIC will honor that, and it should work. There's a question over there: how do we deal with different Ceph versions? Currently, we don't — we say you need to use Jewel. Sorry. No further questions? Last chance. Okay. Thank you very much for your attention.
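The pluggable-authentication point boils down to Django's `AUTHENTICATION_BACKENDS` setting. As a hypothetical sketch only — the speaker says this is untested, and the PAM backend path shown here assumes the third-party django-pam package rather than anything shipped by openATTIC:

```python
# Hypothetical Django settings fragment. Backends are tried in order:
# first PAM (system users, via the third-party django-pam package),
# then Django's built-in database-backed users as a fallback.
AUTHENTICATION_BACKENDS = [
    "django_pam.auth.backends.PAMBackend",        # assumed third-party backend
    "django.contrib.auth.backends.ModelBackend",  # Django's default backend
]
```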