I just started doing some Ubuntu stuff in the late 2000s, and I've been working deeply and directly with Debian for about six months now. So, definitely a newbie in all of this. But we have a purpose and a reason for doing this, and it's to support HP's latest cloud product, HP Helion OpenStack. That's the formal marketing name for it, and there are a couple of different flavors of it.

This slide shows the product delivery ecosystem, and we need to talk about the cloud product just a little bit to understand why Linux has become so important to us. The Helion product is delivered as two separate editions. One is a do-it-yourself, free-download, no-support edition: run your experiments and give it a try on your own. And then we have a for-sale version that comes with full support, installation, all that kind of thing. The intent is to deliver an entire stack, from the bottom, where we have the hardware (our servers, our networking, our storage, that kind of thing), all the way up through the solutions, the consulting, the installation, all that kind of help. So that's what this slide shows in the vertical direction: from the hardware all the way at the bottom up to the guest operating systems at the top.

Now, we have lots of cloud developers. We have lots of people playing in the OpenStack space, and that's where they like to concentrate. Their idea of low level is Python; they don't want to go below that. They want to stay up in the cloud and the VMs and the virtual networking and all that stuff. And that's good. We need people like that; they're the ones who take the product in the direction we want to go. However, everyone in this room knows for sure that somewhere there's Linux running. Somewhere there's real hardware. Somewhere there are electrons and spinning disks and things like that. Linux matters. Our group was formed to take care of the fact that Linux really does matter. So hopefully that's the last we'll talk much about OpenStack.

If you've looked through the diagram a little bit, you can see, toward the bottom just above the hardware, this thing called HLinux, which the slide is going to advance to, or not. There we go. We came up with an implementation that we call HLinux. We're in this for a long time. One of the tenets of the cloud, for a large company like HP, is that we can never turn this off. Once we've started taking on data and taking on customers and doing things, we can't change our minds in three years and say, just kidding, we didn't really want to do this after all. We're in this for the long run, and we have to make decisions that support those long-run choices.

So we've gone with our own Linux, HLinux, for two major reasons. One is the business side of the equation. Why have our own Linux? Well, if you remember back to that stack picture, there are a couple of major segments to it. We have a lot of influence on OpenStack: half a dozen project team leaders, several dozen core reviewers, and lots of influence in the OpenStack space. We've made our commitment there; we believe that's the way to go. Our current public cloud is based heavily on OpenStack, so we've got the history to know that's a good decision for us to make. And we control the HP value-add; it's not just OpenStack and Helion.
There are a lot of HP-customized things that allow cloud operators, whether they're private cloud, public cloud, or hybrid, to use our product to their advantage. All the way at the bottom, we control the hardware. We've got the servers, the networking gear, the switches, the storage, all the things it takes to actually support the cloud, because there is a physical side. But one part where we do not have equivalent control over our own destiny is the actual operating system, the real operating system that's there. We are currently using another Debian derivative, in more of a coopetitive arrangement ("coopetition" rings pretty well; "coopetitive" is hard to say) with a company that has its own cloud aspirations and its own things it wants to do. We're in this for the long run, and we're not sure we can guarantee or control the level of cooperation we'll get from them over that long run. Nothing evil, nothing bad. But we want a level of control similar to what we have in the other three areas. That's why we want our own Linux distribution: it restores that piece of control we need over the entire equation, and now we can go forward with confidence.

The other side of it is the technical drivers. Why did we choose Debian? There are plenty of choices out there for a base to build a distribution from. We chose Debian for a couple of reasons. One is a directive from on high that our cloud solution will be developed in open source. The final solution will be open source, except for the HP value-add at the very top; the foundation, the under-layers, everything talking to the hardware, is going to be open source. Our current HP public cloud is based on a Debian derivative, so we've got lots of experience in the mostly-Debian space. HP has a long, comfortable, good record of working with Debian and the Debian community. We want to leverage that, and actually we want to improve on where we've been for the last couple of years and restore some of the things that were very, very true, say, ten years ago. And Debian is open enough. It was there, it's available, there are things we can do to it if we need to for our needs, and there are ways we can give that back to the community in a very comfortable and supportive fashion. So that's where we're headed: our own Debian-based distro.

Then there's time to market. A lot of these plans were put into place quite a while ago. As I said, I've only been diving into Debian for about six months; that's when our group was formed. But there were already schedules set up to achieve these kinds of things with our own distro, and they were pretty short. There's a story about a railroad back in the 1960s in New York State that was looking for ways to speed up its schedules, get more trains on the tracks, get more passengers to their destinations. The bigwigs asked: how fast can our trains go? We need to make our trains go faster. What's the maximum speed? And by the way, you only have a month to figure it out. This is what they came up with, and this is exactly what they did: they strapped some jet engines onto the train and ran it up to speed. It still holds the world record for the fastest self-propelled train. That's engineering I like. The only thing I don't approve of is that they wore helmets during the test. What was that for? Not going to help. You're either going to succeed wildly or fail wildly; the helmets won't make a difference.
So along those lines, we've charged ahead blindly, boldly, not knowing any better, and this is what we've come up with. We call it HLinux, and there's a secret to that name; I'll let you in on it later. It's composed of three main parts.

One is debian.org: the userspace comes from Debian. We chose testing, so we drive everything from testing, and we update it frequently. We're doing two architectures: amd64, obviously, and i386, which felt like a smart thing to be doing from the get-go. We might change our minds on that, but it seemed easier to take it out later than to add it later, so we started with it now. That ends up being about 65,000 packages in our full repo. And HP has an internal process called the Open Source Review Board, the OSRB, where we have to examine the licenses and the contents of every source package to make sure we're not in violation of the GPL or an Athena license or the Affero license, things like that. We have a dedicated legal group that inspects all this stuff. When we start from Debian as the source, that job is greatly simplified: it's very clear and very open, and the disclosure is there.

We don't use the kernel that we find in Debian testing; we wanted something much later. So we go straight to kernel.org, take essentially a kernel from a month or two ago, and track that continuously through time. Obviously we have our own custom .config that we tweak as we go along, the idea being to best support OpenStack and Helion on HP hardware, but not exclusively that.

Finally, there are foreign packages. There are some Debian and non-Debian things out there that OpenStack needs to run. We carry those as a one-off support effort on our part. Our goal is to work with the upstream owners or maintainers, such as they are, and help them get those packages into Debian testing, so they're not a one-off for us anymore; they're driven by the community. Those three things we mash up and call HLinux. We roll that, and it's available: all 65,000 packages are there for the development group to use. It turns out they only need about 600 of them. We're not putting a graphical environment on this; this is mostly server-type code. So those 600 packages are the ones we're really, really concerned with.

For Helion, we're using TripleO from OpenStack (OpenStack on OpenStack) for the deployment and development process. The manufacturing is centered around something called Disk Image Builder, the name of a program, also known as DIB. It's essentially a smart wrapper around debootstrap: debootstrap does what it does, and then Disk Image Builder pulls down extra things, does some other integration, manages the configuration, does all that kind of stuff. What they end up with is something they can run down through more integration. Helion does not use the OpenStack bits in Debian; they use them directly from openstack.org, so they're much tighter to upstream, kind of like what we're doing with testing and kernel.org. They also pull things from PyPI and some other sources out on the net. There's some HP value-added code, or rather lots of HP value-added code, and configuring all of that together is how they end up with the 600 or so packages they actually use from the HLinux repo.
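To give a feel for that image-build step, here is a minimal sketch of the debootstrap-plus-DIB flow, not the actual Helion build; the mirror URL and the element list are hypothetical placeholders.

    # A minimal sketch, not the actual Helion build. The mirror URL and
    # the DIB elements chosen here are hypothetical placeholders.

    # What debootstrap does on its own: bootstrap a minimal Debian tree
    # from a single repo into a target directory.
    sudo debootstrap --arch=amd64 testing /tmp/hlinux-root \
        http://hlinux-mirror.example.com/hlinux

    # What Disk Image Builder adds on top: it wraps that bootstrap,
    # layers in extra elements, and emits a deployable image.
    export DIB_RELEASE=testing
    disk-image-create -a amd64 -o hlinux-node debian vm

We get that information back about what they're using, so we can do lots of repo manipulations and calculations and resolve some issues.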
One of the things we can do very easily is the OSRB and security review of the packages they're using: what's the license on each one, which of them have some kind of security notification active against them. And we can advise them: change this package, try something different, did you know this one has a security hole in it? We can start doing product comparisons and see what we're using and how we're using it. Where should we focus our efforts? Where should we invest in the open source community? Which packages should we be worried about? Can we try to predict the stability, or the "bugability", a word I think I heard the other day? Where can we go with this? And then finally, we feed that back into the repo that we deliver: how often do we roll it, what pieces do we deliver, what else can we do? So it's a very active loop.

We transform things a little bit along the way. We're trying to keep it simple, so we don't have a full offering of everything; none of the other flavors are there, we're just working with testing. And then we do some branding, or rebranding. We were a little concerned about some of these packages escaping into the wild, if you will: that a .deb package would end up on somebody else's system that really came from us, that we had touched and maybe done something to. So we changed the branding to make it blatantly obvious: no, this particular package came from HP. We don't know where you got it, but it came from HP, and we're not going to blame Debian for any problems we have with it. So we hope that's what will happen.

We've done a couple of reference ISOs for people to play with, to get a practical feel for what's in HLinux. They're not really part of our original design intent or product deliverables, but they're starting to gain traction. And as has been discussed in some other forums here, the installer takes a little bit of hand-holding to get things working. It's absolutely getting better; we've got a guy working on that, and he will be delivering some custom ISOs to different internal customers.

So this is a rough flow of exactly how we do it. As I said, we've got three major sources. Debian testing comes from an internal mirror that we maintain inside HP. From kernel.org we take the N-1 kernel, one from a couple of months ago. And then there are the foreign packages from different sources. If they come from Debian unstable, we consider them foreign packages, because they're outside of Debian testing; anything outside of Debian testing is a foreign package to us. There are some very one-off packages, some tarballs, things like that. We would love to drive those to zero, but that's not practical right this minute. We take all three of those and mash them up into a giant pool, a prototype pool, and do the rebranding in place on that pool. We push that over to the next chunk of the pipeline, and that's where we take the pool and run all the standard Debian tools on it to create the dists, the Release files, do the GPG signing, all those types of things. There are two big explosions here on validation and testing; I'll talk about those in just a minute. Once we believe we have a good repo that looks and smells like a Debian network repo and supplies the packages we need for the use cases we have, which is HP Helion, then we make it available, internally only, on different sites, and we encourage other sites within HP to cache or mirror it themselves, because we've got kind of a worldwide effort going on this.
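For the curious, that indexing-and-signing stage is just the stock Debian archive tooling. A minimal sketch of what it might look like, with the paths, suite name ("cattleprod"), and key name as hypothetical placeholders:

    # A minimal sketch of turning a package pool into a signed,
    # apt-consumable repo. Paths, suite name, and key are placeholders.
    cd /srv/hlinux-repo

    # Generate the package index over the pool.
    mkdir -p dists/cattleprod/main/binary-amd64
    apt-ftparchive packages pool > dists/cattleprod/main/binary-amd64/Packages
    gzip -9c dists/cattleprod/main/binary-amd64/Packages \
        > dists/cattleprod/main/binary-amd64/Packages.gz

    # Generate the Release file, with checksums over the indices.
    apt-ftparchive release dists/cattleprod > dists/cattleprod/Release

    # Sign it so apt clients can verify the repo.
    gpg --default-key "HLinux Archive" --clearsign \
        -o dists/cattleprod/InRelease dists/cattleprod/Release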
The other thing that we host here is archives (I'll talk about those in a minute): long-term archives of every one of these repo rolls. And we have the ability to do one-off repos, special repos, custom lists of packages for people who need them, and that's all hosted on the tail end here.

One of the goals was to chase a CI/CD, Agile, DevOps-style repo as best we could. We don't really have a version, a solid version. We are starting to establish a regular release cadence, but for us, we look at the top of the tree: how can we copy the latest changes out of testing, kernel.org, and these other packages, and keep them fresh and available and ready to go? We did change the nomenclature a little bit. We don't call it testing, to avoid confusion. We call the release that we have "cattle prod", and you'll see some of us running around with cattle prod hats; if you're wearing a cattle prod hat, raise your hand. We've got some more HP people in here, and I'll introduce them in a minute. The name was mainly to not be "testing", because, number one, it's an overloaded word, and number two, it carries too much of an implication that if there's a problem with this, you go to the Debian community. No: if there's a problem, you come to us, and we will learn how to work with the Debian community to resolve those problems. So we needed the different nomenclature.

We do these regular rolls. We always save the old packages, create these archives, generate new indices, and just roll our top of tree through time, keeping the roll points and making them available to the development teams. When you start looking at managers and schedules and paying customers, their excitement about continuous integration and continuous deployment starts falling off. They want to see the June 2014 release. They want a frozen point in time. So we've come up with a mechanism to bridge the gap between CI/CD, which developers love, and frozen points in time, which customers love; I'll talk about that in just a little bit as well. These frozen points also allow us to tell the different development teams: choose what you want, choose what works for you. We do this every two weeks or so, so you can find one in there that will make you happy, and we'll work with you on how to upgrade individual packages as needed and keep that flowing for you as a legacy archive.
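From a client machine's point of view, a cattle prod roll is just an ordinary apt source. A hypothetical example of what the sources.list entries might look like (the host name and archive layout are placeholders):

    # Hypothetical /etc/apt/sources.list entries on an HLinux client;
    # host name and layout are placeholders, not HP's actual URLs.

    # Top of tree: the current cattle prod roll.
    deb http://hlinux-mirror.example.com/hlinux cattleprod main

    # A frozen point in time: an archived roll, served from the archive side.
    deb http://hlinux-archive.example.com/rolls/2014-06-15 cattleprod main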
I've mentioned testing a couple of times. Two of the gentlemen here, Bill and Cameron, who are here all week (catch their act at the Marriott City Center), are working on our testing framework. The idea is that we have a framework, independent of any other product, that really supports remote execution. Because internally... I have no idea what that said; I hope it wasn't "kick me". Internally, we have a lot of hardware partners who want to do qualification testing on HLinux. But we don't want 30 pallets of equipment showing up at the Fort Collins, Colorado site all the time so we can run all of that. So we want to be able to drive these remote tests, with our help, and test hardware remotely to do qualification cycles so that we can fulfill the stack. What TARDIS does is support that remote execution, and we're in the early days of being able to do things with it. The remote execution works. The framework works. It can install and deploy on bare metal, virtual machines, Linux containers, any type of abstracted hardware that's targeted to run HLinux in a production setting. It does tests, records tests, does conditional tests, all the types of things you'd expect from a test framework, but in a very controlled fashion with remote execution. Long term, we're looking at how we integrate the things we find in HLinux back with Debian and the bug-tracking systems there. So long term we have lots of goals for this; short term, we're just trying to get it running, just like we did with the repo.

Finally, there are policies, because now that we have this up and running and people are using it, they ask: when are you going to roll it, what kernel's in it, how do I get this in, where do I go with this bug, I've got this favorite package that I've got to have in, and all the other things that come along with owning a repo. We have the same headaches, hopefully on a much smaller scale, but I'm preaching to the choir here. Sure, we would love to have consensus and have everyone arrive at a nice comfortable decision. We don't always have the time for consensus to develop. Sometimes somebody just has to make a damn decision. We make the decision, and for the most part everyone gets along with it pretty well in a very short amount of time.

I know this is starting to eat into the official release time for dinner. I've got a few more things at a little deeper level of detail, but are there any questions, clarification questions, on what I've gone over so far? Because I'm ready to keep going if we're all good here. Just wanted to make sure. Yes?

The question is: what are we using to manage all of our repos at the different stages of that pipeline? We're using apt-mirror to pull in from the original Debian testing mirror that we have on site, and from there on it's custom code, special sauce, that does a lot of the control flow, like the rebranding, opening up the packages and doing things. But mostly it's gluing together standard Debian tools: when we finally get to generating the indices and the archive, that's apt-ftparchive and dpkg-scanpackages and all those standard tools. So the true grunt work that produces the Debian-compatible pieces is the Debian tool set, but the orchestration is custom.

Any other questions right now? Yes. "As an outsider, I have the sense that it would be helpful... I'm making a proposal that a group of folks could meet with you, Rocky, to talk about what giving back to Debian could look like in terms of a workflow." We would love to have many meetings like that, yes. "Just on that: on Saturday there's the derivatives BoF. I would encourage you to come to that and meet the other derivatives and the Debian people working on derivatives stuff." I think we have someone who's planning to attend that. All right, are we good?

Excuse me. I mentioned that we maintain archives of every release roll that we do. So over here on the left (I don't know if you can read it exactly or not) are time-stamped top-level views of a repo, with the dists and a pool, and two weeks later another dists and a pool, and two weeks later another. That's about 120 gigabytes a pop, and it adds up fast. And we really do want to keep this stuff forever. Ten years from now, we do want you to be able to get the March 2014 release. We don't know why; it seems like a good idea, though. Still, if we had kept full copies of every roll from March until now, we'd be well over two terabytes.
So we came up with a way to do data deduplication. Essentially, we copy all the pools into an "ocean", a common directory. This works because when revisions change, file names change. So that original file that was laid down in March can stay there forever, and if that was the only package that changed, then that's the only file that gets added to our ocean. At the rate we do the repo rolls, we get about a thousand packages changing or being added each time we do this. So we have this giant ocean of all packages ever seen, organized the normal way. And this idea of a package management database is more of a concept than an actual database at the moment; we're in the process of building it out and fleshing it out, but its basic function is to map file names and revisions to the actual file and where it's located in the ocean.

Once I have that information, my ocean over here like I had on the other page and this mapping database, I can take something called a playlist, which is just a list of packages and their revisions. It's a very, very simple idea: a text file. It can exist in several different forms, but I can take it and generate a custom repo. We call them hot rods: Hot Repos on Demand. Because, knowing the files and revisions that you want and where they are in the ocean, I can quickly construct a pool directory that's made out of symlinks rather than the actual files. Once I have that pool directory, I can run the standard index-generation tools, and now I have a repo representing whatever the playlist was to start with.

For the archives, it's the full 65,000 packages. For our group that does storage testing, it's the 100 or so packages they need to bring up a Linux system that will execute against the storage arrays, with literally nothing to do with OpenStack, just enough drivers and smarts to run that. We can archive those 600 packages into custom repos every time the Helion group turns the crank. Doing it this way, the two terabytes for all those rolls is now down to about 220 gigabytes, and we only add a gigabyte or two every time we roll the repo. That's something that's manageable over the course of decades.

That's one of the things we can do with this playlist technology. And there are lots of benefits we can get from playlists once we start representing entire repos as our unit of granularity, the thing that we're working with. We can do playlist manipulations, playlist math, playlist comparisons, playlist aging, playlist reductions, and generate an awful lot of the benefits you see here. I'm not going to go through all of them in this presentation, but we have some of them going.
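To make the hot-rod mechanism concrete, here is a minimal sketch of the idea: playlist in, symlinked pool and fresh indices out. The playlist format, directory layout, and paths are hypothetical placeholders, not HP's actual tooling.

    #!/bin/bash
    # Hypothetical "hot repo on demand" sketch. The ocean layout, the
    # playlist format ("name version arch" per line), and all paths
    # are placeholders.
    OCEAN=/srv/ocean/pool        # every package ever seen, pool-style layout
    PLAYLIST=$1                  # e.g. storage-team.playlist
    ROD=/srv/hotrods/$(basename "$PLAYLIST" .playlist)

    mkdir -p "$ROD/pool"
    while read -r name ver arch; do
        deb="${name}_${ver}_${arch}.deb"
        # assume a pool/<first-letter>/<name>/ layout in the ocean
        ln -sf "$OCEAN/${name:0:1}/$name/$deb" "$ROD/pool/$deb"
    done < "$PLAYLIST"

    # The pool of symlinks is in place; the standard tools do the rest.
    cd "$ROD"
    mkdir -p dists/cattleprod/main/binary-amd64
    apt-ftparchive packages pool > dists/cattleprod/main/binary-amd64/Packages
    gzip -9c dists/cattleprod/main/binary-amd64/Packages \
        > dists/cattleprod/main/binary-amd64/Packages.gz

The rod costs almost nothing to create: the bytes stay in the ocean, and only the symlinks and indices are new. And since playlists are just text files, the playlist comparisons mentioned above can be equally pedestrian; for example, what changed between two rolls is one line of bash:

    # Lines unique to either roll, i.e. the packages that changed.
    comm -3 <(sort roll-2014-06-01.playlist) <(sort roll-2014-06-15.playlist)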
What this really boils down to is: we take the 65,000 packages, we're getting about a thousand updates a week coming through Debian testing, and we're doing a couple of repo rolls a month. Do the math, carry the three, whatever you want: we have a lot of data. We have some big-data opportunities if we start thinking at the repo level; that's our minimum unit of granularity. This is the way that we're looking at a repo. These are the sections (topics, areas, I can't remember the exact word) that packages are classified into within Debian: networking, databases, and so on; there are about 20 of these. For all practical purposes, we have added another virtual group to this collection, called Helion: those 600 packages that we care about for delivering our Helion product. Yes, it draws from these other sections, networking and databases obviously, though we don't pull anything from graphics, I imagine. But you get the idea: for all practical purposes we've added another group to this giant thing called the Debian repo.

Now I want to turn it on its side a little bit, look at it edge-on, and then look at more of them across time. This is a three-dimensional representation (I hope you can see the 3D in there) where the major axis we care about is time. Yes, there are other axes, and maybe there is some way to assign meaning to those; that's a little over my head for data science right now. But when we have all of this data, and we have this playlist concept and databases to support it, we can start doing calculations, tracking, predictions, who knows what. This is a big-data opportunity that we haven't fully explored yet. We think it's going to be very, very useful to us in understanding the supportability and the maintainability of Helion based on HLinux, and that's where we're planning to go with this.

We've already started doing a couple of things like that in terms of how we track stuff. For example, we took one custom repo, I think one of those storage-group sub-repos of about 100 packages used to drive storage units, and looked at the package volatility over time, because we have all this data that we've snapshotted and stored. We can tell things like this; there's useful information in there. Another way to look at it: if somebody gave us an arbitrary, random list of 50 packages, where is the closest repo we have in history that would reproduce that set of packages from a unified repo? And given the playlist concept, we can spread it out over repos, because it's just a playlist, just a custom repo; we don't really care where it comes from or where the packages are resolved. We can start playing games like this right now, which we think opens up an awful lot of opportunities.

To summarize all of that: HLinux is stuck in the middle between two very different rates of progress and innovation, two velocities of product release, if you will. OpenStack is moving at a very fast pace, a very quick clip, in how they do point releases; it's almost weekly that they do a point release of the complete thing. However, the hardware on the bottom does not operate on a weekly schedule; it operates more on a semi-annual schedule. We're caught in the middle, and we've got to balance those different velocities in the things we release in a repo. Once we have all that data, we can do forensics and analysis and predictions and regressions and all kinds of fun stuff that we really haven't even thought of yet; we're still in the data-collection and design-of-experiments phase. One obvious thing is to provide these legacy repos. We don't know how long we'll keep them going, but we are planning to be able to keep them going for decades. Yes?

"Ah, I have to step in. You know that most of your problems are already solved in Debian? Are you aware of snapshot.debian.org?" We are aware of those things. We had some trouble getting a few of them to run within some of the other constraints we had, on the schedule that we were given. But that doesn't mean we need to keep doing it the way we started, and collaboration with the community could actually mean throwing out some of the stuff we've done and taking what's already there, if we can figure out how to get it to work for our needs. "You should talk to Peter. He's always happy to help with snapshot-related tasks, and nearly all of your problems are solved in snapshot."
That's good to know. That's one of the reasons we're here: we know we proceeded a little blindly, or naively with good intent, and we're hoping to find better ways to do things and give back, maybe both at the same time. So thank you.

The big-data thing I've already talked about and explained a bit, so I don't think I need to go into it further. We really are here to generate a new level of collaboration and work with the Debian community. We need it. We need it for our long term. I can say this phrase with a straight face: we really are here to help. We're here to play. We're here for the long term. We can't turn off the cloud. Sure, we could reverse our decision on choosing Debian testing, but it's turning out to be a really, really good decision so far, and over the course of the next year it's going to get set in stone, and we're going to be chasing this for a long, long time. So we're not here to second-guess ourselves. We want to hear feedback. We want to hear criticisms. We want to hear "did you think of this?" or "why didn't you do that?" We need the experts. The things we're learning, the things we're trying, the things we want to do, we think will not only help us but will be of general interest to the Debian community, and we want to understand how we can give that back. Is it just the results? Is it our techniques? Is it our code? We don't really know where that's going to go.

We are looking for help and specialization in a couple of different areas. One is general performance characterization, capture, and tuning of a Linux system, especially one based on Debian. Repo management: obviously that's going to be our problem, our bugaboo, long after I've retired, so I want to leave it in good hands for somebody else. How do we deal with bugs? How do we track them, generate them, integrate them, help fix them? How do we foster Debian maintainers and developers within HP? Then there are the new areas: data science is an up-and-coming field, and we need some data scientists; we could use some. The collaboration opportunities are all over the map, including maybe sponsoring interns to help take care of some of the things that we need directly for HLinux, or that Debian needs in general to be more sustainable and more viable in the future. It's in our interest to make sure Debian thrives. If you're already working for somebody else and you're happy with it, great. If you're looking for some other arrangement, great. We're here all week. And we know you've got ideas that we've got to hear.

I've introduced Bill and Cameron already. Also here is our director, Steve Geary, who is in charge of all things Linux at HP. Can that be true for a day or two? Okay, it's at least true for a day or two. It'll absolutely be true at least through tomorrow night, when we're hosting a happy hour, kind of a late happy hour after the final evening session, down at the Marriott City Center, second floor; they've got the DEN (dining, entertainment, and networking). So come down, have a little more late food, have some drinks, and talk to us about anything, because we want to talk to you. We've got a lot to learn, and we think we have a lot to offer.

Are there any other questions? "So you're rolling a new release every few weeks, importing packages from testing, presumably because you want fresh packages. What do you do when Debian freezes testing?" We'll let you know as soon as that happens while we're doing this.
I can say we've been doing this since March or April with real intent, I guess you might say. So yes, there are lots of pitfalls and potholes waiting for us, and we'll cross them when we come to them. Well, now we know who to ask. Get his name.

"I have a similar question about the use of testing. Why did you use testing instead of stable plus some specific backports?" Stable was not new enough in the opinion of the OpenStack developers. They wanted us to use unstable, and I wasn't quite that brave. There was a little too much flux going on there, especially in some of the packages that looked like they were intended for OpenStack. "I'm quite surprised about that, because I thought that OpenStack upstream was using the Ubuntu LTS releases as the recommended platform." OpenStack upstream is not a multi-billion-dollar company trying to do good customer legacy support. We definitely want to support them, but that was one of the trade-offs that we made.

"I was wondering about the integration testing and integration system. Do you intend to publish some code, or at least some documentation, about how you do it?" I'm going to turn that over to the testing guys. So basically, what we have in place right now is an alpha release of our test framework, and it really allows us to independently drive almost any kind of test, whether localized or external to the framework. I think long term we'll see that come back into the open source community. We haven't had those discussions yet, but the edict within the company, I'll call it, is that we shall contribute everything back to open source. If you want to say something about that, Steve? Yeah, so long-term could be very short-term. Right. So again, we've only been around a few months, and the idea is indeed to put everything out in open source. At this point in time, by the end of the week, if you can, chat with Bill and tell him why that's a good idea. Oh, okay, yeah, right there. So chat with Bill, tell him why that's a good idea or not, and how it might be used; we're certainly open to it. Long term, we would like to pick this entire machine up out of HP and put it all out as open source. We don't have the edict, we don't quite have the permission for it, but we're trying not to do anything now that would prevent that in the long or short run, as Steve says.

"You did mention that you are doing copyright reviews of the packages. Are you getting feedback from lawyers, or just from technical people?" Yes, we have an entire... he seems motivated to go after that, so... no? So the OSRB process that Rocky referred to actually has a cross-discipline review board. There are different R&D technical people, and there is a legal community looking at the packages as well. So we get plenty of feedback, and we'll be more than happy to share that feedback with you as part of what we give back, if there's interest. Okay, there's interest. Yes, there is. And some of that, the FOSSology effort, which examines source, is all open source itself, and we can find out more information on that for you if you're interested.

"Another question: how do you handle security support? Do you have any specific support or special needs for security, regarding software vulnerabilities?" Yes. We don't break out a separate security repo. Some of the problems with... I'm trying to figure out where to start with the answer to this question. Building with TripleO and Disk Image Builder is limited to a single source repo.
There can't be multiple sources.list entries or alternate places to get packages; it all has to come from one fat repo. That's why we've gone after this mashup concept, where the content comes from different places and we present it as one repo. So are you asking how we address specific security issues, or security as a discipline? "Mainly security issues." Mainly security issues, okay. Along with the OSRB review, we are doing a security review, especially now that we know those 600 core packages that are of critical interest to HP Helion. Just like we have OSRB experts looking at the legal side of packages and licenses, we have a security expert on the team who works with other people within the company to track all of the security notifications and alerts that come from the rest of the world and compare them against the packages we're pushing through.

"But say you find out that you are vulnerable, that a given package you are using is vulnerable. Are you actively working on fixing it? And if so, do you have some kind of involvement in the community?" That's what we're trying to foster. At HP, we are the first point of contact for support, for bugs, for security violations or alerts; it starts with us. Over time, we will develop more and more expertise so we can handle more of the critical problems ourselves. However, we can't do everything; we're never going to be that big. We are going to rely heavily on the community. I think a really good example is Heartbleed. We picked up all the notifications on that and told all of our internal partners, but by the time we got that kind of announcement out, you guys had already proposed patches and fixes, and the chain of events was there, so we managed to offer a two-day turnaround inside HP for that fix. We had it posted on one of our new repos before half of our development group even knew there was a problem. So we'll fix what we can. We'll contribute where we can. And we rely on community involvement to figure out how to help you do that.

"Speaking as a member of the security team: we would like to know about this and have greater collaboration, because we know that there are several companies that use Debian, and we think they provide security support, like in this case, but we don't know; we've never heard from them directly. We've never received an email, and in some cases, like Squeeze LTS, it's not until everything is set up that we receive one email saying, yeah, we might be interested, but that's about it. So please, if you're interested, get in touch with us; we would like to collaborate." Absolutely. We have people in place, we have a plan in place, they're skilled and experienced in this, and we would love to tell you just how smart we think we really are.

All right, we've reached a collective... oh, one more, gotta have one more here. This guy's what's between you and dinner, right? "So for software that doesn't come from Debian, like the kernel and your own OpenStack branch, do you use Debian packages as the format to distribute the software?" Yes. And one more invite for tomorrow night: 8:30 at the Marriott City Center. It's about 10 blocks straight down Broadway, 8 to 12 depending on who you ask, a very short walk, and there's transit. Come by and see us, come by and let us say thank you, and spend some time. Come by and tell us what we need to be focusing on and what we're missing. That's exactly it, exactly. And bring, I don't know, markers or something, so that we walk out of there with good information.
"Just to follow up on Luca's question: are you making Debian source packages as well, or just Debian binary packages?" Yes. We're still getting good at the kernel source, getting those packages in after we've put our custom configurations in, but we're providing source packages for all of the Debian packages where source exists. For some of the foreign packages, the source does not build, or it comes from other companies. We're working on the Percona database stuff, if any of you are familiar with that: they take MySQL, muck with it, do some stuff to it, and push it on, and that's what OpenStack needs. That one's being a little bit sticky, but we're working through it.

"And do you plan to publish the repository at some point?" Yes, with the emphasis on "at some point". But those 600 packages that we're using, we can completely share. Essentially they are already shared: when you do an installation of Helion OpenStack, the packages are there, so the mental hurdle against opening those up in a pure repo has really been breached. There are just some logistical and legal things about setting it up that we've got to work through. If you've got ideas about how we can be more open, share more, and be more transparent about what we're doing, we'd love to hear them, and then we'll wiggle through whatever we have to from our side.

Great, thank you very much for your attendance. We really appreciate it.