All right, the clock shows 4:20, so why don't we go ahead and get started. This is the infrastructure team project update, so hopefully you're in the right room, and hopefully I'm in the right room. I'm Clark Boylan, the current PTL for the OpenStack infrastructure project. If you're not familiar with what we do, we build and run the systems that help us develop OpenStack, and now more than just OpenStack. That includes CI tooling to test the software, but also the code review system, so that our developers can review the code they write. Collaboration tools are something we try to embrace and build upon, so we run list servers, Etherpads, wikis, and so on as well. There's a broad range of software that we run to aid the development of OpenStack.

One thing a lot of people may not realize is that we run everything on top of donated OpenStack cloud resources, which is wonderful. We dogfood OpenStack, and we're very grateful to all of the clouds that give us resources to run on top of. In alphabetical order, we've got an ARM CI cloud, Inap, Limestone Networks, Linaro, which provides some ARM resources now, OVH, Packethost, Platform9, Rackspace, and Vexxhost. These are public, private, and sometimes somewhere-in-between clouds that give us resources to run our services and to test the software we're testing. So thank you to all of them. Over time these clouds come and go; we add new ones, and some leave for various reasons, but we're always very grateful to have those resources available to us.

One thing I'm particularly excited about, which you may not know about yet, is that we have ARM64 test resources now. Back in February, the Linaro team gave us access to a small amount of resources on an OpenStack cloud that they're running, and they've since expanded that to two regions.
We've also added a second cloud provided through this ARM CI effort. I believe it's hosted on Packethost ARM bare metal, and there's a group running OpenStack there to expose those resources to us, which is exciting. That gives us three ARM64 cloud regions we can run tests on today. It's still a somewhat limited set of resources, but they're available, so if there's something you'd really like to see target ARM64, we can work with you to get that testing done.

Another thing that might not be very user-visible but is very important to us is that we're working to modernize our config management. Since the inception of the infrastructure team, the tooling we've used has primarily been based around Puppet. We've gone through multiple Puppet upgrades; we're currently in a 3-to-4 transition, and after 4 we'd continue on to 5. What we've found is that those upgrades are costly: Puppet hasn't made transitions across major releases simple. On top of that, many of our team members' interests have shifted over time, particularly now that Zuul is so Ansible-focused, and many of us work on and rely on Ansible day to day. Ansible is quickly becoming far more familiar to us, and it's in our interest as individuals to work with Ansible rather than try to uplift the existing Puppet infrastructure.

The plan today is to convert to Ansible for configuration management, which is nice because we're familiar with it, and then to use containers as a packaging primitive, so that we can bottle up the things we want installed and have Ansible deploy and run them. We've already hit a couple of milestones there: we're running some services without Puppet today. No services are running in containers yet, but that's planned as well.
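To make the Ansible-plus-containers idea concrete, a deployment task might look roughly like the following sketch. The host group, image name, and port mapping here are hypothetical illustrations, not the team's actual configuration; the point is that Ansible drives the deployment while the service itself is packaged as a container image.

```yaml
# Hypothetical sketch: Ansible runs a containerized service instead of
# installing packages directly on the host. All names are illustrative.
- hosts: etherpad
  become: true
  tasks:
    - name: Run the Etherpad service as a container
      docker_container:
        name: etherpad
        image: example/etherpad:latest   # the "packaging primitive"
        pull: true
        restart_policy: always
        ports:
          - "443:9001"
```

With this pattern, upgrading a service becomes pulling a new image and restarting the container, rather than coordinating a configuration management upgrade across the host.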
Something to keep in mind is that this transition will take time, so Puppet will stick around, likely longer than we'd like. That's the reality, which means we're continuing to work on upgrading to Puppet 4. That way we have an out for those services that don't immediately move over to Ansible and containers.

So, Zuul. You've probably heard a lot about it at this conference. By the way, that's an ASCII Zuul logo, which is kind of cool. Zuul is now an independent project; that actually happened at the previous summit. But we continue to work together a lot, because there's significant overlap in the teams, and we rely on Zuul for much of the work that the infrastructure team does. I think this is working really well, both for the infrastructure team and for Zuul itself. It's allowed Zuul to grow on its own with new users, but we've maintained a coupling there, and we work together. One of the key ways we work together, which I think illustrates this nicely, is that we basically act as beta testers for Zuul. We update Zuul in OpenStack infrastructure, make sure it works for a day or two, and then Jim can go ahead and tag a Zuul release with a fairly high degree of confidence that it's not going to break in catastrophic ways for other users. We happen to be a very big Zuul installation; if it works for us, it'll probably work for you, too.

Another big Zuul change that's happened in OpenStack recently, largely driven by OpenStack itself but which the infrastructure team, particularly Andreas, has helped quite a bit with, is the move of job configuration into the projects themselves. This was motivated by the Python 3 transition: OpenStack needs to be able to switch things from Python 2 to Python 3, and not needing to go through the infra team to do that is valuable. So we won't be blocking that transition. The job config is in-repo, and each project can manage it as they're ready and Python-3-able.
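As a rough illustration of what in-repo job configuration looks like, a project might carry a file like this in its own tree. The job and project names below are made up for this example; real jobs would inherit from whatever parent jobs the shared configuration provides.

```yaml
# Hypothetical .zuul.yaml living inside a project's own repository.
# Job names and the parent job are illustrative, not actual config.
- job:
    name: myproject-unittests-py3
    parent: tox
    description: Run the unit tests under Python 3.

- project:
    check:
      jobs:
        - myproject-unittests-py3
    gate:
      jobs:
        - myproject-unittests-py3
```

Because this file lives in the project's repository, swapping a Python 2 job for a Python 3 one is an ordinary code review in that project, with no infra-team change required.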
Unfortunately, due to the way secrets handling works, we've also got a few jobs that have to remain in the infrastructure configs, because things like publishing to PyPI require access to secrets, and we've historically been the keeper of those secrets.

So, top-level project hosting. You may have heard about Airship, StarlingX, Zuul, Kata Containers, and so on. One of the things we want to do is be a home for these projects that are not OpenStack. Historically you may have heard of StackForge; we've always been welcoming to projects that aren't OpenStack, but it may not always have been clear that not all of the projects we host are OpenStack, and we want to improve that situation. We've updated things like mailing list hosting and documentation hosting to support this; I believe we're hosting the StarlingX docs now. We host mailing lists for all of these projects, Airship, Kata, StarlingX, and Zuul, each under its own, I don't want to say branded, but its own domain that doesn't say OpenStack on it, to make it clear that they're not OpenStack.

The next evolution of this is what we're calling OpenDev. We've taken ownership of the opendev.org domain, which will allow us to host all of these projects and services under a neutrally branded domain. That way it's clear that OpenStack is no longer special when it comes to the infrastructure team; effectively, we will become the OpenDev infrastructure team rather than the OpenStack infrastructure team. While this makes OpenStack less special, we don't expect OpenStack to go anywhere. It's still going to be our largest user, simply because it's been around, it uses a lot of resources, and its developers know how to use the system effectively, and I expect them to continue to take advantage of that. But also keep in mind that this is going to be a slow transition.
Because we don't want to kick OpenStack aside, and we want to maintain that healthy relationship, we're going to move slowly and make sure we don't break them. So expect services to transition one or two at a time as we figure out what that looks like.

In the more immediate future, we'd like to continue pushing on the modernization of config management; I think that's really important for the long-term sustainability of this effort. If we're not going to be able to support a tool long term, we shouldn't be using it. The Gerrit upgrade continues to be a big item, which is tied into that config management modernization; Monty has plans to take advantage of the container deployments to make that easier. We'd also like to improve our IRC bot systems. We recognize that we've grown to the point where Freenode is clunky, especially with the number of channels we need to talk in, and there are tools we could use more effectively to improve that. And then, of course, OpenDev. That's going to be a slow transition, but it's an important one that we need to continuously push on and keep moving.

So that's the update I had, with plenty of time to spare. This is how to contact us. We're on Freenode in #openstack-infra; that may change with the OpenDev transition, but it hasn't changed yet, so this is where you can still find us. You can email us at the mailing list, openstack-infra@lists.openstack.org. We're here at the summit, though the summit's almost over. And we've got our documentation here; you can poke around and take a look.

Feel free to ask questions if you've got them. Anyone? Going once? All right, then. Well, thank you for your time. That's the infrastructure update.