starting now. Hello everybody and welcome to the State of the Open Source Lab, the OSU Open Source Lab. My name is Lance Albertson and I'm the Director of the Lab here at Oregon State. So what we're going to be talking about is a variety of things related to the lab. First, I'll kind of give you an overview of what the lab is about. I'll talk about the students that work here, talk about current and new services we're planning on doing, other infrastructure enhancements we've done, and goals for the remainder of this year. Feel free to ask any questions. I'll try and answer them as I can, and if not, at least get to them at the very end of the session. So first off, our overview: what is the Open Source Lab? You can kind of think of us as a free and open source software hosting company. We offer free and low-cost hosting services to open source projects, whether that's co-location, virtual machines, or other cloud services that we can provide. We also have access to a wide variety of architectures, which can be useful for some projects. We also provide software distribution and mirroring, among many other things. On the flip side of it, we also mentor undergraduate students in DevOps. So they get that hands-on experience managing these systems, interacting with the various projects, dealing with tickets that come in, getting to understand all the various aspects of running a company, so to speak, a hosting company of some sort, whether that means dealing with the technical debt versus the newer stuff we're working on and all of that. We also have a lot of past graduates that have been co-founders of various big projects and companies such as CoreOS, and a lot of them currently have key roles in very high-profile technical companies throughout the industry. Right now the staff is me, I'm the only full-timer, and I have eight undergraduate students that work for me. So some history: we started back in 2003, way back then, and it was co-founded by Scott Kveton and Jason McKerr.
They were both working here at OSU in Information Services, and at the time they had some tie-ins with some open source projects. Back then, it was really difficult for a lot of open source projects to find stable hosting for a lot of their services, so they reached out to some of these projects and said, hey, we have some rack space at OSU and we're willing to host some of it, and they kind of started it from there. And then beyond that, it just kind of grew. So we've offered co-location hosting for a lot of open source projects, mostly larger ones, so Gentoo has their own rack, Debian has several servers. Freenode was one of our early projects; we don't host them anymore, unfortunately, but that's mostly because of all the denial of service attacks that were happening. From there, the growth just spread through word of mouth for a while. We hosted Kernel.org, we currently still host the Apache Software Foundation, Drupal, and we still host parts of the Linux Foundation, but most of their stuff has been moved off to other facilities and clouds. We started out with some seed funding. OSU saved a lot of money switching over to open source in the early 2000s, and the CIO at the time, Curt Pederson, was really a promoter of a lot of this, and he was really crucial in saying, hey, we want to kind of make this open source hosting a bigger thing. And so that helped out with some initial funding. And then Google and RealNetworks, if you remember them, were initial big sponsors of the Open Source Lab. We've gone through a variety of organizational changes. We were under the Information Services division, which is a non-academic unit at OSU, for many, many years. That's where we started out. It was okay, but it was kind of a weird fit for us within the university. And finally in 2013, we moved into the College of Engineering under the School of Electrical Engineering and Computer Science. And that's where we've been ever since.
And so we have a little bit closer tie to the academic side of it. We've kind of collaborated with another unit on campus that was called the Business Solutions Group. They did a lot of software development, primarily in .NET at the time, and they still kind of do that. So we've created a new unit called the Center for Applied Systems and Software, and the OSL resides under that. We're both unique little units under it, but we both are doing really great and have a lot of students involved in a variety of things. That other unit is doing a lot of various projects that aren't related to .NET anymore, and they're actually doing, I think, some software projects with the Linux Foundation. So they're quite active. We always get asked about how we get our money. For one, we don't get direct funding from OSU or the state of Oregon. We're soft-funded. So we primarily get our funding through corporate donations, from companies such as IBM and Google. More recently, we've added Ampere Computing to our donors. We also have some hosting contracts. So for many, many years, the stuff we did with IBM was done through just donations. Last year, we switched it over to a more formal contract because of the stuff we were doing. And so that's now an actual contract. We also have some contracted hosting with the Linux Foundation, Drupal, and the Open Source Robotics Foundation. We've had some other projects over the years that we've done that with. We mostly only do that with larger projects that have the funding to support it. We don't require any of the projects we normally host to pay for anything unless they're able to. Of course, these contracts help seed and fund the hosting for the other projects, and they help pay my salary and the students' salaries, of course. Beyond that, we also get in-kind donations. TDS, which is a telecommunications company based out of the Midwest, provides bandwidth for our FTP mirroring. So we have a mirror here in Corvallis.
And we also have one hosted in Chicago and one in New York City. And we only pay for co-location; we don't pay for the bandwidth. So we have 10 gigabit connectivity on both of those servers. Actually, I think it's technically 20 gigabit because they're bonded. And so that's why our mirrors are so fast. And so thanks to TDS for that. And in years past, we've gotten a lot of in-kind hardware donations through Intel. A couple years ago, we got some Arista switches, a whole bunch of them, from Hudson River Trading, which was really great. And of course, we get support from individual contributors as well. So thank you for all of your support. It's been great. So what's our role in the FOSS ecosystem? First and foremost, we want to provide a neutral hosting facility. We want to foster those relationships between open source and businesses, especially. So a lot of times we have businesses such as IBM or Ampere that come to us saying, hey, we'd like to have projects have access to our hardware. Can you help facilitate that? And so we kind of act as the go-between and make sure that we keep everything neutral between all of it. We also provide a stable physical home for a lot of the core open source projects by offering free hosting. And really what we strive to do is be as flexible as we can, within reason, to the needs of each project. Each project has their own specific needs for how they want to manage their systems, how they want to host them, how they want to deal with security. And we try to tailor everything we can to fit each project as best we can. Something else that we've been able to do in the last several years is provide access to architectures other than x86. So we've had a long relationship with IBM and OpenPOWER, and more recently with ARM via Ampere Computing. We don't have any RISC-V machines, but we are supporting some projects via some hardware doing emulation to support that architecture.
And we have a couple of MIPS machines as well to help support some projects. So we're trying to open that up as much as we can. We also offer a lot of compute and storage resources, whether that's our software mirroring, or, for continuous integration/delivery, we have some bare metal machines that we can use for that, which I'll talk a little bit more about later. The other thing we offer is systems engineering expertise. So a lot of projects don't really want to deal with managing a system and software updates and all the things that need to happen to kind of keep things going. So we can help do that if a project wants that. We don't require them to do that, but we're here to do that. And so we're really proud of being able to do that. One project we've helped quite a bit is the phpBB project. We've helped them out throughout the years. And the other thing we do is we help train the next generation of open source leaders. We have over 100 students and staff that have gone through the program that are all over the place, and they really make a big impact in the ecosystem as well. So let's kind of go through some of the new projects that we've brought on board between 2019 and 2020 so far. So we've brought on a variety of general hosting projects. We've brought on the personal taco project. We're doing some mirroring hosting for AsteroidOS and snowdrift.coop. We also helped the media subsystem of the kernel. Biome is a community rebuild of the Habitat project, part of Chef. We recently started hosting the OpenZFS wiki, and the list kind of goes on and on and on. We just started offering AArch64, or ARM64, hosting, first off with MinIO and the Cinc project. Cinc is actually a community rebuild of Chef as well. I'm actually a part of that as well. But we just started doing that, and I'll talk a little bit more about that later.
On the OpenPOWER side we have a slew of different projects we've onboarded over the past year and a half or so. You can kind of see the names that we have there, but it's really amazing seeing all the projects that we've helped bring on board that support the PowerPC 64 little endian architecture. Now here's kind of half of the list of projects that we host in some capacity. Some of these might just be a single website. Some of them might be just a mirror on our FTP mirror. Some of them might be a VM. It could be a whole rack of gear. It's just a lot of various things that we provide. But there's a lot of various projects that we try to bring on board, and here's a continuation of that list. All in all I think it's over 150 or 160 projects, but that doesn't count all the sub-projects that some of these projects have. For example, the Apache Software Foundation has many, many projects under its umbrella that we host, and the Eclipse project is another one of those. So it's quite a wide array of things that we provide hosting for. Next I'll talk about the students at the Open Source Lab. First off, let's talk about some alumni. As I already mentioned, some of our more prominent alumni are Alex Polvi and Brandon Philips. They co-founded CoreOS, which became a part of Red Hat, which is now part of IBM. But they've been active in the community for a long time. And then we have several folks that have worked at the Linux Foundation through the years. We also have somebody at Microsoft, Sarah Cooley, I think she's still there. She helped a lot with the Hyper-V hypervisor and some other various things in the background, and has been a part of the open source ecosystem there. And we have a lot of other students at AWS, Tesla, Mozilla, Red Hat, you name it, they're all over the place. They're really getting out there. So what roles do students have at the lab?
So they interact with the open source projects on pretty much a daily basis, whether that means deploying a website, troubleshooting issues, giving them new access, or maybe bringing a new project on board, getting them access, kind of working through all of that. They deal with that hands-on. And obviously I help mentor them in that capacity. We primarily use Chef for our configuration management. So they get involved quite heavily in our Chef cookbook creation and maintenance. We use a lot of wrapper cookbooks for services and how we manage all of that. And so they're doing a lot of fixing and updating of current cookbooks, adding new things, and making sure that the tests pass and all of that. So that really gives them good experience. We also give them a lot of hands-on hardware experience, which is becoming more and more difficult to get in these times with everything moving to the cloud. So it's hard for places like AWS and other cloud providers to find students that have actual hands-on experience with hardware. And so we try to help provide that in some way. So we have a variety of different systems, and I try and get my students involved with racking and cabling and making sure everything's working, and learning how the out-of-band management works and all that fun stuff. And the other part is that they're part of the ticket queue rotation. We use RT for our ticket queue. And so every week we have a student that's in charge of the tickets that come in. And that kind of helps ensure every student has knowledge of every part of the ecosystem that we provide. We've had problems in the past when we assigned a specific student to a specific thing: they would, you know, be busy with school or finals or whatever, and I couldn't find somebody to work on something specific. The rotation just ensures things keep moving. So it's worked really well to be able to do that.
And they get to interact with a variety of different open source projects. So the hiring process, I think we've finally refined it over the last several years. Essentially what happens is OSU has a jobs board, basically a job website, that we put our position on. And then we get applicants through that. And we kind of scan through the applicants, and then we send them kind of an open book quiz, where we ask them some very basic questions about Linux. We send them a couple of simple bash and awk exercises. A lot of times this is their first experience even knowing that awk exists, for example. And then they're like, oh wow, you can do a lot with it. And then we also have a simple Chef exercise just to see if they're able to get something going on their workstation. And a lot of this is just to ensure that the applicants are passionate enough to complete the quiz, and to kind of set the bar at a level where we know that they're interested and they want to be able to do it. But we don't want to set the bar so high that they're not able to attain it. It's not really that difficult. Once they pass that, and I feel like they do pretty well in the open book quiz, we do an in-person interview. Now actually, this last round, I had to do everything over video conference, which was fun. But it worked out really well. Our in-person interview is set up similar to how an interview would probably work for a job applicant now, but the questions aren't nearly as difficult, though some of them are. So we try to test the gamut of what they're able to understand. We tell them right off the bat, we don't expect you to know everything. And it's more about: can they learn, can they adapt on the fly? And if we give them a little bit of the answer and see, okay, now they know how that works, can they figure out the rest? If they do that really well, that's really good.
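To give a flavor of those exercises, here's a purely hypothetical example of the kind of simple awk warm-up I'm describing (this is not one of our actual quiz questions): totaling a column out of a bit of text.

```shell
# Hypothetical warm-up, not an actual OSL quiz question: given
# "file size" pairs, use awk to total the second column.
printf 'debian.iso 650\nfedora.iso 1950\nubuntu.iso 700\n' \
  | awk '{ total += $2 } END { print total }'
# prints 3300
```

It's a one-liner, but it's usually enough to show an applicant why awk is worth knowing.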
We just want to be able to see if they can problem-solve on the fly, and whether their personalities kind of fit with what we're looking for and how we make things work. So that really does a decent job of that. Now the onboarding process is also set up, and we've kind of refined it, and it's continually getting updated. But basically we have a walk-through guide. One of our students a couple years ago actually turned it into a little gamified thing, kind of a checklist that you have to finish up. And so first is just getting all of their accounts set up and making sure they're able to access all of the things. And then we make sure that they're able to access our internal documentation, and show them how they can contribute to that. So the first thing they do is add themselves to some contact lists on the wiki, so that we know they know how to do that workflow. And then we have them read a variety of free and open source guides on the basics of Linux, so they get a better understanding of that. And then after that we have them go through a test onboarding process of learning Chef, kind of getting them an understanding of that. And since we use Chef, and Chef uses a lot of Ruby, there's a lot of nuances with that. There's a testing framework that we use with it. We actually use our OpenStack cluster to spin up instances to test and do all that. So we make sure that they're able to connect to all of that. So we have this test cookbook that has some known issues with it, and we have them walk through some scenarios that kind of go through that. We set it up with a pull request on GitHub, and we provide feedback and have them work through it. And that's kind of an easy way for them to learn how to do a lot of the stuff they'll typically do on other cookbooks that we have.
And the other great thing is the senior students that have been around for a while also provide feedback and help mentor the new students and make sure that they understand what's going on. Because a lot of times they have a better viewpoint than I do; when you've been doing this for so long, you forget, oh yeah, here's the thing that you needed to learn. And then after they go through all of that, we start assigning them some simple tasks. So within two to three months, we put them into the ticket rotation. The three folks that I just hired on recently, we're actually kind of onboarding them a little bit sooner than that. And they've actually done a great job of getting things going, even though they're remote. I even have one student that's working remotely from Ohio, which has been interesting. But yeah, they get onboarded and start with things that are really simple. Right now we're trying to update our cookbooks to support Chef version 15, and kind of also doing some code cleanup and things along those lines. So I have a task for the students to go through all of our wrapper cookbooks and fix all of that. So that's been really great for them. Let's talk about some of the current and new services we've had going on. So for our managed platform, what we actually provide and manage, we're a CentOS shop, primarily. We're still mostly on version 7. We don't have anything on 8 other than two machines. We're still working towards that. We have a couple of CentOS 6 servers that we're still working on migrating away from. We still have a few months left to get that done. For our workstations, we use Debian 10. We have them set up in our office, and our students will actually log in remotely through the VPN to connect and do all their development through that. So they have a unified workstation experience. And all of these are managed via Chef.
So it makes it really easy to test and make sure everything is identical. I've already talked about Chef. We create wrapper cookbooks, and we use a lot of community cookbooks. I've recently become more involved in a community project called Sous Chefs, which is basically a group of people that maintain a bunch of widely used open source cookbooks, such as the ones that manage Apache and various other things. And so I'm actually getting my students involved in fixing those things upstream. So they're getting even more experience managing things from that point of view. The nice thing about Chef is we're able to do full unit and integration testing. We can use ChefSpec, which is built on top of RSpec, to test and see: are the resources we have set up working the way we expect, even if we have some conditionals in there? And then after that, we tie things together with Test Kitchen and InSpec. So Test Kitchen will spin up a VM from scratch and converge the full Chef run. And then we can run InSpec to test and verify that, hey, this service is running, the port is listening, this website actually has this content, can I run these commands and do they show this kind of thing. Basically, is the system working as we expect. And it's really great, especially if you're doing upgrades between major versions of stuff. So that's been really, really good. We have a Jenkins pipeline to automate all of the testing, deployment, and promotion of cookbooks. It has its quirks. Hopefully we can fix some of those things in the long term. Some of our wrapper cookbooks are open source, some of them aren't. Basically, the ones that aren't have maybe more site-specific stuff that we don't want to have visible. But our long-term goal is to kind of move all of that stuff out into one place so that we can have everything opened up.
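As a rough sketch of what those InSpec checks look like, here's a hypothetical control. The service name, port, and page content are made up for illustration; this is not one of our actual profiles.

```ruby
# Hypothetical InSpec controls of the kind Test Kitchen runs after a
# converge. Everything named here (httpd, port 80, the match string)
# is illustrative, not from an actual OSL profile.
control 'web-service' do
  describe service('httpd') do
    it { should be_enabled }
    it { should be_running }
  end

  describe port(80) do
    it { should be_listening }
  end

  describe http('http://localhost/') do
    its('status') { should cmp 200 }
    its('body')   { should match(/Welcome/) }
  end
end
```

This is a profile fragment rather than a standalone script; it only runs under the InSpec runner, typically via `kitchen verify` against the VM Test Kitchen just converged.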
We have a lot of legacy systems, as many IT organizations do. Like I said, we still have CentOS 6 systems. And we actually still have some Gentoo Linux systems from our early days, when we first started out using Gentoo Linux. We're only down to a few hosts with that, thankfully. I'm hoping to get those done. And we still have some hosts managed with CFEngine, but those are quickly dying down. I'll talk more about that in a little bit. So, our hardware. We don't really have a hardware budget unless we have enough in-kind donations to kind of cover that. So we depend really heavily on in-kind hardware donations. So if you are out there and you have some hardware that's no more than three years old, ideally, we'd be interested. Let me know. Over the years, we've gotten some donations from Intel via the MeeGo project when it was hosted with us. A lot of that hardware is actually running a lot of our infrastructure, but it's aging; those systems are getting to be in the seven to eight-year-old range at this point. But they're still working. More recently, we got some hardware from EMC, right before they merged with Dell. That's actually primarily hosting our OpenStack cluster right now. But again, we want to be able to grow that as we can. In 2016, we also got three Open Compute Project racks from Facebook. They're their pre-production racks, a previous iteration they had, and so we have 90 nodes. Unfortunately, we couldn't fit all of those; I'll talk about this more later. They're a pain to handle, but they're being used heavily by a lot of projects. And then more recently, we got pallets of Arista 10 gig switches, which has been really nice. I haven't been able to use all of them yet, but it's been great to be able to upgrade some of our backbone to 10 gigabit from one gigabit. So as I mentioned, we have a wish list of things.
We really want to get some 1U or 2U compute nodes or storage nodes. We're also looking for hard drives; we purchase them as we can. And then I really need to replace some of our core networking infrastructure and kind of upgrade to 40 gig for our end-of-row switches, and also upgrade some of our one gig top-of-rack switches as well. So here's a list of some of the core infrastructure managed services we provide. One major thing that we still provide is mailing lists. We have over 200 lists hosted. It's Mailman version 2.x right now. I haven't looked at Mailman 3.x quite yet, but that's on the radar. It's a shared instance, but if there's a larger project that wants to host their own, we can do that. We also provide simple email forwarding that includes spam and virus filtering. We can also do email storage if you want. We only have, I think, one or two projects that are doing that, but we're able to do it. And then we provide DNS. We also provide some simple web application hosting. And then we can also provide some engineering consulting for projects if they're running into issues and so forth. So that's really what we like to do. So we kind of have things split up between managed versus unmanaged hosting. On the managed side, that means the OSL is managing the operating system and all of its updates. We're configuring and managing all of the services. We maintain the infrastructure design of how that's set up. We have monitoring and remediation involved with that, and everything is managed with Chef. Typically with some of those projects, we create a special cookbook just for that project. That way the project can get insight into how that system is managed, and if they want to contribute to it, they can as well. That's worked out really well.
We do that with projects such as the Linux Foundation and phpBB, and some other projects as well. The other side is unmanaged. So we basically spin up a host, you give us a simple account with sudo so we can deal with issues as they crop up, and you do everything else from there. And that gives you a lot of freedom. But also, you have to be careful how you use it and how you manage it. We have tried doing a mix of this, where the managed layer is much more minimal. And that's helped with some projects. But we have to make sure that they understand that part of it: if you keep messing with it, the configuration management system will just override it. So we've been trying to find a nice balance between those two. For software mirroring, as I mentioned, we have a three-server cluster. They're split between round-robin DNS. We don't have anything fancy like geolocation DNS, although it probably would be nice to be able to do that. Our bandwidth is generally about 1.7 gigabits pushed across all three nodes at the same time. We currently have 15 terabytes of capacity, and we're currently using about 12 terabytes. Actually, I think it's closer to 13 now. It keeps growing and growing and growing. Well over 100 projects and repositories are hosted there. This is actually running on POWER8 systems. We got these donated to us through IBM several years ago. They're quite beefy. They're probably quite overpowered for what they do, but it works out really great. We're able to do a lot of file system caching with that 256 gigs of RAM. And we use a tiered storage system that has some SSDs in it, which is really nice. And the 10 gigabit uplink is really nice. As I mentioned, the two systems in Chicago and New York actually have two interfaces connected up in a bonded interface. So we have well more than enough bandwidth to make that stuff happen.
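For anyone unfamiliar, round-robin DNS just means publishing several A records for one name and letting resolvers rotate among them. A sketch of what that looks like in a zone file, with made-up hostnames and documentation-range addresses:

```
; Hypothetical zone fragment, illustrative only: three A records for a
; single mirror name. Resolvers cycle through these, spreading clients
; across the three nodes.
ftp.example.org.   300  IN  A  192.0.2.10     ; Corvallis node
ftp.example.org.   300  IN  A  198.51.100.20  ; Chicago node
ftp.example.org.   300  IN  A  203.0.113.30   ; New York node
```

The simplicity is the appeal: no extra infrastructure, at the cost of not being able to steer a client to its nearest node the way geolocation DNS would.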
On the co-location hosting side, we have about 300 or more servers that we host in some capacity. Some projects have their own racks or sets of racks. So Gentoo has their own rack, the Linux Foundation has several racks, Drupal has a couple, the Apache Software Foundation has a couple. And then otherwise, we kind of spread it out throughout the data center. The data center is actually shared with the university, but I'd say we use about one third of that room. It's about 2,500 square feet, the primary data center. Projects own their own hardware and they ship it to us. We don't buy the hardware for them. If we have loaner hardware or spare hardware that we got from some in-kind donations, we'll certainly loan it to projects at no cost. But that depends upon whether we're able to do that. We do have some requirements if you do send us hardware. For one thing, we need to have a reasonable need for a physical server. And this is primarily because once we bring in yet another physical server, that means there's a possibility that we're going to have to go in there and fix something. And hardware issues always happen. And depending on how things are set up, we just want to make sure everything is stable and we don't have to do that very often. We've had problems in the past with that. We want to make sure it's rack-mountable and includes rails. We've had problems where servers were sent with no rails. We want to make sure there's some kind of out-of-band management via IPMI or serial, which is really critical if you're trying to do a remote install, or there's an issue with the system and you want to see what's going on. We also want to make sure that it's built by a vendor that actually tests the hardware. We've had projects before where they've got their Supermicro box and they just shoved everything in there, and of course something doesn't work.
And so then we spend more time troubleshooting a weird little hardware issue than actually getting it to work. So that's kind of the thing we want to avoid. And of course, exceptions will be made for some special architectures. A lot of the other architectures are kind of weird. Like the MIPS machine we have is actually, I think, a little switch box, and we were able to rack it and kind of make it work. We'll make things work within reason as much as we can. We have enough physical capacity and power right now, but it's just something I have to keep an eye on; we can't bring on a huge new project without thinking about it quite a bit. We try to move folks over to our virtualized platform as much as possible. For storage, we're primarily using Ceph right now. We actually have two separate clusters. We have one that's on the OpenPOWER OpenStack; that was a set of the POWER8 systems that were donated to us from IBM, so we have that dedicated for that project. But then we also have an x86-based cluster with eight nodes that's actually powering our OpenStack for both x86 and the new ARM one that we just created. These were both deployed in 2018. It's been great. Both of these are actually running the Mimic release. We haven't upgraded to Nautilus yet; I'm getting to that at some point. I'm slowly upgrading the x86 cluster from two to four terabyte drives. I'm slowly getting there. COVID-19 has put a damper on that a little bit. I have been going in there and replacing one every day. I have 52 drives to do, but I'm getting closer. And we have some NVMe drives that were donated from Intel that we're using, which are great. And we're using 10 gigabit networking for all of that, which seems to be good enough for what we use. On the OpenPOWER side, we have quite a bit more capacity on the storage and networking side. We got a 40 gig switch donated from Mellanox several years ago that we're using for that backbone.
But we don't really need that much; it's working out really well. As I mentioned, we're using Ceph, and the current use is primarily just block storage for OpenStack. We also have CephFS running. It replaced our old GlusterFS system that we were using that wasn't quite right; plus, we wanted to only manage one storage system. So we have that set up as our POSIX file system. We only use it in a few places, for archiving files and things like that. We don't have object storage enabled, but that's something I certainly want to look into. We really haven't had a lot of projects interested in doing that, but it's certainly something we could do. And we also want to expand it and upgrade it; it would be nice to get newer hardware for it. For our older Ganeti platform for virtualization, we're considering maybe moving that over to Ceph as well. We might not, it just depends. And then if we get additional hosting capacity at other locations, it would be great to get geo-replication set up as well. I've kind of been mentioning Ganeti and OpenStack at the same time. So we have two virtual cloud platforms that we use. The first one that we started using was Ganeti. It was a project that stemmed out of Google, and currently it's managed primarily by the community. At the time we adopted it, it was before OpenStack existed. It actually existed before libvirt was really widely used. It used Xen originally and then KVM, which is where we started using it. It's really stable and easy to maintain. But the drawback is it's not really cloud-centric. It doesn't really have a public API. It has an API, but it's really designed for the admins to use. We've been using it since 2009. And it's fine that projects don't need to have access to it, but a lot of projects want to be able to see the console and spin things up and that sort of thing. And so we haven't really been able to do that with it.
More recently, we've been using OpenStack quite a bit more. The problem with OpenStack, as everybody knows, is that it's really hard to maintain and do anything with sometimes. Thankfully, it's gotten really stable over the years. It has a really nice public API, and we've been using it, as I mentioned, since 2013. It's been great for doing self-service, being able to expand what we want to do, and giving a lot more flexibility to projects. A little more information on Ganeti. We're using DRBD primarily for the storage. All those systems have local storage to be able to do the things they need to do, and we just replicate it over the network, essentially like a RAID 1. We primarily use the command line interface. We did, at one point, have a web interface, but we deprecated it because it was an old Python 2 project and we didn't have any student devs working on it anymore. Our current production cluster runs around 100 VMs on eight nodes. phpBB is running on it, BusyBox, Buildroot, ROS is on there. The Jenkins project, I think their Jira instance is running on there. I think we have a VM for QEMU, which is kind of ironic. And a variety of other projects. We also have a couple of other clusters specific to projects. The Python Software Foundation has a Ganeti cluster that we manage. And then we have our internal cluster, so that anything we do there doesn't impact the production services on the main cluster that projects use. So we kind of have that separated. We really use this for the long-running VMs and more of the traditional services, but we're using it less and less. We're trying to do more things in OpenStack. As I mentioned, we've been using OpenStack since 2013. We actually created our first cluster on ppc64le (little-endian) first, because we needed it, and then we more recently opened up an x86 cluster for projects in 2018.
The reason why we didn't open it up sooner is because we just didn't feel comfortable that we had it at a production-ready level, dealing with upgrades. We wanted to make sure we could upgrade each release, handle the technical issues that come up, and have it stable for us. And we finally felt we were at that point. Everything right now is powered through KVM and Ceph. We have around 75 VMs on eight different compute nodes. Those are the nodes we got from EMC, and they're working out really great. Some projects that are running on it: the Ohio Linux Fest is on there, GNU Radio, Foreman. GLAPC has some instances on there for doing CI. Let's see here, some other notable ones. The Zephyr Project is a new one on there. And Vingo was recently added as well. I think that was one of those projects that was originally on Rackspace, and then they started getting billed and they were like, hey, we need a place to go. So that's always great to be able to do that. On the OpenPOWER side of things, we've been working with IBM for over 10 years. We provide open source projects free access to the PowerPC 64-bit and little-endian architectures. We have a cluster powered by both POWER8 and POWER9. We have over 100 projects on that cluster. If you're getting any binaries that are PowerPC little-endian, they're probably coming from our cluster, which is awesome. This is being funded and supported by IBM. In addition to the OpenStack cluster, we have some bare metal machines for some of the bigger projects. The GCC Compile Farm project, Debian, FreeBSD, and Alpine Linux all have their own systems. And we help manage the firmware on those systems and any hardware issues that come up. So that's a burden we try to take on so the projects don't need to deal with it. In the last few years, we've also been providing some GPU hardware access. We've been collaborating with another unit on campus called the Center for Genome Research and Biocomputing.
They do a lot of HPC-related stuff. They manage the hardware, but they allow us to use some of it. We have a form for projects that want access to that. They use a more traditional HPC scheduling engine for it, but we also have a Jenkins-managed portal that we provide. Basically, we have a system with NVIDIA Docker running on it, and you can run jobs on it through Jenkins in Docker, and it'll actually connect to the GPU and run tests with that. I believe we're doing that with the TensorFlow project, for example. Other things: we also provide IBM Z access. We don't actually host it here; it's hosted at Marist College in New York. But we have two LPARs provisioned for us, and these are running as Jenkins worker nodes there. You can spin them up and do tests; you can integrate with our Jenkins CI portal that we have set up and run tests and do things. We don't have it quite as open as our OpenStack cloud per se, but it still provides a lot of access. So check out our website if you're interested in s390x; it's there, and that's great. Beyond that, we also provide some hosting for AIX. We don't manage the AIX part, we let the community do that, but we're helping the AIX community be able to build open source software on it, so that if you are a user of AIX, you can still get the benefit of the open source software you require on that platform. We actually just got a new POWER9 machine shipped, I think, that I get to rack here in a couple of weeks or so, and get that going for them. So that'll be great. One of the newest things we brought on board, and we really haven't publicly announced it until now, is our AArch64, or ARM64, development hosting. This is very much similar to what we've done on Power, but now with ARM64.
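On that Jenkins-plus-NVIDIA-Docker setup: a GPU CI job along those lines can be sketched as a declarative pipeline fragment like this. The agent label, image, and command here are all hypothetical placeholders I'm using for illustration, not the OSL's actual configuration:

```groovy
// Illustrative Jenkinsfile: run a GPU test inside an NVIDIA container
// on a GPU-equipped agent. All names here are hypothetical.
pipeline {
    agent { label 'gpu' }   // hypothetical label for the NVIDIA Docker host
    stages {
        stage('gpu-test') {
            steps {
                // --gpus all exposes the host GPUs to the container
                // (Docker 19.03+; older setups used --runtime=nvidia)
                sh 'docker run --rm --gpus all tensorflow/tensorflow:latest-gpu python -c "import tensorflow as tf; print(tf.test.is_built_with_cuda())"'
            }
        }
    }
}
```

The nice part of this pattern is that the project only writes a pipeline; the GPU host and driver plumbing stay the lab's problem.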
We've done this in collaboration with Ampere Computing, which has created their own server-grade ARM64 hardware, and they were able to get a target for that. So this is yet another OpenStack cluster. It's based on the same things we did with the Power architecture, so we provide free access to it. It does support ARM32 for those projects that need that access. I don't quite have images up for that yet because of some nuances with how that's set up. Unfortunately, we can only support ARMv7 and above; we can't do, I think, v6 and below, which would impact you if you were trying to support the older Pis. I think the original Pi, the Pi 2, and I think the Zero are the ones we can't do; we could only emulate those. But it's there. If you check out Ampere's website, they have the eMAG server. We have a total of 12 systems. Right now we have six allocated for OpenStack. The intent is that the other systems stay idle for other expansion, whether to expand the OpenStack cluster or for projects that need bare metal access. We launched this in May 2020, and we have a couple of projects on it. If you're interested, please check out that URL and send us your information, and we'll try and get you going. It's really great, and I'm really amazed at how little power the systems are using in the rack. I figured out that the two Arista switches and the top-of-rack switch were using 40% of the power of the rack, and the remaining 60% was just the servers. It's amazing; I think these servers idle at like 170 watts. They're actually Lenovo-branded 2U servers, but they have an ARM-based board in them. It's really awesome. So let's move on to infrastructure enhancements. As I mentioned, we got this cluster deployed, and it's been great. We've also been doing a lot of Chef upgrades and cleanup. We updated to Chef Client 14. We're only a year behind, but we're catching up. We did some more work there.
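Back on that rack power observation for a second, the numbers work out like this (the 40/60 split and the 170 W idle figure are from the talk; treating all 12 servers as idle at once is my simplifying assumption):

```python
# Back-of-the-envelope power math for the ARM rack.
# From the talk: switches draw ~40% of rack power, servers ~60%,
# and each server idles around 170 W. Assumes all 12 servers idle.
SERVERS = 12
IDLE_W = 170

server_w = SERVERS * IDLE_W    # 2040 W total for the servers (the 60%)
rack_w = server_w / 0.60       # implied whole-rack draw: 3400 W
switch_w = rack_w * 0.40       # implied switch draw: 1360 W
print(server_w, round(rack_w), round(switch_w))
```

Which is why the switches end up looking so expensive relative to servers that idle this low.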
We continued the CFEngine-to-Chef migration. We upgraded our Ceph version from Luminous to Mimic. We're three-quarters of the way through upgrading our hard drives to four terabytes, which is great. We also upgraded OpenStack from Queens to Rocky. And we did a lot of removal and replacement of legacy systems. There were a lot of projects that had some older systems that we either moved onto a shared system or onto a newer system, and we just did a bunch of cleanup and made things a lot easier for us. We have over 140 projects that are managed with Chef. As I mentioned, we've got the Chef 14 upgrade done; we're currently working on 15. We haven't switched any nodes to it yet, but that's one of the projects my students are working on; they're doing all the updates for that. We're actually switching to Cinc, which is the community rebuild of Chef. I'm a member of that community. I won't go into why there's a community rebuild, but it's there. We're also updating to the latest community cookbooks as best we can. There's a lot of refactoring that has been happening upstream that will be great for us. We've also updated a lot of our testing to include some multi-node testing. There's a kitchen-terraform plugin that uses Terraform, so we actually use Terraform to connect to an OpenStack instance. For example, when I was testing the OpenStack upgrade, I could spin up a controller node and a compute node on top of OpenStack, have that OpenStack instance running, and then actually go through each upgrade task and step and ask: does this work? Is everything working? The same thing happened with Ceph. I can set up a three-node cluster, set up a node that mounts CephFS, set up the monitor nodes, and make sure that all the communication is happening in between. I had a student go through and clean all of that up to make it work, and it's been going great.
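A multi-node Test Kitchen setup with the kitchen-terraform plugin is typically wired up along these lines. This is a minimal sketch; the suite name, module path, and output names are placeholders, not the OSL's real config:

```yaml
# .kitchen.yml sketch: kitchen-terraform driving multi-node test
# instances (e.g. on OpenStack) for converge-and-verify runs.
driver:
  name: terraform
  root_module_directory: test/fixtures/cluster   # placeholder path

provisioner:
  name: terraform

verifier:
  name: terraform
  systems:
    - name: controller            # hypothetical controller node
      backend: ssh
      hosts_output: controller_ip # Terraform output with its address
    - name: compute               # hypothetical compute node
      backend: ssh
      hosts_output: compute_ip

suites:
  - name: upgrade-test
```

The Terraform module spins up the VMs, and the verifier runs InSpec over SSH against each node, which is what makes the controller-plus-compute upgrade rehearsal possible.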
And we're also doing a lot of peer review, adding more to our test coverage, and trying to keep up with coding standards as best we can. We have 13 systems left on CFEngine. It's been a long time, but a lot of these can be knocked out pretty quickly once we catch up on a few other things. I really want to get our email relays and our VPN upgraded and switched over, because those are pretty critical right now. The others we'll get to as soon as we can, but I think we're on track to finally get this done by 2020. I gave this talk, I think, last year and the year before; it's a long process, but we're getting there. On OpenStack, we upgraded to Rocky in 2019. We're going to be doing Stein, hopefully in the next month or so. I'm actually one of the maintainers of the upstream OpenStack Chef cookbooks, and I'm trying to get them updated to the latest release. Currently, I think we have them updated to Train, and we're trying to get up to the newest release. We're also planning on adding IPv6, enabling some services to possibly support running Kubernetes on the cluster, and trying to get DNS integration working. We also really need to split out some of the controller node services. The controller node is actually a Ganeti VM running on a system, and it has its limitations, and we're really starting to run into them. So we're working on moving at least the networking and some other things away from it so we can get some performance improvements. So those are some things we're planning on doing. On the Open Compute side, we have this donated hardware. It's circa 2011, I think, but it's still pretty useful. We have around 59 of the nodes allocated, and a variety of projects are using it, as you'll notice here. This is where the RISC-V project is using some things; they're actually running on x86, but emulating RISC-V in VMs on there.
F-Droid is doing some builds. GNOME is using it for some of their GitLab runners; actually, several of these projects are using the nodes as GitLab runners for their CI pipelines. VLC is using it for some of their CI stuff as well, and it's been great for that. The logistics of using this hardware are difficult, though. It actually isn't in our primary data center, because I can't fit the racks in our data center. They're too tall: they won't fit in the elevator, and even if I could manage that, I couldn't fit them with how the network cages are set up in the data center. On top of that, the power requirements are unique and don't fit in with our UPS systems there. So we ended up with a room in our new office location that we keep them in, and it has some cooling issues, which we're running into. It'd be great to be able to actually put some real HVAC in there; it's working out so far, but it's not a good long-term solution. The other issue is that we can't extend our OSL network, which is actually separate from the OSU network, into that room, because of how OSU has their network physically set up. So we run into some limitations with their networking sometimes that I'm still trying to work out. And I can't use our IPv6 that way either, so it's kind of annoying. But hey, it's there, and it's working. We basically port forward the SSH port through our VPN node, and projects can get access to do whatever they want to do. We finally got our IPv6 going in 2016, and we have a lot of our public services using it. Right now we're doing static assignment, based on what our ISP has recommended. We're doing dual stack, so everything is running on both IPv6 and IPv4. We basically assign a project, if they want it, a /56 reservation, and we just put it in the VLAN that we have set up. We've also been using a lot more Prometheus and Grafana.
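To put those /56 reservations in concrete terms: a /56 gives each project 256 /64 networks to carve up however they like. A quick check with Python's `ipaddress` module (the prefix below is from the IPv6 documentation range, not an actual OSL allocation):

```python
import ipaddress

# A /56 from the IPv6 documentation prefix (2001:db8::/32).
# Real project allocations would come from the ISP-assigned space.
block = ipaddress.ip_network("2001:db8:0:ab00::/56")

# Splitting a /56 into standard /64 subnets yields 2^(64-56) = 256.
subnets = list(block.subnets(new_prefix=64))
print(len(subnets))     # 256
print(subnets[0])       # 2001:db8:0:ab00::/64
```

That's plenty of room for a project to give each service or VLAN its own /64 without ever coming back for more space.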
I had a little issue with our data in the last month or so, so we're missing some data, unfortunately; I'm still trying to figure out what happened there. But you can see some of the public dashboards we have visible there. We have the node exporter, we're doing SNMP on all the switches we can get information from, and we're doing IPMI on some of the systems. We're hoping to get some more things going with Apache. It's also helping us track database usage and the billing things we want to do for some of those projects. It's been really, really great. We've also been doing a lot more with Let's Encrypt. We manage that through Chef. It works really great with single hosts, but if we're dealing with failover across multiple systems, it's a little more problematic. We're primarily doing the verification through HTTP, and so there are some issues with that. With HAProxy, we currently have it going through an NFS-based solution on the back end, but it doesn't work really well. We eventually want to have that working on our main FTP OSL site. We have HTTPS on it, but only with our wildcard certificate, which is not good for a lot of projects. So we're looking at refactoring how we have HAProxy set up, and then we'll be able to get that going. And then some other miscellaneous projects: we have RANCID to track our switch configs, we finally migrated to BIND, we increased our storage capacity, and I think one of the last things is that we can finally get all of our legacy systems done. Upgrade OpenStack, upgrade to Chef 16, migrate systems from Ganeti to OpenStack, hopefully get a proper ELK stack in so we get a little bit better logging, and do a whole bunch of other things. Finally get on CentOS 8 and do all that fun stuff. So with that, I have just a few more minutes for questions. If anybody has any questions, please let me know. And while I'm waiting, a few more things on that previous slide.
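On the Let's Encrypt HTTP verification issue behind HAProxy: one common way to make that behave is to route just the ACME challenge path to a single answering backend, so every frontend node doesn't need its own copy of the challenge files. A sketch follows; the backend names and addresses are made up, and I'm not claiming this is how the OSL ends up doing it:

```
# haproxy.cfg fragment: send ACME HTTP-01 challenges to one backend
frontend http-in
    bind :80
    # Let's Encrypt HTTP-01 challenges always arrive under this path
    acl is_acme path_beg /.well-known/acme-challenge/
    use_backend acme if is_acme
    default_backend web

backend acme
    # hypothetical host running the ACME client's webroot
    server acme1 192.0.2.10:8080

backend web
    server web1 192.0.2.20:80
```

With this shape, only the one ACME host needs the challenge files, which sidesteps the shared-NFS back end entirely.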
We're targeting getting the CentOS 6 hosts finally updated. I only have a few left, and I really want to get those done. We're having to work on a lot of updates to our Chef cookbooks to get support for that, and even on upstream cookbooks we're using, to get that to work. You're welcome to reach out to me as well if you don't have any questions right now. I know I covered a lot of content, but you're more than welcome to ask about how things are set up, or if you're interested, let me know. I'm on IRC on Freenode; if you go to #osl, I'm in there. I'm also in, I think, the conference Slack as well; you can find me there if you want to reach out. I'm on Twitter as well. Oh, I do see... oh, I see the questions now. Sorry, they were popping up. All right. How has COVID affected the ability of student workers to do OSL stuff? That is a great question. Actually, most of the students have been able to do all of their work. The biggest problem we had was making sure that they had compute resources in their home locations and a good enough Internet connection to connect back to our workstations, and making sure they could get their VPN set up properly. All of them have been working remotely. We've been using Zoom quite a bit; Oregon State has a site license for that, so we're able to use it. We've been having a weekly meeting, and they've been able to make that meeting almost every time, so we're able to connect, and we can also get things going really quickly if we don't. One change I did decide to make: we'd been primarily doing a lot of things over IRC over the years, and just with the nature of people being offline quite a bit, I finally caved and started using Slack for some of this. Even though it's not a free and open-source solution (eventually we might look into something like Matrix), it's working for now.
So, we're doing a lot of our internal communication on Slack right now. We haven't really opened it up to projects yet, but it's been helping us communicate with the students a lot better, and just know when they're going to be working. They're on summer break right now, so they're all working full-time, and I know where they're at. I think OSU might actually open up some buildings this summer, but I'm planning on not going in at all. Let's see here if there's anything else. I think that was the only main question. Thank you, Josh, for asking it. I think I'm over my time, so if you have any additional questions, please reach out to me, and I'll gladly answer them. Thank you for coming to my session, and I hope you enjoy the rest of your day. Thank you.