Welcome back to another episode of OpenInfra Live, the Open Infrastructure Foundation's hour-long interactive series featuring production cases, case studies, open source demos, industry conversations, and all the latest updates from our wonderful global community. We're live here on Thursday at 15 UTC, streaming to YouTube, LinkedIn, and Facebook. My name is Kendall Nelson. You may recognize me from previous episodes. I'm a senior upstream developer advocate at the Open Infrastructure Foundation, and I will be your host for the day. Today we're joined by experts from Oregon State University, Monash University, Telecom Paris, and also the OpenStack Scientific SIG to talk about how they use OpenStack in academia. Like I mentioned a second ago, we are live, which means we can answer your questions throughout the show. So please feel free to drop comments, questions, musings, whatever you'd like into the comment section of wherever you happen to be watching this stream today, and we'll get to as many of those as we can.

First off, I want to thank all of our member organizations, including StackHPC as our silver member today, who all collaborate to make this show possible. A huge thank you for all of their support. And as a side note, if you're interested in joining the foundation, please check out openinfra.dev/join to learn more. A quick bit of history for background: last year, the OpenInfra Foundation announced an associate category of membership for foundation members to recognize and collaborate more closely with academic and public research institutions and nonprofit organizations that use, build, or sustain open infrastructure. All of the organizations joining today are associate members, aside from StackHPC, which is a silver member, like I mentioned. So thank you for all of their support as well in helping build the foundation's mission of open infrastructure. So without further ado, let's get started. First up, we have Stig Telfer giving an update on the Scientific SIG and how they advocate for research computing and open infrastructure. I'll let you take it away, Stig.

Hey, thanks, Kendall. So I guess we're here today because OpenStack is a killer app for academia. And you might wonder why — why does it work so well for research computing and academia? If we go on to the next slide, you can see it solves a whole host of problems faced by modern academic research computing environments, and the kind of transition those environments are going through — they're really being swept up by a sea change in the way that research computing is being done today and looking forward into the future. So I guess the first thing we look at is where things have been up to this point, and how we adopt and embrace these new changes instead of getting sidelined by them. In current, conventional academic research environments we get these silos of compute infrastructure, where you might see dedicated hardware solutions — systems deployed for a particular purpose, very compartmentalized, with bespoke and dedicated software stacks. We may well see that the software environments being deployed to these systems are the ones that the admins — the cloud and the HPC admins — provide for the researchers. The researchers very much get what they're given, and they do what they can with it.
And finally, the other drawback of conventional approaches to consuming academic compute infrastructure may well be that we're driven into a kind of ticket-based system, in which these very thinly spread research computing admins tend to be in the loop to effect any kind of change. And we can see how cloud and OpenStack and open infrastructure can really play a transformative role in solving these problems for academia. We can bring about this kind of consolidation of many diverse compute platforms onto a common substrate of private cloud. Using that, plus automation and other intelligence brought in from cloud-native techniques, we can build a self-service environment where research computing doesn't have to be based on the flow of tickets coming through to a service desk, or on getting these admins to do the bidding, because that really leaves people spread thin. If we can self-service these things, and if we can provide our users with environments they have a greater degree of ownership of and authority over, everyone is tremendously happier. And this is really the promise that cloud is delivering for academia and research computing environments. Can we go on a slide, please?

But it's not a complete shoo-in, I would say. Research computing is not really the same in every situation as a conventional cloud environment. A lot of assumptions about the way that people consume cloud don't match up with the ways that conventional scientific computing is done and the way that infrastructure for scientific computing is managed. And so we have quite a lot of challenges in using OpenStack and adapting OpenStack to work well in this tremendously broad environment that is academia and research computing. It's highly diverse. And at the sort of tip of the triangle, where we start to look at these really high-performing supercomputer environments, a cloud can be incredibly powerful and transformative, but it really needs quite a bit of expertise and adaptation to make it work well in all of these contexts. Furthermore, there's a huge body of conventional wisdom about how research computing infrastructure in academia is installed and operated and managed and reconfigured, and cloud is just such a different mindset and a different skill set to all of that. And so, as well as the solutions, we have a great deal of problems to overcome, and the great drawback is that these learning curves are just enormous — and if we were solving them in isolation, in many cases they may well become insurmountable. Next slide, please.

So I guess that's where the Scientific SIG comes in, and the idea here is that it's a collaboration, or a sort of loose affiliation, of different research computing institutions and members from around the world. These aren't competitors — certainly not at the admin level. I mean, there's academic competition, but for the people running the clouds or creating these systems, everyone is a fellow traveler, and this creates a somewhat unique and tremendously cooperative atmosphere within the Scientific SIG. There's no de facto opinion on the right way to do things. There's a huge amount of diversity in the solutions, but there are plenty of fellow travelers who have the same problems to solve, you know?
And so it brings about this kind of nice, rich discussion where people bring in information, and generally it's a great forum to go and find answers about how to address the problems of research computing using the solutions that OpenStack and private cloud can offer in return. One of the things I would like to say is that if you're interested in the Scientific SIG and the way that it solves these problems for OpenStack today, look out for the book, The Crossroads of Cloud and HPC, which is freely available from the OpenStack website and has seven chapters covering different case studies and different topics around how to solve research computing and academic challenges using open infrastructure. It's a tremendously good tool for this. Today, in the age of COVID, the Scientific SIG is a Slack channel, and it's free to join. I didn't post a link here in case it got overwhelmed with other stuff, but I think if you can find us, it should be easy to reach out and we'll happily give you a link — there's one, I think, on the Scientific SIG's wiki pages on the OpenInfra website. Mostly what we do is advocacy around the Scientific SIG: talking about how to use OpenStack for research computing use cases at OpenStack conferences, but also, conversely, at high-performance computing and scientific computing conferences. Mostly we keep in touch through the Slack channel, and we have weekly IRC meetings which tend to also go ahead, but the real output and the real bonus of the Scientific SIG is either at the PTG, where we do lightning talks, or at the OpenInfra Summits, where we tend to get together and have a few hours sharing information and talking over stuff, and it's usually a good social occasion as well. So it's a very friendly group covering all aspects of academia and research computing, and I'd be very happy to see you there. We have about 150 members, which is just tremendous, actually, to see this group of people reaching out and helping each other in this way. I think probably I should stop there and actually get onto some real academia use cases.

Well, I don't know — I think that you've done a lot of work setting the groundwork for everyone else and being there as a resource for them. I love attending the Scientific SIG sessions because I feel like I learn something every time, not just about OpenStack but about science and research in general. So do you happen to know which time slots you have for the PTG, for people interested in attending? I don't know at the moment — I'll have to go look. Okay. Yeah, it is a free event and it'll be the week of — man, I've got to look at a calendar — I believe it's the week of the 11th. Numbers. Oh no, I'm wrong, April 4th; I was looking at the wrong month on my calendar. But we hope to see you there, and obviously the Scientific SIG will be there and hopefully a few others. I know the Scientific SIG collaborates somewhat closely with the Large Scale SIG, given the amount of data that you process in these research projects. Did you want to talk a little bit about that SIG as well and how you work together? So we've definitely had some good discussions around how to manage large-scale infrastructure. In the contexts I'm aware of, large-scale bare metal is another one, so there's a good deal of alignment with the Bare Metal SIG as well.
So managing the high-performance computing resources of a university usually involves managing hundreds or potentially thousands of systems in a bare metal cloud, and that brings about some great challenges — and great benefits too.

Yeah, and the other group that you collaborate with is the OpenStack operators, obviously being mostly operators yourselves. Do you know if there are any upcoming meetings for that group at the PTG, or maybe in Berlin? Ooh, not that I'm aware of. I haven't even started to think about the Berlin schedule yet, so, no, I'm sorry, I don't know yet, Kendall. It's totally okay. We'll talk a little bit more about Berlin towards the end of the show. So thank you so much for this overview — it's been great, and we'll have you back towards the end for more questions.

So next up, we have Steve Quenette to talk about why the Monash eResearch Centre at Monash University uses OpenStack. Hello, Steve. Thank you, Kendall. We might as well move on to the first slide. So I guess the question we've been asked is: why use OpenStack in academia? And I thought perhaps the one thing I could do amongst my peers here is ask what the value of OpenStack is to a university. Universities, you know, are complicated places. I could answer simply and say I'm a cloud builder, I run a facility, and so on and so forth. But what I wanted to convey through this set of slides is something about the complexity of universities and academia, and why that makes sovereignty over digital things really important. And so, just to explain a little bit: I live in a business unit of the university that's about research — it's a research office area, a research infrastructure area — but most of my staff are IT professionals, IT people. We have a shared vision and drivers, and they're really these two bullet points. One is that researchers are connected. They're not really living inside the campus, inside the institution — they're connected. Everyone ticks that box. And the other is that even if you just use Excel and make a spreadsheet, you're programming something; you're creating technology yourself as a researcher. What we're really dealing with — and Stig was alluding to all these things — is that researchers intrinsically need advanced computing, and they'll be pushing the boundary. I'll talk about that as well. And so our job in the university is to consolidate those styles of research needs, and to do that in a manner that delivers the greatest impact to those researchers, the universities, and mankind. But we're cash constrained — it's with the dollars that we actually have. Next slide, thanks.

So, just to talk about this connected-world angle — and I always point people to the website below, research.monash.edu — there are some key stats at the very top, and that is that over the last five years we've had in the order of 5,000 researchers and 26,000 projects. And if you look at that map, it points to 60,000 connections around the world. The challenge that we face as universities is, well, it's easy to go and have an IT process to control everything, where everything's a form. That's hard to do when you've got 20,000 projects, 80,000 connections around the world, and 5,000 researchers. So we need to be able to scale up without impacting the rate at which those people connect. This world is intrinsically connected, and they bring their own technologies along into that collaboration.
So if we can move on to the next slide. What we started to realize — and this is in the order of 10 years ago — was that we needed to enable researchers to work on projects together as they emerge, wherever they want to take them. And so we created this federated research cloud. It's called the Nectar Research Cloud in Australia, and it's now hosted, or funded, by the Australian Research Data Commons, who are also members of the foundation. And we did something really innovative: we made it really easy for researchers to get a resource anywhere in Australia, on any one of these nodes, while centralizing some of the things that we needed to do. Then my group got involved with banks — researchers doing stuff with banks — and we needed to secure the sensitive information we got from them and prove we could do those things. And what we notice today is that there is still this culture of data privacy, of data classification, asset identification and safe havens or enclaves or whatever your favourite word is — processes that make sure that people keep their promises.

We had a lot of researchers asking for GPUs, and we were very early in pushing the OpenStack community to get GPUs working in OpenStack — they didn't really work when we first started. That was really important, because most research is visual; researchers need to interact with things. And so we ended up with communities creating their own platforms, either VDI-based, and now very much Kubernetes-based and everything-based. So bringing GPUs into the fray and helping research groups create those platforms is really, really important. We also got involved with the HPC community, and now we have HPC built on top of our OpenStack — and we've had that for a very long time, being among the first to bring HPC workloads into the cloud space. In more recent times I've been working a lot with NVIDIA around GPUs and SmartNICs, looking at how we encrypt and do all these things — whatever your CISO's or CIO's favourite security tools are, offload them so that the researcher doesn't have to run them. They get done in the background, but the researchers also don't lose resource: the security isn't taking cycles away from them. So these are all things that we're doing to consolidate the needs of the researchers, and we need this language that lets us pull everything together and be efficient. And that's where OpenStack really comes in.

So, just on to my final slide, thanks. If I could tease that out a little bit: one of the things that we're noticing is that research priorities across the globe are trending towards society and translation. You can sort of see that as the idea of the lab trending away from being a room where there's a microscope to being the whole campus, the university — the being of a university. And so there's this sovereign idea of needing to own how technology works, right? And here's a really anecdotal example that came up recently: a local cloud provider who claims to be aspiring very closely to net zero — really, really good stuff — and they say, oh, they're 25% cheaper than the hyperscalers. Okay, well, that's interesting. That's good. At the scale that we do things, across all the workloads, we were in the order of five times cheaper than those hyperscalers per workload, and that's been that way since 2014. But what's really interesting is that as soon as A100s — those kinds of GPUs — get involved, that ratio comes down to about three times. And why is that?
And that's because the energy density is so much higher, and our power bill goes up so much further — the operating expense is much higher. And we've learned, I think, where the university is really interested in being — and it's got a target to be net zero by 2030. So there's a tech problem here, but it's actually a research culture problem. One of the advantages of OpenStack, and of owning the whole stack all the way down, is that we can start to tackle this problem. We can look at how the university gets the greatest value — keeping that high rate of, you know, cheapness, of dollars per capacity — but also going net zero, because we can be working with the technology providers to do that, and all sorts of bits and pieces. So I'll just leave you with that food for thought. The value of OpenStack is many things; it's not just technology. It gives us the ability to be a university. That's it, Kendall.

Awesome. Well, thank you so much for all of that. And also thank you for being awake in the middle of the night, because I know it's not an excellent time zone translation for you right now, but yeah, thank you for being here. So I have two little questions — or a question and a comment. One question: you had mentioned GPUs. What sort of work did Monash University, or you specifically, do to help drive those efforts of getting that implemented in OpenStack? So back then, I think it would be fair to say we were too afraid, or anxious, to get too much into the OpenStack code, and we very much aligned ourselves with the user community and brought use cases, and together with the vendors and OpenStack we helped drive and validate the code that was going on back then. So it was a very user-centric contribution that we made, and then we probably just benefited from being one of the first public entities that could publicly say, hey, we're doing this. Yeah — even that sort of contribution is huge and super helpful for the people working upstream, because as a developer you don't know all the ways the code is being used. So any feedback like that from users and operators is so helpful, which is why we have things like the forum at the OpenStack Summit — or Open Infrastructure Summit — that's coming up in Berlin.

So the other kind of follow-up question that I have: I know in the more recent releases, vGPUs have been implemented in Cyborg and Nova. Have you been tracking that, interested in that? Is that going to help at all? Yeah, definitely. And ARDC has helped drive and push that along — creating that step change, being the market failure fixer. ARDC has come in and funded capacity for dense GPUs and virtualizing them, really helping get us to the point where we can get GPUs through Kubernetes at scale as well, and all those bits and pieces. So yeah, definitely. And in all those things, the same pattern occurs: we've got applications that we help drive and test and validate. And the ARDC folk — connected to that, there's a group called Core Services, and several of the people there are active contributors to Nova and other things as well. Awesome, cool. Well, thank you so much again. One other side note: there has been a group of people talking about forming a SIG, more at the foundation level rather than OpenStack specifically, around net zero sorts of things — being more aware of the impacts on the environment of these huge clusters people have, and that sort of thing.
So hopefully Berlin will have something forming concretely. There was an article from the BBC recently that went out about how they track their emissions and things; maybe there's some collaboration there. Yeah, that's brilliant, I'd love to talk to them. Awesome, cool. Well, thank you so much. We'll bring you back on in a little bit.

So, I have had the pleasure of working with our next guests for a little while now. Remy and Mark at Telecom Paris are here today to introduce what their OpenStack infrastructure looks like at Telecom Paris, and some examples of the research projects they're using OpenStack with. Take it away.

Hi Kendall, thanks for the invitation, and Mark is here also. So I am an associate professor at Telecom Paris. Mark is with me — Mark, can you introduce yourself? Yep, I'm a research engineer at Telecom Paris, in the networks research group. So maybe we can go to the next slide, which presents the school. Telecom Paris is a public institution in France. It was founded in 1878 — that's quite a long time ago. Well, it depends, because we have older universities, but still. And one funny anecdote is that the director of the school in 1904, I think — back then it was Édouard Estaunié — he's the guy that invented the word telecommunication. I think it's a funny anecdote that makes the history of the school. So it's a school of computer science and telecommunications. It's also a member of big French institutions — a network of computer science schools distributed around France — especially the Institut Mines-Télécom and also the Institut Polytechnique de Paris. In our school we have around 1,600 students and four departments and research labs: the first in computer science and networks, the second in electronics and communications, then image, data and signal, and also social sciences and economics. I have to mention also that Mark created a small center, COSY, the Center for Open Software Innovation. Mark is also the maintainer of Inkscape, a very big open source project that maybe you know. And we also have the founder of Software Heritage, one of the biggest source code archives, I would say. All right, let's go to the next slide to see the infrastructure we have, with Mark.

Okay, so I will quickly present the OpenStack infrastructure that we have in our department. It's a fairly modest infrastructure, but sufficient for our needs. We have around 10 compute servers, and more recently we added three compute nodes with GPUs, which allow us to spawn VMs with a GPU. We don't really use virtual GPUs; the way we do it is with PCI passthrough, so we have direct access to the GPU with that method, which might be more efficient, but it's also easier to set up, which is why we did it, basically. The way we work, we give one account — one project — per research group, so that research group can share the VMs and share the networks, and they can create networks and VMs almost freely, basically. So we end up with almost 80 networks in our cluster, and we also have lots of space for storing objects, images and snapshots, so that everyone can, without too many restrictions, do backups and snapshots and store whatever they want in the cluster. And more recently we also added support for the Octavia and Magnum projects in our infra, which allows people to create Kubernetes clusters. And we are soon going to update to the Xena version, since Magnum works better in that version from all our tests.
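To make the GPU part concrete, here is a minimal sketch of what the PCI passthrough approach Mark describes typically looks like from the operator side with the openstack CLI. This is an illustration rather than Telecom Paris's actual configuration: the alias, flavor, image, network and key names are hypothetical, and it assumes the GPU hypervisors already have a matching PCI alias (called "gpu" here) defined in nova.conf.

    # Flavor that requests one passed-through GPU via the PCI alias
    # (assumes nova.conf on the GPU nodes defines a [pci] alias named "gpu")
    openstack flavor create --vcpus 8 --ram 32768 --disk 100 \
      --property "pci_passthrough:alias"="gpu:1" g1.large

    # Boot a VM on that flavor; the scheduler places it on a host with a free GPU
    openstack server create --flavor g1.large --image ubuntu-22.04 \
      --network research-net --key-name my-key gpu-vm-01

The guest then sees the physical GPU directly, which matches the "direct access, easier to set up than vGPU" trade-off described above.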
All our physical infrastructure is backed by high-speed networking, at least 10 gigabit, depending on the servers. Next slide, please. Maybe I will go through the services we have. So in our department we use this OpenStack — we started it, I think, a few years ago, maybe seven years ago, I'm not sure — by aggregating servers and then installing OpenStack. Some of the services we have are used by the faculty: Nextcloud is the alternative to Google Drive, for example, or OneDrive, and we have plugins like OnlyOffice, which is almost the same as Google Docs or Google Sheets. We also have online courses on Moodle, and an equivalent of Overleaf, ShareLaTeX — which I think is the open source code of Overleaf — to collaborate on LaTeX documents. We have a GitLab also, PeerTube for videos, almost like YouTube, and BigBlueButton for web conferencing. We also use Matrix a lot with some of the projects — Matrix and Element for chatting. So these are examples of services installed on the OpenStack and given to the faculty. Some of them are partially open to partners outside of the school also.

One argument for having everything on site, on our own servers, is the European GDPR — the General Data Protection Regulation in Europe. Every European citizen has the right to know who is processing their personal data, and also to whom — to which actor — the data is given, and for what reason. And there is also a responsibility on the actors that process these data to notify the users of why they do it and for what reason. We also have, in Europe, control authorities that have the power to regulate all that and to sanction some of the big companies that don't want to be precise about what they do with personal data. So this is one very important reason, I think, to have local services. We are a public institution with public money. As you can maybe see behind Mark: public money equals public code, and public data must be protected — personal data must be protected. Also, every time there is a good cloud service, we try to find an open source alternative, install it on our servers, test it, and if it is good, maybe open it to the faculty, so that we avoid vendor lock-in on licenses. If you collect a lot of licenses, it can be very expensive, and we are a small school, so we don't want to spend so much money on big software licenses. If there is an open source alternative, it's a good way to provide the service. Also, the sovereignty of the nation is one aspect. We also use this OpenStack for research projects — I give a few examples here: future routing architectures for the internet, SDN (software-defined networking), or network function virtualization. And I also know that one of my colleagues is working on low-latency networks, 5G and IoT, and I do know that OpenInfra has a project on low-latency networks. You were talking about net zero — of course, green networking is a hot topic in research now, I would say. And we can go to the next slide.

So we have several ways of using OpenStack. As Remy mentioned, we can use it to host services that we provide to the faculty, but we must mention that VMs are actually a very good tool for students, because if you give a VM to someone, they can be root on it and install anything. So basically they can tinker and try things without any fear of huge reinstallation steps or other things — because if students contact me saying, "I crashed my VM, can I have a new one?"
I just do an openstack server rebuild on it, and one minute later they have a fresh VM and a fresh environment, which is very convenient. And the networking aspects also allow us to teach things about distributed environments like Hadoop or Spark. What we did for a specific course is that we spawned one VM per student, all in the same small subnetwork, so everyone could locally install Hadoop and whatever services they wanted, and since they were all on the same network, they could do parallel processing across it. That was a fantastic tool for us, which basically let us get rid of using public clouds for that, which we had done in past years, for instance. By having everything on premises, we were more independent and more free to do whatever we wanted, without getting into the subjects of pricing and the limitations of some public clouds, which were outside of what we wanted to teach.

We also teach OpenStack itself, because it's a private cloud environment that all students in computer science might be interested in learning, and might also encounter later in their tech careers. We use DevStack for that. So we actually have VMs in our OpenStack cluster that spawn DevStack — an OpenStack environment which the students can administrate themselves, which is kind of meta and cool. In that project they can learn the architecture of OpenStack and its components, so that they know what is responsible for what in OpenStack. They can create VMs, use block storage, et cetera, and just deploy OpenStack in general.

If I had to give three points to take away from our usage of OpenStack: first, OpenStack is a great way to provide services to both students and the community within our academic environment while keeping total control of the infrastructure. We host everything and we provide everything to students and faculty, which means we control all the data and all the access. So it's very convenient for data privacy, as Remy mentioned, because we host everything and our few cluster admins have access to everything. Second, on the administration side, the administration and setup of an OpenStack cluster are, I think, fairly easy and quite viable at our medium university scale — not the huge scale of a public cloud, and not the small scale of one or two servers — which makes it very manageable for us. And third, with the many sub-projects of OpenStack, we find it versatile enough to be adapted to every need we have encountered so far, from the teaching and tinkering context of students to the production context of having, for instance, a Moodle on it. And that's it for us. Thanks.
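As a rough illustration of the teaching workflows just described — the quick rebuild and the one-VM-per-student setup — a sketch with the usual openstack CLI might look like the following. The image, flavor, network and key names are hypothetical, not the school's actual ones.

    # "I crashed my VM, can I have a new one?" -- reset the instance to a clean image
    openstack server rebuild --image ubuntu-22.04 student-17-vm

    # One VM per student on a shared teaching subnet, so the class can build a small
    # Hadoop/Spark cluster together (30 students in this made-up example)
    for i in $(seq -w 1 30); do
      openstack server create --flavor m1.medium --image ubuntu-22.04 \
        --network bigdata-net --key-name teacher-key "bigdata-student-$i"
    done

Because all of the instances land on the same tenant network, the students can reach each other directly for distributed jobs without any extra routing.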
Kendall, I think your mic is still off — maybe some connection problem. Yeah, you're back. Doing things live is great. It's awesome. I love technology — when it works, really. So I was actually wondering what version of OpenStack you're running, how you handle upgrades, and how often you do them for your cluster? We're currently in the process of updating to Xena, which we should do in probably two weeks, and we try to do upgrades in low-activity periods — so not exactly when a release gets published, but quite soon after. And we stay on Ubuntu LTS for the servers that host OpenStack, so we update them every two years, basically, and restart them every two years.

We find that sometimes we have a disruption of services, especially on the network side, during the upgrades, but as long as it lasts less than a few hours, people don't really complain, if we do it when they are not actively using the services. So we try to keep the disruption minimal, and whenever possible we upgrade the servers gradually and live-migrate the critical services to servers that are not yet being touched — we don't upgrade everything at the same time when we can avoid it. Yeah — minimal disruption, especially when you're doing large-scale research processing of data, I expect is very important. We try to warn researchers that have courses running on the cluster, so they know that if they provide one VM per student, we won't live-migrate everything; we'll just live-migrate the critical ones, and that's it.

Yeah, that makes sense. Cool — well, it sounds like Telecom Paris has a great OpenStack situation, both using it and teaching it. So I think we'll move on to our last guest today, who I've also been working with pretty closely: Lance Albertson, and actually a couple of other students, at Oregon State University. So Lance is here today to explain what the OpenStack architecture at the Oregon State University Open Source Lab looks like and why they chose OpenStack.

All right, thank you. Yeah, so my name is Lance Albertson, and I'm the director of the Open Source Lab at Oregon State University. First off, I'll talk about us a little bit — you can kind of think of us as a free and open source software hosting company. We've been around since about 2003, so well before public clouds even existed. It kind of started within the university IT units, with some connections to various open source projects, and it grew from there. Basically we're part of the College of Engineering currently, and I kind of treat us like a startup at the university. We are a part of the university, but we're also part of the nonprofit side of the university as well. We offer free and low-cost hosting services for a lot of open source projects. We also offer a lot of co-location services, and as part of this we offer virtual machines and other cloud services as well. One other unique thing that we do is provide access to a lot of different kinds of CPU architectures — PowerPC, ARM64, and more recently we actually started hosting some RISC-V systems on development boards in the data center, so that's been fun. We also do a lot of software distribution and mirroring for a lot of projects — you might have actually used some of our mirrors throughout the years. The other part I should mention is that we have undergraduate students who work for me, and they gain real-world experience on all of these production systems. They get access to managing all of these things, interacting with the clients — the open source projects — and making sure their services are running and everything. A lot of the past graduates of our program are pretty widespread throughout the industry; actually, one of the co-founders of CoreOS was one of our first students. So a lot of cool folks have gone through our program over the years. Right now it's just me as the full-timer, and I've got about six to ten undergrad students who work for me. Next slide, please.

So how do we use OpenStack? The main thing is that we want to provide compute resources to open source projects.
Most of the time they just need VMs to run some services — usually some kind of web service. More recently, a lot of projects want to be able to run dedicated continuous integration runners, whether that integrates with GitLab or GitHub or Jenkins or whatever type of CI system they use. They just want to be able to do that instead of using whatever shared resources they have. Actually, before we even got into OpenStack — and we still use it today — we used another project called Ganeti to manage a lot of VMs. We use that more for our own VMs, the ones we want to keep around. But OpenStack really provided that API-driven approach that we needed. So that's why we've been using OpenStack — I think we started using it in about 2013, and we're still using it today.

Some unique things that we do: we use OpenStack to provide easy access to the POWER architecture and ARM64. We've partnered with IBM on the POWER side and with Ampere Computing for the ARM64 systems. This allows us to easily deploy VMs and provide native access — or at least near-native access — on those architectures. It has really, really been important for those ecosystems, because some of these systems, at least on the POWER side, are really expensive and kind of a pain to maintain. So we take that off of everybody, and we can just say: you need access to this, we'll spin up the VM really quick. On the ARM side it's the same thing — we actually have servers that are in the rack, not just Raspberry Pis running somewhere randomly. So they're nice, great systems. And as I mentioned before, these are primarily used for a lot of CI runners and so forth. So for a lot of the projects you see out there, if you see some kind of CI job running on POWER or ARM, it might be running on our cloud — we have it out there quite a bit. Or a lot of the software, the binary builds that you see — maybe even Docker or some other things — might actually be built on our OpenStack cluster, which is pretty cool. Some other use cases are just debugging architecture-specific issues that may happen. Whether it's some weird bug that only happens because of that CPU architecture, we can easily spin up a VM — we've had several kernel developers over the years work through things that way and figure them out. And I guess I already mentioned the software binary builds. So we're used quite a bit for a lot of things, and it's been quite interesting. Next slide, please.

So, our OpenStack infrastructure: we're currently a little behind on our releases. We're on Stein, and I'm getting pretty close to getting it up to Train. We're a CentOS shop primarily, and our plan is to move to CentOS Stream 8 and beyond. We use Ceph as our storage backend, and we actually have two separate clusters: one cluster for the x86 and ARM systems, and one dedicated to the PowerPC, since we got that donated directly from IBM. I think between those two clusters, the raw storage capacity we have is about 200 terabytes on each, and we're continuing to upgrade those as we can. We're a Chef shop for how we manage all of our systems with configuration management, so we use the OpenStack Chef cookbooks quite a bit, and that makes things interesting.
And then one other thing that's been kind of interesting to deal with is that we actually separated all of the various architectures into their own clusters, just to simplify the management plan for a lot of this. That may change down the road, to simplify authentication and everything related to that, but that way we can just have everything dedicated — we have no confusion. With x86, we have a variety of hardware running on just 12 hypervisor nodes, and we can easily expand that. On the PowerPC side, we started with Power8 systems — actually, we got started before Power8 was officially released; we got it going on a Power7+ machine that was acting like Power8, so that was fun. In that early 2013-2015 timeframe it was a lot of work to get OpenStack to work on Power, and I contributed quite a bit to get some of that stuff fixed. But as it stands now, we have half running on Power8 and half running on Power9. We're using host aggregate groups to separate control between those two, and then we just have a flavor that's either P9 or P8. So when people ask, hey, I want P9, we just say: here, go on the P9 flavor, and it'll get deployed on there. We also have two dedicated nodes with local NVMe storage — we have some projects that really needed some higher-performance IO — and those two dedicated systems aren't connected to the Ceph backend. So that was interesting to figure out.

On the ARM64 side, it's a fairly similar setup. We have one controller. At one point, we're gonna probably — oh yeah, sorry, I'm not looking at my questions over here. Let's see, what was the question? Yeah, we only use Ceph for our storage right now, so we don't use Gluster. Actually, when we started this a long time ago, we started with only local storage, and then we had to go through a whole migration process of moving our active VMs from local storage over to Ceph, which was very interesting. We've had some issues with performance on Ceph a little bit, but I think we've worked out a lot of it — a lot of it is just needing to get it on SSDs and expanding that a little bit — but it's been working pretty well. But, oh, back to the ARM side of it: one cool thing about the ARM cluster is that we're currently in the nodepool for OpenDev. So if you're using Zuul to do some of your ARM jobs, some of those jobs are actually going to our OpenStack and running here on the ARM side. So that's pretty cool. Next slide, please.

So why do we use OpenStack? Well, being the Open Source Lab, we want to use open source software as much as we can, and the nice thing about OpenStack is that it's open source and it has a very large, supportive community. We can easily contribute back if needed. As it stands now, I'm actually the PTL of the OpenStack Chef project, so I can help maintain that. We're a little behind on the releases, but it's still pretty active, and I'm pretty active in the Chef community itself as well. So we contribute back to the open source community quite a bit. And the other nice thing about OpenStack is that it provides API-driven access to all the infrastructure resources. So some of our projects may use Terraform, or maybe Heat, or some other method of being able to easily spin up and spin down some of their infrastructure.
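The Power8/Power9 split Lance describes a little earlier — host aggregates plus a per-generation flavor — can be sketched roughly as follows. This is an illustration, not OSL's actual configuration: the host names, flavor size and property key are hypothetical, and it assumes the Nova scheduler has the AggregateInstanceExtraSpecsFilter enabled.

    # Group the Power9 hypervisors into an aggregate and tag it
    openstack aggregate create --property cpu_gen=p9 power9
    openstack aggregate add host power9 p9-hv01.example.org
    openstack aggregate add host power9 p9-hv02.example.org

    # A "P9" flavor that only lands on hosts carrying that tag
    openstack flavor create --vcpus 8 --ram 16384 --disk 40 \
      --property aggregate_instance_extra_specs:cpu_gen=p9 p9.medium

With a matching "power8" aggregate and "P8" flavor, users simply pick the flavor that corresponds to the CPU generation they want, exactly as described above.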
We're actually looking at possibly getting some of the GitLab automation pieces in there, so we can get that rolling as well for a lot of folks. And the nice thing is, now that a lot of the hard work has been put in place, it works really well on a variety of platforms and architectures, and it's gotten really stable as well. It's been pretty easy to manage it throughout. And I think that's my last slide — I can't remember if I have one more or not. I think that was your last one. Yeah, that's the last one.

Well, yeah, I love that stability is on there, because a lot of people, you know, back in the day were like: OpenStack, it's so big and complicated and not stable. Well, it is now, and we're here to stay, obviously, with all these institutions using OpenStack. Yeah — OpenStack is now, not in the past. It's here to stay for sure. So, man, I totally had a question a second ago, and I should have written it down, because it just disappeared from my brain. Oh, I remember. So you've talked about all of these different architectures that you run within the OSL. Are you — is the OSL, or are you specifically — involved in the Multi-Arch SIG that OpenStack has? No, but we probably should be. Well, I'm not sure if they're planning on meeting at the PTG, but if not, hopefully in Berlin there'll be conversations about that. It's funny how all of these different SIGs kind of overlap and touch. You have the Large Scale SIG and the Bare Metal SIG and the Scientific SIG and then the Multi-Arch SIG, and it's a lot of the same people, but working on different things and having different use cases, just collaborating. It's so good. I love it. Yeah, speaking of collaboration, we are actually working with another unit on campus that does a lot of the biocomputing stuff, and we're probably going to be using the Ironic system quite a bit more to provide access for some folks to some of the POWER infrastructure and the GPU stuff as well. So it's going to be expanding quite a bit. Awesome. Yeah, very cool.

So I know you mentioned you were a little bit behind in terms of releases. Chef — I'm not as familiar with that one for deployment; I know there's OpenStack-Ansible and Chef and Charms and all these different tools for orchestration and that sort of thing. But how are you planning on doing the upgrade? You said Stein is where you're at right now, so there are a couple of releases, obviously, that you've got to catch up on. I do incremental upgrades, basically. I go through each release one at a time, and I actually build OpenStack on OpenStack and do a test migration within that. One thing I forgot to mention is that we use OpenStack for testing our infrastructure quite a bit. We write our configuration management in Chef, and we have a tool within Chef called Test Kitchen that connects to OpenStack to spin up VMs; then we can run Chef, and then we can run another tool called InSpec to test and verify everything's there. So I do that with OpenStack. I can spin up a whole test infrastructure on the old system, then flip things over to the new one and see how that works. And if I do things correctly, I can run multiple systems and do multi-node, so I can test the interactions between them, which gets interesting. I would say that's like OpenStack-ception, and it kind of hurts my brain a little bit to think about, but if you can keep it straight... Yeah, I can keep it straight, yeah.
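For readers who haven't seen that workflow, here is a hedged sketch of the Test-Kitchen-against-OpenStack loop Lance describes. It assumes a .kitchen.yml already configured for the kitchen-openstack driver and an openrc file holding the cloud credentials; the instance name is hypothetical.

    # Load OpenStack credentials (OS_AUTH_URL, OS_USERNAME, etc.) for the driver to use
    source ~/openrc.sh

    kitchen list                         # show the suites/platforms defined in .kitchen.yml
    kitchen converge default-centos-8    # boot a VM on OpenStack and apply the Chef cookbook
    kitchen verify default-centos-8      # run the InSpec tests against that VM
    kitchen destroy default-centos-8     # tear the test VM back down

The point of the pattern is that the same cloud that runs production also provides disposable machines for validating the configuration management before an upgrade.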
Yeah, I have a script that basically does the upgrade, so it simplifies it a little bit. Basically, I upgrade the controller nodes first — I'll let everybody know that the control plane is down, but all the VMs are still running — and then after I get that upgraded, I go through all the hypervisors and those just get upgraded. The fun part with this next upgrade is that we're currently on CentOS 7, and this is the release where we can switch over to 8, so we'll have to reinstall all of the nodes from 7 to 8. We'll do some migrations with that, and I have a separate controller node that I'll be setting up for that as well — we're migrating to some bigger systems for the controller nodes, because we're running into some performance issues there. Yeah, getting to the newer releases should help with some of that. Yeah, we were actually running into some issues where one of the projects — actually, I think it was MSYS2, which provides a toolchain of Windows-based open source software tools — wanted to run ARM64 Windows, and with the current version of everything we have on there, with KVM and all that, it just wasn't ready. So it's like, okay, that's a bigger push for us to get there. We're starting to run into issues like that.

Yeah, yeah, awesome. Well, thank you so much for all of the information. I think we'll bring all of our speakers back on now, so we'll have everyone who's participated today — Remy and Steve and Mark. Yeah, beautiful. Here you go: academia and OpenStack, right here, the faces. So I just want to remind the audience: if you're watching us live right now, please throw questions or comments into wherever you happen to be watching this live stream. Otherwise, we have a couple of other questions set up. So, first one: how does OpenStack allow you to solve the problem of educational institutions needing access to newer or more advanced resources? And I have this question for basically everyone, so I don't know who wants to go first. Steve, would you like to go first? We'll wake you back up, since it's the middle of the night for you. Sure, sure — I'll answer before I fall asleep.

Yeah, for us, the answer is yes, and the way we do it is we have an evergreening program — we call it an incremental refresh. Every year we're removing some of the fleet and putting in a new fleet, and that gives us the chance to ask the researchers — or they might have money themselves — what equipment they really need. That's how we make sure we're perpetually getting new equipment and new resources in place. The really interesting dynamic we have at the moment is that our OPEX is quite stable, other than that energy and electricity problem, which comes back to the topic I brought up in the talk.

Did anybody else want to answer the question too? Yeah, maybe. I would like to suggest to all academics to prefer teaching OpenStack rather than Amazon Web Services, Google Cloud, Microsoft Azure, I don't know what — because it's open source, you can look at the source code to learn how it works. With the others, you cannot. And you have a lot more fine-grained control over every aspect of your cloud, because it is open source and you can deploy it however you want. Yeah, I support this. I love working with universities, obviously — Telecom Paris; I work with Lance at Oregon State University.
I also work with a professor and students at North Dakota State University every year, and also Boston University as well. So, more universities. I think, if I can add something here: in terms of new hardware or new advanced resources, I think the easy one is in the software domain. OpenStack really enables us to innovate in research computing, in the software platforms that we're using, and when we start to think in this more cloud-native way, we get a much more rapid progression in academia in the kind of software that we're using. That isn't to say that we can't use OpenStack for access to exotic hardware as well, but quite often when we do that, we get a resource that is contended — everyone wants to pile onto the new kit. And so we get this really interesting and fairly novel problem in cloud, which is that there is a finite size to the infrastructure we are exposing and making available. I guess the public clouds like to present this illusion of infinity, and it's kind of deceptive, but in a research computing environment we're very much the other way around: we want to maximize the use of a very finite resource. And that brings about a whole lot of innovation in terms of: how do we share that? How do we manage this contention and the sharing? There's a whole other discussion there, really, but it's something that, within our team, we call the coral reef cloud — if you think of the corals on a seabed and how they compete for space. It brings in a lot of interesting discussions around reservation and preemption and just sort of pushing people out of the way with higher priority. It's an interesting place to be in.

That's very interesting, because I remember last week, Mark, I asked you — well, I exploded my quota for the project; Mark, could you add more instances, for example? But indeed, I don't know. Yeah, that's a very good question, especially for net zero. We had this problem really early on, where people were given rights — allocations — and they would hoard them. They would sit on them and save them for a rainy day. They weren't doing anything. And so we ended up having to deal with this people-problem type of thing, and made it so every reservation — sorry, allocation — only lasts a year. There are solid services that run for 10 years, right, and they're fine, but in principle there's this idea that resources are finite, you have them for a period of time, and if you don't use them well, that may change. Yeah — if you don't use it, you lose it.

Yeah, well, unfortunately we are out of time for today, but this was such a good discussion. I'm so happy that you all were able to join us. Thank you so much for sharing your insights and wisdom, and all of this inspiration about the research being done and the collaboration between universities and all kinds of organizations — that's so good. Gives me the warm fuzzies. So, as you hopefully already know, we are bringing the OpenInfra Summit back this year, and we are headed to Berlin, Germany, June 7th through 9th. The summit schedule will be available next Tuesday, March 8th, so make sure to look at what exciting sessions we have planned, and don't forget to get your tickets before prices increase on March 18th — so many numbers.
And speaking of the summit, we'll have another great episode of OpenInfra Live coming up on March 17th to talk about some of the 5G, NFV, and also edge sessions that you definitely don't want to miss at the OpenInfra Summit in Berlin. So you have the 17th, the 18th, and March 8th as well — lots of dates. Also, note that due to daylight saving time changes, we'll start all future OpenInfra Live episodes at 14 UTC instead of 15 UTC. Unfortunately, the world is not flat, so we have to deal with these silly things like time zones — especially for poor Steve. Also, remember that if you have an idea for the show, we want to hear from you. Submit your ideas at ideas.openinfra.live, and maybe we'll see you on a future episode. Thanks again to everyone on today's panel. We are OpenInfra. Thank you. Thank you.