platform. And we spent the last year or so modernizing and streamlining some of the operations involved with operating and deploying the platform. So I'm going to take a few minutes and just talk about some of that here, because I think it's applicable to some of the things that the people on this channel are doing with their users and their own work.

If you're not familiar with Agave, it's a multi-tenant platform — a Science-as-a-Service platform, we like to say — that runs in just about every cloud environment, meaning you can deploy it on-premise, we have a hosted version, and you can also mix and match: we can operate part of it, you can operate part of it, and you can scale out more or less elastically like a normal cloud application does. You can think of Agave like a Salesforce for science, except we're not really selling anything; it's really just a platform that you can pick up, use, and white-label for your own purposes. We do just four things: manage data, run code, collaborate anywhere, and connect just about anything. It's a RESTful API platform, designed from the ground up API-first, and it has a lot of flexibility if you want to add your own services and expand.

Previously — really up until about a year ago — we'd been deploying it as a more or less cloud-native application, but we were handling a lot of the details ourselves. It was true that you could deploy it entirely in any cloud. We've done POCs and production deployments with a lot of different folks on all the major cloud providers, as well as various flavors of OpenStack and self-hosted infrastructure. We collaboratively managed a few different tenants as well, and that was all fine. We had Ansible automation for everything, but there were some pain points.

We had a whole host of settings — lots of knobs you could twist and turn to tweak performance and behavior so it would fit in all these different environments — because as much as you try to generalize this stuff, everything turns into a snowflake at some point. So the Ansible automation became complicated. We had a ton of documentation written, which every reader absolutely read from start to finish, because users love doing that. But it was a rather complex configuration matrix, and there was just a lot of stuff you needed to know. If you're going to start wading into the waters of operating and maintaining distributed systems, there's a learning curve you have to go through to understand basic things like high availability. If you want to go from a single-host deployment to a high-availability setup, you're not going from one host to two — you're going from one to probably five (there's a quick sketch of the quorum math behind that below). Stuff like that just didn't grok well with people who hadn't been exposed to it before.

Then there were day-two operations. Once you had an instance up and running, scaling it was still a manual process. We had a lot of playbooks to help with it, but it was still something you had to understand and know — plus lots more configuration options. And then monitoring and logging: everyone has their own in-house solution that they're using for their data center, so people had to wire that stuff up independently, which meant we couldn't ship dashboards out of the box, because we didn't support every available solution and we didn't have that much manpower. So there was always a lot of friction, right?
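Here's that quorum sketch — a minimal illustration of the majority arithmetic that consensus stores like etcd (which Kubernetes itself sits on) use. This is the general rule, not anything specific to our stack:

```python
# Majority quorum: a cluster of n voting members stays available only
# while a strict majority of those members is reachable.

def quorum(n: int) -> int:
    """Smallest strict majority of an n-member cluster."""
    return n // 2 + 1

for n in range(1, 6):
    survivable = n - quorum(n)
    print(f"{n} member(s): quorum = {quorum(n)}, tolerates {survivable} failure(s)")

# Output:
# 1 member(s): quorum = 1, tolerates 0 failure(s)
# 2 member(s): quorum = 2, tolerates 0 failure(s)   <- two hosts buy you nothing
# 3 member(s): quorum = 2, tolerates 1 failure(s)
# 4 member(s): quorum = 3, tolerates 1 failure(s)
# 5 member(s): quorum = 3, tolerates 2 failure(s)
```

That's why the jump is from one host to three or five: even member counts add hardware without adding fault tolerance.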
It wasn't just stand it up, run it, and forget about it. It was always, hey, you probably need to think about having a quarter FTE to operate this infrastructure for your organization. Not the worst thing, but still something to be aware of. So we fought it for a long time, because we didn't want to replace one complicated distributed system with another one, but eventually the juice became worth the squeeze. So we started the process of migrating over to Kubernetes, and we spent about a year doing that.

The goals were pretty simple. We wanted to get rid of a lot of our homegrown automation. We wanted to pay down our technical debt. We wanted to leverage the things Kubernetes does really well, the things that are consistent across distributions and flavors of Kubernetes. So we wanted to switch to Helm. We wanted to make sure things would autoscale. We wanted the system to be smart enough to right-size itself — to figure out where it needed to be and how it needed to adjust. And we wanted automated upgrades, and the insights — the logging, metrics, alerting, and notifications, just the transparency of the system — to ship out of the box.

Like I said, we spent about a year doing that. Along the way we worked through six different releases of Kubernetes and three different distributions of Kubernetes itself — different versions of Minikube, MicroK8s when we were working on Ubuntu originally, and then Kubespray — pushing into a lot of the edge cases there. We were deploying across five different operating systems and four different clouds as we tried to figure out what was going to work and what was going to port. Turns out that just because it runs in one distro doesn't mean it's necessarily going to run in another; there are a lot of gotchas. This was really more of a journey than a destination. We had to learn a lot more about our application itself and how it behaves in these situations, but also about the gotchas in the individual cloud providers and OpenStack distros.

We spent the last few months getting everything working on OpenStack, specifically on the Jetstream cloud. On OpenStack generally, there were some things we just weren't looking at initially. There's a lot of variation even between releases — and I'll let you all know, if you're shipping SDKs or shipping software on top of OpenStack, the regression testing is pretty good, but there's always stuff that jumps up and gets you. So you have to stay aware of what's available and what's not — not just in the APIs and the tooling, but in the OpenStack installation itself: which services are available, how they're deployed, what the operators chose to include and what they didn't, what's GA and what's alpha. All of that varies a lot from site to site, so you really need to be aware of it. We came to understand that there was a minimum set of functionality we could count on, and for the rest we were going to have to get much, much smarter. After working on that for a good period of time, we realized it was better for us to scale back a lot of what we were doing rather than trying to take our technical debt down to zero — that just wasn't going to be possible in a portable way. So we had to roll some of our own infrastructure.
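To make the autoscaling goal concrete, here's an illustrative sketch using the official Kubernetes Python client to create a HorizontalPodAutoscaler. The deployment name, namespace, and thresholds are made up for the example, not our actual configuration:

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# Hypothetical names: an "api" Deployment in a "platform" namespace.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="api", namespace="platform"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="api"),
        min_replicas=2,                        # keep an HA floor
        max_replicas=10,                       # cap elastic growth
        target_cpu_utilization_percentage=70,  # scale when average CPU > 70%
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="platform", body=hpa)
```

The same object would usually live as YAML in a Helm chart; the point is that once the HPA exists, the cluster handles the right-sizing that used to be a manual Ansible run.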
We could still roll it on top of Kubernetes, but that has some implications. Things like load balancing, DNS, certificate management and rotation, how you deal with stateful applications like databases, your storage solutions, your monitoring of the hosts and the supporting services — all of that we really had to start looking at, taking into consideration, and providing for ourselves just to guarantee the quality of service we wanted.

Storage was also a big one, specifically on Jetstream, because they have Manila there, and Manila was not super rock-solid for us. We had to work around a lot of issues: latency, connections dropping, blowing out the number of connections we could actually have at any given time, mounts disappearing — just really fun stuff. So we reined back our expectations a little and looked at other ways — specifically, deploying a bunch of extra block storage to the hosts and then altering the architecture to work around some of the things we otherwise would have done if we'd had a distributed storage system (there's a sketch of that local-volume approach at the very end).

Other things? Oh, the lack of SSDs. If you're running Kubernetes on really anything, the services work best with the assumption that there's SSD underneath. You can do it without, but you start running into a lot of issues once you scale out and put the services under load. Even when you're load balancing properly outside of Kubernetes and taking advantage of the local application proxy inside the cluster, you can push the API servers over fairly easily if your queries back to etcd aren't quick. That was something we really had to adjust for, and it took us away from being able to do any kind of functions-as-a-service.

Also, don't use flannel. You'll like your life a whole lot better if you just completely avoid flannel as your networking plugin on OpenStack. It was horrendous for us.

That's about all I really wanted to cover here. I will say that at the end of the day the juice was worth the squeeze, but there was a whole lot of information and experience gathered along the way. So if you have any questions, feel free to ask. You can reach us at all the usual suspects, and these links will be posted in the etherpad. Thanks.
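One footnote on the Manila workaround mentioned above — a minimal sketch, again with the official Kubernetes Python client, of registering node-local block storage as a local PersistentVolume. The device path, node name, capacity, and storage class are all hypothetical:

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()

# Hypothetical: a block device mounted at /mnt/block0 on worker "node-1".
pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="data-node-1"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "500Gi"},
        access_modes=["ReadWriteOnce"],          # local volumes are single-node
        persistent_volume_reclaim_policy="Retain",
        storage_class_name="local-block",
        local=client.V1LocalVolumeSource(path="/mnt/block0"),
        # Local PVs must be pinned to the node that actually has the disk.
        node_affinity=client.V1VolumeNodeAffinity(
            required=client.V1NodeSelector(
                node_selector_terms=[client.V1NodeSelectorTerm(
                    match_expressions=[client.V1NodeSelectorRequirement(
                        key="kubernetes.io/hostname",
                        operator="In",
                        values=["node-1"])])])),
    ),
)
client.CoreV1Api().create_persistent_volume(body=pv)
```

The trade-off is exactly the one described in the talk: you give up a shared filesystem, so the application architecture has to tolerate node-pinned state.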