A warm hello to our guests and the viewers today. Thank you for joining us on SuperJet TV. Can you please introduce yourselves before we begin?

Sure. My name is Ryan Walner. I work for Athena Health on the DevOps engineering team, on the private cloud team.

And I'm Kevin Chinaman. I'm also with Athena Health on the DevOps team, building a private cloud.

Great. So we're here to talk about a really interesting topic, which is OpenStack on containers. Could you share some insight into why you chose to go that direction with your OpenStack cloud?

Yeah, I'll take a stab at that one. We initially started at Athena by deploying OpenStack using Mirantis Fuel, and we found some limitations around how the network was deployed. So when we started looking at other options for deploying OpenStack, Kolla came up as an interesting option because it was basically network agnostic: you can deploy your network any way you want and just put the containers on top to run OpenStack.

Yeah, and I would second that, in the sense that we as a company at Athena Health are really moving toward containerizing a lot of things, not just infrastructure but applications as well. So we have this grand vision of everything being a container, and even of running OpenStack on top of other container runtimes and orchestration systems.

Great. So for those who are watching, if someone wants to follow in your footsteps, how did you get there? Could you walk us through the high-level architecture and some of the steps you took to get to a containerized deployment with Kolla images?

Yeah, sure. I'll talk about the logical entities of the containers themselves. We started off by experimenting with Kolla.
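For context, Kolla deployments like the one described here are typically driven by kolla-ansible, configured through a `globals.yml` file. As a rough illustration only (the values below are hypothetical, not Athena Health's actual settings), a minimal configuration might look like:

```yaml
# /etc/kolla/globals.yml -- illustrative sketch, values are assumptions
kolla_base_distro: "centos"                  # base OS for the Docker images
network_interface: "eth0"                    # interface for API/management traffic
neutron_external_interface: "eth1"           # interface for external/provider traffic
kolla_internal_vip_address: "10.10.10.254"   # VIP fronted by HAProxy
enable_cinder: "yes"                         # enable block storage services
```

The "network agnostic" point from the interview shows up here: Kolla only asks which interfaces to bind to, and leaves how those interfaces get their connectivity entirely to the operator.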
We were both new to the project, so we really wanted to get a sense of how it's used to deploy, what the containers look like, what the Docker images are like, and how we might want to modify it. We run OpenStack on a bunch of Dell servers on bare metal, and we already use Ansible extensively in our automation for getting those servers ready for OpenStack, so Ansible was a really good fit when we chose Kolla. After testing with Kolla for a while, we switched to deploying it on bare metal. We had a few different network architectures along the way; I think the first one was already there by the time I joined Athena Health, so maybe you could talk a little about that.

About the network architecture? Yeah. First we started with plain old Layer 2 VLAN networking, and that was great to start with. Then, when we wanted to scale up, we needed to move to more of a spine-and-leaf fabric, and most stock deployers just don't support a spine-and-leaf network with routing on the host. That's where we're working on using containers to do it. Kolla gave us a lot of flexibility. So it's pretty much bare metal and containers running OpenStack, with all our developers' workloads running on OpenStack.

That's great. It sounds like networking was a big part of the decision to go this route. What were some of the challenges? They could be operational, or just learning curve. What were the growing pains of moving in the direction you moved?

Yeah, I can talk a little about the challenges; I think those are some of the more interesting stories. We moved from a flat VLAN network to the spine-and-leaf architecture, as Kevin said, actually in the last few months. We wanted to move all the routing to the host, which means putting the fabric addresses on the loopback interfaces.
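"Routing to the host" with fabric addresses on the loopback is commonly done by anchoring a /32 on `lo` and advertising it to the leaf switches over BGP. The transcript doesn't say which routing stack Athena Health uses, so the following is only a sketch of one common pattern (FRR-style BGP unnumbered; all interface names, addresses, and ASNs are made up):

```shell
# Anchor the host's fabric address on the loopback (illustrative address)
ip addr add 10.1.0.11/32 dev lo

# FRR-style config advertising that /32 over both leaf uplinks.
# Purely a sketch of the pattern, not the deployment described here.
cat >> /etc/frr/frr.conf <<'EOF'
router bgp 65101
 neighbor eth0 interface remote-as external
 neighbor eth1 interface remote-as external
 address-family ipv4 unicast
  network 10.1.0.11/32
 exit-address-family
EOF
```

The payoff of this design is that the host's identity is the loopback address, which stays up regardless of which physical uplink fails; the cost, as the next part of the conversation covers, is that tooling which assumes addresses live on physical interfaces can get confused.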
Kolla gave us a lot of flexibility there, but it was also opinionated in places: the way its Ansible reads IP addresses out of Ansible facts didn't necessarily work with the way we put IP addresses on the loopback. So there were a number of challenges just getting the fabric to work and deploying on it with containers, and then another challenge on top of that: making Kolla work with this deployment. Once we figured out how it worked, though, it was actually quite nice; Kolla was flexible.

Yeah, I would echo that too. There was the technical side, but there was also the process side. Coming from a very siloed company, we still have a whole networking team dedicated to the network. Helping to bring them on board, to make sure their vision and our vision are both realized in the network and the infrastructure, has been really helpful, and also a challenge, because they do things differently than we would. But we learn from each other, and hopefully we can grow and become one infrastructure body.

So you've shared a lot of technical detail about the direction you're moving in. When you were starting out, how did you learn? Did you use documentation? How did you build up your knowledge of what you were consuming? I'm curious.

Yeah, so as for how we consumed and learned about these technologies, we did have some background in OpenStack and some background in Docker.
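The facts-versus-loopback friction described here can be made concrete. Ansible's network facts report one primary address per interface, and kolla-ansible's templates (simplified below, not the exact source) resolve a service's bind address from those facts, roughly like this:

```yaml
# Simplified sketch of how an interface name becomes a bind address
# via Ansible facts in kolla-ansible -- not the literal source code:
api_interface: "eth0"
api_interface_address: >-
  {{ hostvars[inventory_hostname]['ansible_' ~ api_interface]['ipv4']['address'] }}

# With routing on the host, the fabric address sits on the loopback as a
# secondary address, so a naive primary-address lookup misfires:
#   ansible_lo.ipv4.address                  -> "127.0.0.1"
#   ansible_lo.ipv4_secondaries[0].address   -> the fabric address
```

This is the sense in which the fact-driven approach is "opinionated": it assumes the address you want is the primary address of a physical interface, which a loopback-anchored fabric design deliberately violates.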
And I think the community really helped. We jumped on IRC to get in touch with the Kolla community, with simple questions and also harder questions about how the containers did certain things. Everything being open source helps, but having people in the community who had already done this helped a lot. Those resources, at least personally, helped me a lot in the journey, I guess.

Yeah, and the documentation. The sheer amount of documentation around the Kolla project helped a lot, because if we had had to dig into the code to figure all that out, it would have taken us much longer than the documentation did. And there have been a number of instances where we came up against an issue and could contribute back to Kolla, or where we were tempted to fork over an issue, but being part of the community gives us that sense of: no, let's definitely not do that; let's contribute back whenever we can. We've really appreciated that sense of giving back to the community.

Yeah, totally agree on that one.

Awesome. So in closing, what's next?

Yeah, what's next? That's a big question. Our OpenStack deployments have been expanding a lot at Athena Health, so we're really in that initial phase of making sure things can scale, and making sure we have a good upgrade plan and day-two operations for OpenStack. And we fully realize that Kolla is the way forward; we really like working with it. It's been really helpful for expanding single compute nodes, single Ceph nodes, Cinder nodes. So expansion is number one. And in the future we have container orchestration platforms like DC/OS, where the Kolla community has already been working toward deploying OpenStack on Kubernetes.
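The node-by-node expansion mentioned here typically comes down to adding the host to the kolla-ansible inventory and re-running the deployment limited to that host. A hedged sketch (the hostname is hypothetical, and exact workflow details vary by Kolla release):

```shell
# After adding the new node under [compute] in the multinode inventory,
# prep only that host (Docker, users, etc.), then deploy containers to it:
kolla-ansible -i multinode bootstrap-servers --limit compute-042
kolla-ansible -i multinode deploy --limit compute-042
```

Because each service is just a set of containers, scaling out is mostly a matter of inventory membership rather than hand-configuring the new machine.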
And we'd like a very similar thing for DC/OS. Right. Yeah, and then on top of the container orchestration, it's applying DevOps and CI/CD processes to our infrastructure. We've talked about that a lot, and it's definitely on our roadmap for the very near future. We want to get into testing our infrastructure as code, to make sure we're writing the code right and it's doing the right things, and testing deployments and upgrades and all that good stuff. And going multi-region too, across multiple data centers, rather than just spanning one data center.

Great. Well, thank you so much.

You're welcome. Thanks for having us.