My name is Jason Schmidt, I'm a Solutions Architect with NGINX, and I'm here today to talk to you about a project we've been working on for about the last year that we call MARA, the Modern Apps Reference Architecture. And what is MARA? This started with a deceptively simple ask: what is a modern application, and can you help us define it? As we started working on it, we realized that a modern application is in many ways like an iceberg. The application is at the top, and that's what we focus on, but there's a lot underneath that's needed to make that application work. There's monitoring, observability, management; somewhere there's a Kubernetes installation; somewhere there's actual hardware. So MARA was born, and our one-liner is that we've created an example architecture using Kubernetes that aims to be as production-ready as possible.

So who are we, and why did we do this? I'm part of the community and alliances team, and we have two mandates: we work with the open source community, and we work with our alliance partners. In a lot of cases there is overlap. We work with Lightstep and Rancher and JFrog, and they have a foot in both camps: one foot on the open source side and one on the alliance side. As for my team specifically, we have a very diverse array of technical experience, probably one of the most technically deep teams I've worked with. I put some of these experiences up there not to impress anybody, but to talk about the different viewpoints we bring to every problem that crosses our desk. Anybody who's tried to get dev people, operations people, QA people, and managers all on the same page knows it can be like herding cats at times. So this is something we're all happy with, and there have been lots of discussions and arguments to get to this point. So why did we build this?
With all the experience we have, and given the mandate to explore what a modern app was, we wanted to leverage our experience to build a best-practice example, because we couldn't find one out there. We wanted a way to deploy Kubernetes infrastructure that did all the things we needed to support a modern app. In looking for something to fit that bill, we found a lot of anti-patterns: marketing materials masquerading as technical artifacts, toy applications that really didn't do anything, and, my personal favorite, the long demo that doesn't actually work when you try to put it on your own machine or your own system. I'm sure we've all hit that. And then, of importance to our team, we wanted to highlight open source as a core part of modern applications. Not a bolt-on, not an add-on, but a core part.

So what's in the box? What did we put in MARA? As you look at this, I don't think there are any surprises here. We run on the major cloud providers; we run anywhere we can have a kubeconfig file ingested and get persistent volume support and egress IP. We've chosen Pulumi as our infrastructure-as-code provider, with the Python language, because we felt it was easy to read and simple for us to use. Observability we're going to talk a little more about. For our application, we chose Bank of Anthos, which we forked and called Bank of Sirius. Our desire was always to add more applications to this, and with today's blog post from the OpenTelemetry community we do have something to look forward to, because we'll be working with that to make it work with MARA.

So who runs MARA? We've got some sample use cases out there. On our team, the folks that are developer-heavy just want a way to stand up their application: put it out there and have all the plumbing taken care of. I come from an operations background, so I'm interested in how we manage, monitor, deploy, and support these applications.
I also like to break things and see how we recover. What's our mean time to recovery? What failure modes do we induce when we break components? The QA engineers on the team like having a reproducible test environment for benchmarking and testing. And then the most interesting use case for me, it's labeled as startups here, but this is anybody who has a component they want to test out in a running application framework. One of our first integrations outside of what we put in MARA was with Sumo Logic. They came to us and said they wanted to plumb their solution in. We turned some things off, put theirs in, and had it running within a day. To me, that's the strength of MARA: we can plug and play different components and see how they act within the MARA framework.

So, we're at an OpenTelemetry conference. How does OTel fit into MARA now? We're not far along on this path. Right now we have an OTel Collector that consumes traces from the Bank of Sirius application, and we're using ELK for logs and Prometheus for metrics. We visualize in Grafana, and we visualize in Kibana. For operators, we use the OTel Operator, we have the Prometheus Operator out there, and ELK. So this is now; our first step was to get MARA up and actually running, and now we're at the second step, where we're starting to layer OTel in. Where do we want to be in the future? We want OTel to take care of metrics, logs, and traces. We want to figure out the best way to do that, and a way to do it where people can drop in their own solution, be that Lightstep, Jaeger, Zipkin, whatever you want, just drop it in there. And there are some question marks here because we really don't know what the future holds. We're riding along and seeing what comes out and what works. So, all that said, if we've reached this point and you're thinking, man, I want to try this out, it's very simple.
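To make the current trace pipeline concrete, here is a minimal OTel Collector configuration sketch of the shape described above: receive OTLP traces from the application, batch them, and export to a drop-in tracing backend. This is an illustration, not MARA's actual config; the endpoints and the Jaeger backend are assumptions standing in for whichever backend you choose.

```yaml
# Hypothetical OTel Collector config sketching the described pipeline.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # application sends OTLP traces here

processors:
  batch: {}                       # batch spans before export

exporters:
  jaeger:
    endpoint: jaeger-collector.observability:14250  # assumed in-cluster service
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
```

Swapping in Zipkin, Lightstep, or another backend is a matter of changing the exporter block, which is exactly the plug-and-play property the project is after.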
We have a vanity URL, nginx.com/mara. Clone the repo, and have credentials for a supported cloud or for a Kubernetes installation that meets the minimum requirements. There's a setup script you can run that will go ahead and set up all the right versions of the software we need. There's a startup script; the startup script churns for anywhere from five to 30 minutes, and at the end you're running MARA. Once you have it up, what can you do? You can log into the Bank of Sirius application. You can generate load with the Locust load generator, which has some pre-written test plans that Google came up with for Bank of Anthos. You can explore the deployment with whatever tool you want: use kubectl, use k9s, use the dashboard. You can use the managing and monitoring tools that we've built in; there are some convenience scripts to help with that. You can change component settings, put your own components in, and you can break things and see how it responds.

So what's next? We're looking at adding some better load and performance testing. We have a project in the works called Cicada Swarm that will hopefully add another layer of testing, with pass/fail for CI pipelines. We're looking at migrating to the Pulumi Automation API to get rid of the bash scripts that currently start things up and tear them down, and we're looking to have better configuration management. Of interest to this group, our goal is to track and adopt OTel enhancements as they come along, extend OTel to other products within the NGINX organization and the greater F5 organization, and provide feedback on our experience through blog posts, conference talks, and things like that.

So if we've gotten this far and you want to get involved, you can go clone the repo. You can raise issues. You can participate in discussions. We appreciate honest feedback, even if you're telling me this doesn't work, we have problems with it, or you've thought of a better way.
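A quick aside on breaking things and seeing how the system responds: the metric I keep coming back to, mean time to recovery, is simple to compute once you record when you injected each failure and when the system recovered. Here's a minimal stdlib-only Python sketch of that bookkeeping; the data and function name are hypothetical, not part of MARA.

```python
from datetime import datetime, timedelta

def mean_time_to_recovery(incidents):
    """Average (recovered_at - failed_at) over a list of incidents.

    Each incident is a (failed_at, recovered_at) pair of datetimes,
    e.g. recorded while deliberately breaking MARA components.
    """
    if not incidents:
        raise ValueError("need at least one incident")
    total = sum((up - down for down, up in incidents), timedelta())
    return total / len(incidents)

# Hypothetical chaos-test log: killed a pod at 10:00, traffic recovered at
# 10:02; broke a second component at 11:00, recovered at 11:06.
log = [
    (datetime(2022, 5, 1, 10, 0), datetime(2022, 5, 1, 10, 2)),
    (datetime(2022, 5, 1, 11, 0), datetime(2022, 5, 1, 11, 6)),
]
print(mean_time_to_recovery(log))  # average of 2 and 6 minutes -> 0:04:00
```

Tracking a number like this per failure mode, run against the same reproducible environment, is what turns "break things and see" into something you can compare across component swaps.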
Everybody on this project has a lot of humility, and we know we don't know best. PRs are always accepted, and we do have a community Slack that you can join. There is, or will soon be, a MARA-specific channel you can drop into and talk to us. And then we're at conferences like this, so stop by and see us. I'm gone tomorrow, but I have a colleague, Dave, here who's speaking later in the week. Go bother Dave; Dave can get information to me, or just join us on Slack. Above all, this is something we did for the community. NGINX graciously allows us to spend the time to make this work for the community, so we would like it to be useful. We'd like people to share the code, and a rising tide lifts all boats, as was said earlier today. And with that, I thank you for your time.