Hi everybody. I'm Krunos Lapalić, and I work here at Red Hat; I've been here for almost a year now. I'm originally from Croatia, where I worked as a Python developer, but I really wanted to get into continuous integration and delivery, so I came here to work. Since then I've been working with a lot of tools and libraries that help us in our work with CI, but one that really stood out, that's really helped me a lot, and that I've been using pretty much daily, is the Continuous Infra environment setup, or contra-env-setup for short. Since I've been using it a lot, and contributing as much as I could along the way, and since you've already seen it in action with Ari, I thought I'd put together a nice little presentation to introduce it and show off what it can do. So what is contra-env-setup? To answer that, I'll first go back to why it was made. We made it to replicate existing OpenShift instances so we could have them in a local environment: run them, experiment with them, redeploy them however we want, and try out different components. And since we made it as generic and flexible as possible, we're now able to support a plethora of different environments and usages. In short, it's a generic way to automate the setup of a CI/CD environment. It can be deployed on a Minishift instance, which was the original intent, having your OpenShift locally, or on an existing OpenShift instance. Also, with the newer versions, we can deploy it on an external VM, be it AWS or OpenStack or whatever, and have our little OpenShift cluster up there. It's obviously used for setting up CI/CD environments.
The key point to take from this is that we're deploying infrastructure as code via S2I templates on OpenShift or Minishift, and also deploying Jenkins CI pipelines from pipeline-as-code solutions such as the HDSL, which I already talked about, or deploying shared libraries, Jenkinsfiles, and Jenkins templates, so we can set up our jobs directly from code. As you can see, everything is meant to be in code, so we can test it, improve it, version it, work on it incrementally, and keep oversight of it. Now, since contra-env-setup can be set up in a variety of ways, its users can be quite varied as well. It ranges from people who are just starting with CI/CD pipelines and just want to see what it's all about; it's great for them because there are already working examples, so you just run it and you have a nice little environment you can play around with and use as a basis for your next steps. Then there are developers who want to develop their CI environments on their local machines, or on something external so they can share it with their colleagues and ping-pong ideas between them. And there are teams that want to automate the setup of their testing, staging, or production environments, whatever they need. So it can provide all of those benefits across the board. Now, contra-env-setup is, in its essence, an Ansible playbook; more precisely, it has a central playbook that includes a variety of roles. Since it's so modular, we can use those roles in many different ways, including some and skipping others, as you saw with Ari: he, for example, didn't set up the prerequisites because he already had them, but he did set up Minishift, the pipelines, and so on.
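To give a flavor of what "pipelines from code" means here, this is a purely illustrative YAML sketch of a pipeline-as-code definition. To be clear, every key name below is made up for this example; it is not the actual contra HDSL schema, just the general shape such YAML files tend to have:

```yaml
# Illustrative only: NOT the real contra HDSL schema.
# Shows the general idea of describing a Jenkins pipeline as YAML.
pipeline:
  name: sample-project-ci
  stages:
    - name: build
      steps:
        - shell: make build
    - name: test
      steps:
        - shell: make test
  notifications:
    on_failure: email
```

The point is that a definition like this lives in the repository next to the code it tests, so it gets reviewed, versioned, and improved the same way.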
Also, one nice thing about it is that we can include our own playbooks at the end via hooks, which means we can add our own playbooks for, say, checking that everything is up, adding our own things once everything is already running, running some jobs, or just collecting some data if we need it. Now, the main point of contra-env-setup is to set up a local or external OpenShift instance with all the infrastructure we need. For that, we utilize OpenShift S2I (source-to-image) templates to create resources such as services, routes, images of course, deployments, build configs, and so on. The added flexibility we bring to the table is the ability to customize those S2I templates, so you can use your templates in a variety of ways for different purposes. For example, with Jinja2 templating you can inject the Ansible variables you define at the beginning, giving you a different version of your environment for different uses with minimal effort and minimal change; there's no need for duplication. The final product, in our case and in the general usage we tend to follow, is a Jenkins instance created from those S2I templates. That's not a must, as pretty much nothing here is, but it's the general usage and it tends to work well that way. So we're able to deploy Jenkins 2.0 pipelines from code, whichever way you choose. We support the contra high-level DSL (HDSL) templates, which are very easy-to-configure YAML files, as Ari already mentioned. Now, here is a general overview of the components that actually get run. As I mentioned, you can skip any of these; these are the three general usage cases that we tend to see the most. The first one is local: you set it all up on your local machine and run it from there.
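To make the template-customization idea concrete, here is a sketch of an OpenShift template fragment with Jinja2 expressions in it. Note that `jenkins_image_tag` and `jenkins_memory` are hypothetical Ansible variable names I'm using for illustration, not the project's actual ones:

```yaml
# Hypothetical sketch: the Jinja2 expressions are filled in from Ansible
# variables before the template is handed to OpenShift for processing.
apiVersion: v1
kind: Template
metadata:
  name: jenkins-instance
objects:
  - apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: jenkins
    spec:
      template:
        spec:
          containers:
            - name: jenkins
              image: "jenkins:{{ jenkins_image_tag | default('lts') }}"
              resources:
                limits:
                  memory: "{{ jenkins_memory | default('2Gi') }}"
```

The same template can then produce a small throwaway Jenkins or a beefier one simply by overriding a variable at run time (for example `-e jenkins_memory=8Gi` on the ansible-playbook command line), with no duplicated template files.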
The second one is when you have an existing VM, on OpenStack, AWS, or even a local VM if you want: you prepare it, deploy the OpenShift templates, and set up the pipelines. The third, with an existing OpenShift cluster, is of course the easiest one: you just use the existing cluster and go from there with the templates and the pipelines. Now, I'll go into a bit more detail on the first example, since it pretty much covers everything. When we're deploying to the local machine, we set up the prerequisites, which means enabling nested virtualization, which is key for us to be able to run Minishift at all, and installing libvirt if needed along with the other required packages. After that, we set up the Minishift VM. That VM can be customized to your needs: you can set your memory size and so on, you can have different profiles, and you can run many Minishift instances at the same time if you need to. After that, we process the OpenShift templates: modify them if you need custom variables injected, prepare them, verify them of course, and then start deploying them. You start building images and preparing the deployments, the build configs, and everything else. After that, and this part is optional, you set up the Jenkins CI pipelines, either HDSL or regular pipelines, and include the shared libraries if you need them. Now, most of you already saw this in our presentation, but I wanted to illustrate that contra-env-setup is not alone; it's part of an ecosystem. We have, of course, contra-env-setup itself. Then, in blue, we have the contra-env-setup sample project, which is there for beginners, or for people who want to see how the best practices or the HDSL are used; it's all there, the shared libraries and everything. Then we have the infrastructure repository, which has expanded a lot; it can be used or skipped as needed, and you can run it alongside your own project, the yellow one.
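The local flow just described (prerequisites, Minishift VM, template processing, optional pipelines) can be sketched as an Ansible playbook with conditionally included roles. The role and variable names below are my own illustrations, not contra-env-setup's actual ones:

```yaml
# Illustrative sketch only: role names and flag variables are hypothetical,
# showing how a modular playbook lets you skip any stage.
- hosts: localhost
  vars:
    setup_prereqs: true      # nested virtualization, libvirt, packages
    setup_minishift: true    # create and start the Minishift VM
    setup_templates: true    # process and deploy OpenShift templates
    setup_pipelines: true    # optional Jenkins pipelines (HDSL or plain)
    minishift_memory: 8GB
  roles:
    - { role: prereqs,   when: setup_prereqs | bool }
    - { role: minishift, when: setup_minishift | bool }
    - { role: templates, when: setup_templates | bool }
    - { role: pipelines, when: setup_pipelines | bool }
```

This is how a run like Ari's works: overriding a flag on the command line (for example `-e setup_prereqs=false`) skips a role entirely while the rest of the playbook proceeds as usual.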
You can use all of those templates and containers as part of your project and modify them to your needs; it's very customizable and flexible. Now, the infrastructure repository by default provides a couple of different templates. The main one is for the Jenkins masters and slaves, which come pre-packaged with a lot of the plugins we use, the libraries, and the best practices. We also have the helper containers for the HDSL; the HDSL expects those containers to be spun up when we're running pipelines based on it. Those are Ansible and LinchPin. Ansible is pretty much obvious, but LinchPin is an in-house-developed tool that allows us to provision resources pretty much anywhere, from Beaker to AWS or whatever you need. There are also the metrics and reporting containers: InfluxDB, which collects the metrics, and Grafana, which displays them. And of course there are container tools with Podman and Buildah already there, so you can develop your own containers much more easily. Now, the contra-env-setup sample project is not there to actually run your pipelines, but it's very nice for seeing how it's done. Along with the HDSL, which we already talked about, there are examples of using the shared libraries that we also use, which are a kind of middle ground between pure Groovy and Jenkinsfiles, which can run to a lot of lines of code. It means we provide libraries and functionality with the best practices already baked in, so you don't have to worry about them, you use far fewer lines in your code, and you don't have to repeat yourself so often. Now, since all of this is written in code, the next logical step is to set up CI/CD for all of it, of course.
Here we have a sort of nested thing, because we actually use contra-env-setup to test all of these repositories, including the contra-env-setup repository itself. One nice thing about contra-env-setup is that it can be run from within a specialized container. That's not meant for general usage, but for testing the tool itself it's kind of neat: we're able to run it inside a container, set up a Minishift VM inside of it, and nest from there. Another nice thing is that we can use an external OpenStack or AWS VM, run contra-env-setup there, and run all of our tests in a clean environment; then if, for example, something fails, we can just leave the environment there for everybody to see. You send a notification, and the whole environment is there to check out; it's not just logs, it's everything. Now, for future developments, we'll pretty much be following all the best practices and new developments in infrastructure as code and, of course, pipeline as code. We expect to have more advanced examples of the HDSL and the shared libraries as they develop further and as everything changes; this is all pretty much brand-new stuff. And of course, as Knative components and operators come into play, we also have to keep that in mind and support them properly in contra-env-setup. In conclusion, contra-env-setup is a tool that allows us to repeatedly and reliably set up environments just the way we want, as many times as we want, pretty much anywhere we want.
Well, the thing about working with continuous infrastructure and continuous integration and delivery is that you tend to communicate, meet, and work with whole technological ecosystems and all kinds of different methodologies. And while it's really rewarding and fulfilling to be able to contribute and help developers with their projects, it can also be a really daunting and time-consuming task. I believe that a tool that allows you to automate part of that process, and not worry so much about the infrastructure and everything around it, is a tool that gives you the freedom to use your skills where they're best utilized and to use your creativity to help people. And I believe that contra-env-setup is just that kind of tool. So that's pretty much it from me, if anybody has any questions. [In response to an audience question:] With contra-env-setup, well, that's the nice part: you can just run the already-preset environment, see how it all works together, see how it maps to your needs, and modify it from there, creating your own project that includes or excludes whatever you want, or has your own custom containers that are either based on ours or entirely your own. You have full freedom with that. Thank you very much.