Good morning everyone, welcome back to the OpenShift TV coffee break, our bi-weekly show, every other Wednesday at 10 a.m. CEST. My name is Nathalie Bidam, product marketing manager for OpenShift, and today I'm with my super co-hosts and co-presenters, Jafar and Tero. And we have two super guests today, talking about a look back at the history of OpenShift 3. That looks very interesting. So Jafar, Tero, do you want to introduce yourselves and then introduce our guests of today?

Yeah, sure. Hi everyone, and thanks again for joining us. I'm Jafar Shrivi and I work as a tech marketing manager for OpenShift. Prior to that, I played for several years with OpenShift 3, and even OpenShift 2 before it. So this is going to bring back a lot of memories, I think. Tero, thanks.

Good morning everyone, Tero Ahonen, working in the OpenShift specialist team. Same here — I worked a lot with OpenShift 3, and I kind of hoped the day would never come when I'd have to talk about OpenShift 3 again. But at the time it was a good product, and yeah, there are differences between what it was and what OpenShift is now. Have a nice show.

Yeah, so please go ahead, Alessandro and Matteo. Do you want to introduce yourselves? Hi everybody, this is Alessandro. I'm part of the EMEA Telco solution team, so I work in pre-sales for the Italian Telco customers. Go ahead, Matteo. Yes, thank you Alex. First of all I would like to say thank you, Nathalie, for having me here. I'm a pre-sales guy, as Alessandro said, I'm based in Italy and I work with the cloud infrastructure products for Red Hat here in Rome. So please, go ahead.

Okay, so as Tero, Nathalie, Jafar and the others mentioned, we will talk about OpenShift 3. We decided to talk about OpenShift 3 again because sometimes it's good to look back, and also to take a look at the differences with this product, which is almost five or six years old now. Matteo and I started working on it around 2015 or 2016 for a customer here in Milan, a broadband company. Looking back also gives you the chance to see what kind of features were there five or six years ago, compared to the features that we have today. We will see — I prepared a small demo, a small environment running OpenShift 3 on my laptop. But as I said, looking back at version 3.0 or 3.1 is amazing, because we already had a lot of features in those early releases that can be compared with OpenShift 4.

And as I said, five or six years ago Matteo and I were already part of Red Hat — maybe some of the participants of this call were too — but in a different role: we were part of the global professional services organization as cloud consultants. And I also joined around five or six years ago — I was also a cloud consultant. I think they interviewed me, no? Yeah, I think so, I think so. I made that mistake a long time ago. What? (laughter)

Yeah, and actually, as I said, we usually worked with customers running Red Hat Enterprise Linux boxes. There were also some early projects around OpenStack, for example, but OpenShift was pretty new. There were some installations of version 2, but version 3 was really recent at that time. So we started working, as I said, with this broadband company that was about to launch streaming services — instead of the fixed line and cable, they wanted to offer these new streaming TV services.
And they needed a platform that would let developers put products and applications into production very quickly. That's why they started looking at containers. They first started experimenting with Docker and containers at the time, and then ended up testing OpenShift version 3 as well. And Matteo and I helped this customer install this version. If I'm not wrong, it was version 3.0 or 3.1, which had just been released — or rather, we had this small upgrade from version 3.0 to 3.1. But it was fun, because we had to look at the solution part. We had some experience with Docker at the time, five or six years ago — the smarter sysadmins had some experience with containers — but we had no clue about Kubernetes, about this product, you know.

You're right, you're right. At the time, Kubernetes and OpenShift version 3.0 were a breakthrough. If you remember, we came from a product that was pretty similar in its aim, in its goal, which was OpenShift version 2.0. But that technology was based on other kinds of components. So with this new version we made a big change, and if you remember, it was really challenging to understand all these new perspectives — how this technology brings all the things together in order to put services online.

Since we are going back into the history: I remember OpenShift 2.0 was using a cartridge concept, so it was its own implementation of a platform as a service, right? And then came Kubernetes. And Matteo and Alessandro were among the first ones going into production with that. It's a very cool story, I look forward to hearing it from you. Raise your hands: who has used OpenShift 2.0 in production? Hi there, Alex. Not many of those, I know for sure. Yeah, but I know developers that actually work in engineering — I want to say hi to John Franco, I don't know if he's watching us. Wait, let Tero finish. So, OpenShift 2.0: one of our biggest OpenShift customers in France actually started on OpenShift 2.0, and they kept it for a while, even after OpenShift 3.0 was released. So although it was a completely different product, it was delivering the value of this notion of PaaS and those things. So I'm happy that we changed a lot of things. That's good.

Let me share my screen, so we come to the demo, actually. I don't want to give away details of this web console yet; first of all, let's start with the installation. As I said, I prepared a small installation of OpenShift 3 on my laptop. Actually, maybe you lost my video when I enabled it. I prepared on my laptop the OpenShift 3.0 installation, basically on two virtual machines. I created these two virtual machines with Vagrant, so it would be easy to replicate and to, let's say, re-enable or restart the services when needed. But just to give you an example: I started with two virtual machines running RHEL — actually a pretty recent RHEL version, RHEL 7.6. And I have to say that the installation went really smooth, apart from some errors that I'll tell you about in a moment. As I said, I just went through the documentation. You are seeing the documentation for version 3.1, but in 3.0 it was the same.
And I followed the instructions reported there: for example, registering the virtual machines to the Red Hat portal, attaching the right pool ID, enabling the right channels, and of course installing all the prerequisites for running the Ansible playbook. Because — I don't know if there is someone in the audience who only knows OpenShift 4 — in version 3 we had Ansible handling the installation part, because we had standard RHEL underneath, and then we needed some automation for installing all the packages.

And Alessandro, if I remember well, we came up with this Ansible installer because at the time we had just acquired Ansible, and if I remember well, this was one of the first installers that we implemented in Ansible. In fact, before that we had a lot of experience with Puppet. I mean, I myself had a lot of experience with Puppet, but I was pretty new to Ansible, so it was really confusing for me to debug this kind of installer. Because when you hit an error — we worked as consultants, so the customer expected us to solve the issue we encountered — it was really hard to get into the product. But in the end, we did it. And as I said, the prerequisites were about having in place all the requirements for installing the product. As you can see, at the end, in the final part, we also installed Docker. And I remember pretty well that we later hit this kind of warning, because Docker kept updating, and if you use a previous version of OpenShift you can of course hit some issues when you move only the Docker engine forward. For instance, looking at the changes that we made to this product, we no longer use Docker as the engine, no? Is that right? Yeah, yeah. Correct, correct. I remember pretty well. Yeah, go ahead.

Yeah, and I think it would be interesting at some point to tell the story about how things evolved. Docker at the time was a pretty good breakthrough — wrapping the containers technology in a very user-friendly experience was great. But then everybody started getting really enthusiastic about it, and Docker started pushing releases almost weekly or bi-weekly, I don't remember at what pace. And then vendors who were relying on Docker wanted to have maybe some more enterprise-grade supportability. So yeah, if we can maybe talk about that and the way CRI-O was created, these things might be interesting.

Yeah. As I said, looking back at that time — and I'll switch to my terminal; actually, I'm now on the master, but maybe let's start with the initial part. I had, as I said, two virtual machines running: one is the master for OpenShift and the other is the worker. If I jump on the master and look at the services installed, I can see that there is a Docker daemon running, as the documentation says. And that's right: Docker at that time was a really nice technology and also created a lot of, let's say, hype around the containers world. But I remember pretty well that one of the common issues we had with that first customer is that maybe the Docker storage got full, maybe the Docker storage failed, and then you have, on one side, the... When you say storage, Matteo cries. As soon as you say storage, he starts to cry. You're right, you're right, Matteo. At the time, there weren't things like garbage collection or pruning for the images — this version didn't include that feature. So the problem that Alessandro described happened. And that's why the product evolved.
Right now we take these things for granted because we have them in our new version, but at the time that wasn't the case. And let me say, also having two daemons working in parallel with each other could be a real mess. Because in the end you had — at the time it was the atomic-openshift-node service — the service running for OpenShift on the worker nodes, and then you had the Docker daemon running. And if the Docker daemon failed for some reason, the atomic-openshift-node service kept running and kept trying to contact the Docker daemon. And it was really difficult to debug, because when we started, we all pretty much knew Docker and how the Docker daemon works, but the Kubernetes stuff was pretty new. So we also had to understand how the services communicate internally and what the issues were.

But not to get into too much detail about Docker, I want to show you the Ansible hosts file. Because, as I said, the installation part at the end requires you to fill in and edit an Ansible hosts file providing all the details that you need for, let's say, your installation. And as you see in my terminal, we define two groups, one for masters and one for nodes, and then some Ansible variables that contain the SSH user to use for installing all the stuff, and whether Ansible should use sudo. And then the type of deployment, because at the time we also released it in open source — what today is OKD; at the time it was named Origin, the OpenShift Origin project. And finally, some configuration on top of that. I mean, this Ansible installer was really, really flexible, and this was, in my opinion, one of the best features that we had at the time. However, during the life cycle of a cluster, this approach might lead to what can be called configuration drift. And if I compare it with what we are doing now with version 4, we completely re-engineered the installation part, and this allows us to have better handling of the whole cluster life cycle. So what I was showing you, what we were able to do with version 3, was really extensible and flexible in terms of how we could deploy the architecture, but the life cycle management of the cluster is something that I think we are addressing better now.

Sorry, I thought I was hearing some noise from you. I think we're done with the noise. Yes, I hear you as well. Okay, actually, can we talk about the inventory file that you mentioned? I remember that whenever there was a new release, once we got an inventory file that worked with that release, it was like a crown jewel. We never lost it — we saved it and sent it to everyone so that we had a working inventory file. That was always nice, because there was always something that changed in the inventory file between the different versions. And there was a lot of stuff that was not documented, but there were variables for it. So it was its own research and development effort to go through the code and check: can I modify this value, or is there some variable for this? So it was good, but it had its problems. Yeah, writing the inventory file became a skill set of its own — you had to become a master of the inventory, and then your colleagues would ask you, as the inventory guru. Actually, a good illustration of that is that we had the OpenShift administration training, and what you handed in as the final test was the inventory file. Yeah, exactly.
So you handed in the inventory file, and if it was correct, you passed the course. Exactly. And hopefully now it's all automated and the operators do that for us, right? Yeah, I think that is a major change. It is a way forward and a real enhancement in how to handle the complexity that keeps increasing as the technology grows. Nowadays OpenShift handles many, many things compared to what it was able to do in version 3, especially in version 3.0. So this complexity has to be handled, and the operators and the way we install it today are a great enhancement for that.

Yeah, also because at that time — can you hear me, right? Yes. Yeah, yeah. Sometimes we hear some noise in the background. Oh, remember that I am running two virtual machines on my laptop and they are pretty big, so maybe that's the fan of my laptop. But anyway, just to give you an example: at that time, as I said, we started experimenting with this product with this customer here in Italy, and we also started editing, for example, the configuration files directly in the OpenShift configuration directory. For example, on the master node there is this config file called master-config.yaml that contains all the stuff needed for the OpenShift master services to start. And again, when you configured something at the time in Ansible — I mean in the Ansible inventory — it was reflected in this file. But here's the fun part: when you edit something manually — because, for example, I added this metricsPublicURL, which was for the first version of the Hawkular metrics service, and I added it manually — then if you forget to update your Ansible file, Ansible just overwrites the configuration file for you, and you lose all the edits you did manually. You know? And at that time, it was pretty frequent to work directly on the machines with the configuration files, also for troubleshooting and, as I said, for testing.

On the other hand, another issue that I hit — and maybe here I can show you a diff between the Ansible hosts files that I prepared for OpenShift. Let me go back to my directory. If I show you the difference between the Ansible hosts file for version 3.1 and the Ansible hosts file for OpenShift 3.0, you will soon see that there is a change in the variables. And at that time — and also this time — I didn't read the release notes, I didn't read the changelog between the two versions. So I started working with OpenShift 3.0 and then realized that some of the shiny features of OpenShift, like the terminal, the logs, the metrics, were not present in 3.0. So I moved to 3.1. And the first thing I thought was to take the Ansible hosts file, bring it to my OpenShift 3.1 installation and start the installation. Unfortunately, I didn't read the changelog, so I didn't realize that we had changed the value for this deployment type. And so the installation kept failing and failing, because, as you can see, we first had these two variables, product_type and deployment_type, in version 3.0, and then in 3.1 it changed to just a single deployment_type joining "openshift" and "enterprise". And this of course cost me a lot of hours of work to understand what the issue was. Because, as I said, I didn't read the instructions — this is my fault, I didn't read the documentation.
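For anyone who never saw one of those inventories, here is a minimal sketch of roughly what the hosts file looked like in that era, including the variable rename Alessandro just ran into. Hostnames and values are illustrative, and exact group and variable names shifted between openshift-ansible releases — which was exactly the problem being described:

```ini
[OSEv3:children]
masters
nodes

[OSEv3:vars]
# SSH user the installer logs in as, and whether it should use sudo
ansible_ssh_user=root
ansible_sudo=true
# In 3.0 the product was identified by two variables:
#   product_type=openshift
#   deployment_type=enterprise
# From 3.1 onwards they were merged into a single value:
deployment_type=openshift-enterprise   # "origin" for the open source project

[masters]
master.example.com

[nodes]
master.example.com
node1.example.com
```

Getting a file like this exactly right for a given release was the "crown jewel" skill the hosts joke about above.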
But it's also pretty common with this Ansible playbook to forget about some stuff, forget to read or miss some pieces of the documentation, and then let the mess happen. Let me make a joke here, Alessandro: this is like when one hour of debugging can save you five minutes of reading the documentation. It happens often when you do consulting, because you are always there trying to fix things up, while the solution is quite often in the documentation. Yeah, that's why, you know, Tero, if you recall, that's why Tero and we, as a tiger team, started the STC project: just to collect all the hints — like Alessandro said, you have to recall this variable, you have to go to that one. So we built up a kind of validation or prep script, an Ansible script, to prepare the right Ansible file. Actually, I can share it in the chat, because it still works for OpenShift 4, but if you go back into the history, it also worked for OpenShift 3 — if you plan to install OpenShift 3 for some reason. Yeah, Nathalie, but you and Tero were the lucky ones, because you got to install on an empty environment. At that time, Matteo and I were working on a live environment with all the workloads of the customer — a pretty different scenario.

Yeah, but back to the Ansible part: it is powerful, and if there was a bug you could just change the Ansible files, the playbooks, on the host; or if you needed to debug, you could run Ansible to modify the hosts or do a docker pull on all the hosts to test that Docker was working. So it was really powerful, but still, as Alessandro said, if you changed something on the master, you needed to change the same thing in the Ansible hosts file. That is totally the same now, but at a different layer. Now you do GitOps, so you don't go there to change your deployments — you have to go to Git, the source of truth, and modify it there. So it's the same thing, just a couple of layers up. And moreover, the underlying infrastructure makes sure that what you define is applied in a continuous way with GitOps, which is not something that was doable at the time. So even here, we can see how things evolved.

Yeah, and I remember one of the big things we had was: if I wanted to install something post-installation, how can I trigger just that and not rerun the whole Ansible playbook, because I forgot to put in some variable for the metrics or whatever? And I know that we started dissecting the installation playbooks into more granular things — that's something that happened a bit later. So you had playbooks that you could run just to configure logging, or a playbook to configure just the metrics, et cetera. But you still had to run the playbooks, and you had to run them in a specific order. And now with operators you don't care, because the operators are going to continuously look for any changes you make in the Git repo, as Tero said, and they happen in whatever order they happen. There's no "do this playbook — oh no, I forgot to run this other playbook first." So I think it's much easier for everyone today. Absolutely, absolutely. At the time, the idempotency of the playbooks — of the installer playbooks — was a thing. And I think even from an engineering perspective, from the work that our engineering team has done, it was really hard to maintain idempotency for all these kinds of automations. Now, as you say, with operators it is much easier.
And of course with CoreOS, which is designed to be idempotent as well — immutable as well. Yeah, exactly. And even if you think about the way you install the components: now you just go click, click, and you choose things from the marketplace, even the core OpenShift operators, whereas before you had to go into the playbooks and try to understand why something wasn't working. Now it's, I think, a much more streamlined experience. Exactly. And the operators also capture the logic that enables you to update a complex piece of software made of many components. In the past, if you did an update for Elasticsearch — this is an example — maybe you changed the backend system, and that would trigger some stuff that had to be performed manually. Now everything is in the operator and it's handled by it. So even here. Yep.

So it's a good time to be in now. It was good back then — if you look at the time, it was amazing, it was really great — but as technology evolves, you take on new challenges, and these new challenges can be tackled by a new approach. And with GitOps, I think, and with the operators and CoreOS, we made a big step forward. Yeah, I agree. And also, Alessandro mentioned it: with CoreOS, the risk of compromising the operating system is really reduced. Since we are sharing stories here, I want to share one of mine from when I was installing OpenShift 3. What happened is, I think, that the customer had locked /etc/resolv.conf — that file was locked down because that was their security policy. But the installation wasn't working, and after two hours of debugging we understood that was the problem. And Ansible also sometimes failed because the operating system had been modified. With CoreOS, this risk of unexpected changes is reduced, if not totally eliminated. So a big advantage is not only the operator approach, but also the operating system: RHEL CoreOS is a big help in reducing the attack surface, but also in reducing the risk of unexpected changes.

I completely agree. And if you remember, there was Atomic Host at that time trying to do that job. And also on the operator side: we didn't have operators yet, but we tried with Ansible Playbook Bundles, for example — these were Ansible containers running some playbook inside a container inside OpenShift. So there was a lot of stuff that of course evolved into what we are seeing now in OpenShift 4. And operators are actually supported after 3.9, so you can use operators there. But adding to Nathalie's battle story: at the time, there were some companies that were already using automation, and it was fun to try to install OpenShift and then have SaltStack or Puppet in a race condition, because they tried to change back the settings that Ansible had changed. So there was constant change and it just never worked. And then the customer mentions, "yeah, yeah, we automatically disable IPv4 forwarding" or "we modify /etc/hosts". So everything was like: okay, and now you're telling me — I've been battling for five days to get this running. I agree with this, exactly. And speaking again about one of the advantages of CoreOS: if you think about what our engineering team can do now with that technology, it is about putting in place an entire continuous integration flow to test each new release and each component.
While at the time it wasn't possible, let me put it that way, to try every single specific configuration that a customer would put in place. Right now, one of the benefits is making sure that what comes out of our engineering, out of our BU, out of our products and technologies organization, is something that is really tested end to end.

Yeah. And Alessandro, do you have another demo to show? I see an OpenShift 3 app here. Yes. So, show us the beautiful user interface. Yeah — but just before giving you the shiny interface, I want to show you that I'm on the master node, and these are the two nodes: one is the master, of course, and the second is the worker node. So, can you run a version check? What is the Kubernetes version underneath? Yeah, we have OpenShift version 3.1.1 and Kubernetes version 1.1 at the time. But it was fun — I actually tried to search the release notes, and the Kubernetes version is not mentioned, because it wasn't relevant at the time. It was just there and it was working; nobody cared about the Kubernetes version. And you're anticipating something that I want to show, because I managed to find the documentation through the Web Archive, pointing to 2016, you know? So this was the Getting Started page of Kubernetes, and this was the documentation at the time — very simple and basic. Here, for example, I also managed to find the API definitions that we had at the time: there is a pod, there is a service, endpoint, node, event, limit range, secret. And you can see there is something missing here. For example, I cannot find config maps — there were no config maps. Yeah, ConfigMaps came later. And RoleBindings. Yeah, yeah. On the Kubernetes side, we don't have deployments yet, but on the OpenShift side, yes, we have deployments — deployment configs — for example.

One thing that we already had with OpenShift 3.0 at the time was the ingress automation: what is now called Ingress in Kubernetes was already in place in OpenShift with the concept of routes. And also the deployment configs, which are now in some way represented by the Deployment API in Kubernetes, didn't exist in Kubernetes at the time. There was also one more thing: it was multi-tenant from day one. Multi-tenant, yeah — there was role-based access control. It was there in OpenShift from day one? Yeah. And if we think about it — we are speaking about the history of OpenShift, but if we speak about the history of Kubernetes too, I think just those two things are among the main contributions, I would say, that Red Hat made at the time from OpenShift back to upstream Kubernetes, because we were committed not only to making a great product, but also to contributing to making the upstream projects better. And everybody now uses Deployments, but indeed, as you said, they come from the DeploymentConfigs of OpenShift. And now everybody uses pod security policies, PSPs, and I think that's also something that came from SCCs and such things. So yeah. Yeah, role-based access control is a good example, since it was in OpenShift and it wasn't in Kubernetes, and then Red Hat contributed a lot to role-based access control in Kubernetes. At the time — I can't remember the versions — there was a role-based access control in Kubernetes and a different role-based access control in OpenShift. Over a couple of releases, OpenShift switched to the upstream Kubernetes role-based access control.
So it was implemented in OpenShift, contributed to the upstream, and then OpenShift started using the upstream version. This is a good example of how OpenShift has also made Kubernetes better. But another cool thing is that OpenShift was born with developers in mind. Because, apart from the multi-tenancy — and as you can see, I just logged in with my user, alex, defined in the htpasswd file — the first thing it lets you do is create a project, which was something like an extension of the namespace. And if I create a MySQL project, for example, and I hit create, it presents me with the classical interface where you have this full list of templates you can start from. You don't have to, let's say, build your Docker container, push it somewhere, then instruct Kubernetes to pull down the container and finally run it. You had a full list of templates already at that time — and we are still on version 3.1, just saying.

And of course we also started our adventure with Jenkins and the Jenkins integration. At that time there was no fancy interface integrated in OpenShift, but you could deploy, for example, a Jenkins with a bunch of predefined defaults: you hit create and let OpenShift spawn a new container for you. So it was really straightforward, easy to consume containers — also for someone who didn't know so well what a container was. Because at that time, if we talk again about this first customer we had in Milan adopting OpenShift, Matteo, I and other colleagues — Federico, for example — had to work a lot like DevOps, you know: we had to listen to the complaints coming from the developers. So I'm trying to touch on one important point here: since day zero, we had a workflow that pushed our developers forward using this technology. Among the things that we brought with this solution, the concept of builds — letting OpenShift build your own application starting just from a Git repository — is something that at the time was absolutely amazing. Yeah, I totally agree: it didn't require anything. You didn't even need to know that you were on containers or on Kubernetes. It's what people try to do now — what is the most important tool for a developer? It's version control, it's Git. That was already the way to work with OpenShift back then.

Yeah, and as you can see, just by clicking around the web interface, you get your containers up and running, and there is also a route created for you. So starting from this concept of templates, you have all the stuff needed to get your containers up and running and also to access them from outside the cluster. And so again, just clicking the route takes me to the Jenkins interface, for example. So this kind of interface and this kind of user experience were really, really powerful for me at the time. And if we look at the left sidebar, we already had the builds concept — so you could create your build and define, for example, a Dockerfile to build in your environment. We have, of course, the concept of pods: we can explore the running pods, and starting from version 3.1 — this is why I didn't start with version 3.0, at least for showing something on the web interface — we have a very nice recap of the running pod.
The IP address — this was really fun at the time to explain to the customer and to the users of OpenShift: the overlay network, the fact that at that time we also had Docker containers exposing an IP locally on the nodes, then this overlay network on top of the OpenShift cluster, and then this ingress controller routing the traffic inside your cluster to the pod. And I think Matteo could tell a fun story, because handling this kind of architecture inside a customer's old-style architecture... Well, we usually had the web interface, the front end, then you have the back end, and then you have the DB — the classical three-tier layers — and this OpenShift could actually host a front end, but could also expose a back end or even a database. It could be really difficult also in terms of networking, you know. So what did you do, Matteo, at the time?

Matteo — he's referring to this story because he's still blaming me for what I did. So basically, we were working at that customer we were referring to, and they needed to reach a backend database from a pod running on the OpenShift platform that we had put in place for them. And the fact is that at the time there wasn't the concept of egress control, so you weren't able to decide within Kubernetes how to handle the traffic going outside Kubernetes — outside OpenShift, of course. At the time, the only thing you could do was... A pod-less service, I think? No, I'm not sure. At least I didn't follow that path. Yeah, because you could create a service that doesn't have a pod selector attached and just talks to the external service. Yeah, you're right, that was possible. But what happens is that you need to make sure that the node that runs your container has a routing table able to reach that back end — basically your pod consumes the routing table of the underlying node. So we did a trick, a hack, with traffic control and source-based routing with iptables. But I think the customer still didn't understand how it worked, because we put our hands into a part of the operating system that was quite tricky, actually. So I think he's referring to that, and he blames me for it, because it worked, but no one there was able to understand how and why at the time. It was magic, it was really a magic trick. It's even more magic nowadays, since you can use the NMState operator to do the exact same thing with a YAML file: you create the custom resource, and the operator actually modifies the routing. So you can do the same, but it's even more magic — you understand even less of what happened.

Yeah, and thinking about networking, I think it's also one of the areas where many evolutions happened over time. So, as you said, the first thing is to deploy everything on the platform, but then customers started to think, okay, how can I replicate my old architecture of front end, middleware and back end and have them all segregated? So we started to say: you can put your front end in a namespace, and you can put your back end in a separate namespace, and then you can only create connections between front end and back end. But the customers started asking: I want to control the flows in a more granular way — I only want traffic that goes to TCP, whatever ports, to be allowed, and everything else denied.
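Before moving on to network policies, here is a minimal sketch of the selector-less Service trick Matteo described: a Service with no pod selector, paired with a manually created Endpoints object pointing at the external database. The names and IP address are illustrative, not the customer's:

```yaml
# Service with no selector: the platform will not manage its endpoints
apiVersion: v1
kind: Service
metadata:
  name: legacy-db
spec:
  ports:
  - port: 5432
    targetPort: 5432
---
# Manually maintained Endpoints pointing at the external database host
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-db        # must match the Service name
subsets:
- addresses:
  - ip: 192.0.2.10       # illustrative external DB address
  ports:
  - port: 5432
```

Pods could then reach the database through the legacy-db service name like any other service — but, as Matteo points out, the underlying node's routing still had to know how to reach that address, which is where the iptables hack came in.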
And things like network policies started to happen, in both upstream Kubernetes and OpenShift networking. So I think this conversation was a bit tricky in the beginning, because everybody wanted to replicate what they knew and they didn't want to trust the SDN — they wanted to control the SDN. So I think it was a funny way of seeing how even network admins started to rely more on software-defined networking, but at the time it was not an easy conversation. Also, the SDN adapted to that model: if we think about Multus today, it allows working with multiple interfaces. That was the first thing, as you mentioned — everyone wanted to work in that model — so at a certain point the SDN also adapted to it. With Multus, you can have multiple interfaces mapped to your pod. So it went both ways: people started to understand the SDN, but the SDN also adapted to certain use cases.

Exactly. We also added, for example, the concept of the egress router. At the time we had no other way than placing, for example, static routes on the various hosts, and we ended up creating another Ansible playbook that we would run whenever we deployed a new node in the cluster, just to apply all the recurring updates. At the time we had also started working on Satellite 6, and we tried to place some of these rules inside Satellite through Puppet, for example. So there was a mix of tools, because, again, the only way was to edit and work with the underlying operating system — and for managing the underlying operating system you had a lot of tools, and you could edit it without any limitation. This was of course pretty nice and advanced for advanced users, but for general consumption, as we said previously, the CoreOS introduction and the operators are an easier way to do it.

Yeah. I don't want to jump ahead, but Jafar mentioned the legacy infrastructure firewalls, IP-based firewalling between services — now we have moved that into the Kubernetes environment. The next phase is what telcos are asking for: they need the same features that they run on bare metal in telco environments — SR-IOV interfaces, CPU and NUMA pinning, these things. And we are, again, bringing that into Kubernetes. It is just the second stage, and maybe there will be a third and fourth and fifth stage of matching in Kubernetes what customers used to have in their legacy environments.

Yeah, and this reminds me of something else regarding traffic: I remember that the router — the HAProxy router that we included, which didn't exist in Kubernetes — was already a major feature, because it basically just worked. You had your wildcards. Okay, not everybody was happy with using a wildcard, but as soon as they understood that they didn't have to create a hundred DNS entries for every new app or new service they deployed — it just works — they said, okay, I see the value. But I know someone who was using Satellite — you know, Matteo — Satellite to create the entries in the DNS server... You're right, you're right, you're right. And I want to tell Jafar that the wildcard from OpenShift version 2 — the concept of the wildcard, and the way we use HAProxy — is something that we learned from that technology. Yeah, okay.
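As an illustration of the route concept being discussed here (and of the TLS termination that comes up next), a minimal sketch of what an OpenShift 3 route looked like — a hostname under the router's wildcard DNS domain, with TLS terminated at the HAProxy router. The hostname and service name are illustrative, and the exact apiVersion varied with the release:

```yaml
apiVersion: v1
kind: Route
metadata:
  name: jenkins
spec:
  host: jenkins.apps.example.com   # a name under the router's wildcard DNS entry
  to:
    kind: Service
    name: jenkins
  tls:
    termination: edge              # TLS terminated at the HAProxy router
    # certificate / key / caCertificate could be embedded here, or omitted
    # to fall back to the router's default (wildcard) certificate
```

One wildcard DNS record pointing at the router was enough for every new app, which is exactly the "I don't have to create a hundred entries" point above.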
Yeah, but what was funny is: this was already a big step, but when you start to speak with some customers — like in the banking industry, where they have strict regulations about traffic and so on — they say, okay, it's nice to have your router, but I don't want to use one single router for all my applications, because, again, I have applications that need to be in network A, and I have applications that need to be segregated on different nodes in network B. How do you handle that? And if you remember, router sharding is a feature that came a bit later on, and we said: oh, okay, so there's a use case that maybe we can address, and we started deploying dedicated routers for dedicated networks. So yeah, I think that was also one of the good things about being enterprise ready: you take those requirements from customers, you make your product evolve, and then it also affects the upstream way of thinking about ingress. Yeah, that is true, that is true. And as I just showed, this feature that you are mentioning, together with the ability to build an application starting from a Git repository, is what enables an enterprise without deep knowledge of an emerging technology to use it from day zero.

Yeah, exactly. And one good thing to add on the ingress side: already in OpenShift 3 you had TLS termination support. Because, as you saw in the API spec, there were no secrets, no config maps, so it was really hard to add a certificate to your workload — you actually had to build it into the container, which is not nice. But with OpenShift you could do the TLS termination on the ingress, and you had a single point to handle it, which was really cool. And this was the time when Let's Encrypt wasn't that popular yet, so people often just didn't run HTTPS. Yeah, and Tero, a very interesting thing: do you remember how we used to handle the rotation of certificates? Yeah, an Ansible playbook. An Ansible playbook, exactly. And one of those things: you go to your cluster and nothing works — oh damn, it's that time of the year when my certificates have expired. So we then started doing things like an Ansible playbook that tells you when your certificates are going to expire. But now we have the operator, where you just change your certificate and the operator redeploys it instantly and reconfigures the router and maybe even the other routes. So I remember it was pretty painful, because you had to pay very close attention to it and you had to create your own scripts to handle it. And we listened to our customers' pain, and now we have the operator that does that and can handle the rotation. So I believe doing TLS and certificates was really a good feature at the time, but I think now, with operators, I prefer the way folks do it.

Alessandro, what were you showing? What was that? Yeah, I'm just having fun with the interface. Actually, as I said, at that time we had all these details in this web interface — otherwise you had to grab them from the terminal in some other way. And I just spotted the MySQL container, for example. At that time we also had the persistent volumes and the persistent volume claims in the concepts of OpenShift and the underlying Kubernetes.
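Since persistent volume claims just came up, a minimal sketch of what a claim looked like back then (and still looks like today); the name and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

At that time an administrator had to pre-create matching PersistentVolume objects by hand (NFS was the common backend); dynamic provisioning through storage classes only arrived in later releases.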
And as I said, I chose to start with version 3.1 because it also had the Heapster service running, which shows you the real-time consumption of CPU and RAM with very nice, shiny graphs on the web interface; as well as access to the logs — the live logs from the containers — and finally a very nice terminal where you can actually look into the containers and run commands. It basically does what oc rsh does, attaching a shell to the same namespace as the underlying container. So it was fun, but also complex, as I said, showing this stuff to the customer. But at least the web interface at that time was a real improvement and made things easier to consume.

Yeah, sorry — speaking of the metrics, I think even the stack changed, right? Because at the time we didn't hear a lot about things like Prometheus, and we decided to go with Hawkular and Heapster. But that stack took more memory than the actual workload. Yeah, and it was a bit tricky if you wanted to get custom metrics or build custom graphs and these things. So yeah, it was there for a long time, and at some point we decided to switch to a Prometheus and Grafana based metrics and monitoring stack, which I believe was also a good shift. Because if you think about the way Red Hat does things: we had OpenShift 2, and there was a breakthrough technology that almost nobody was using yet, which was Kubernetes, and we said, oh, I think it's promising, let's switch — even if we have to rewrite everything — let's use that, and let's use Docker. And then we had our own metrics components, we were contributing to Hawkular and such things, but at some point we saw that there was promise in Prometheus and we decided to embrace it and contribute to it — probably also because of the acquisition of CoreOS, who already had great experience with Prometheus and were a major contributor to the project. But what I like about the OpenShift history is that we don't say "this is how we built it and we're going to stick with it." If there's something better — because customers say it's better or the community says it's better — OpenShift evolves to change its stack and come up with something more adapted to the use cases. Yeah.

And look, we can use that — it's a very nice sentence to close today's episode. We're getting to the end. But also, Alessandro, you are showing the topology, the first approach to the topology, right? Yeah, this seemed useless at the time, but actually, if you click on the various items, it shows the various details. So it was also useful for explaining to some inexperienced developers what the basic concepts of OpenShift and Kubernetes were at the time. But finally, before closing, I want to also show you the Web Archive page for the OpenShift Origin open source project at the time, and the fact that we also distributed an all-in-one VM, much like the Minikube or Minishift VM, or again CRC for OpenShift 4. Yeah, so this was what we would provide before OKD — the ancestor of OKD was Origin, and it was the upstream option. So thanks for showing this too. But look at the left side. And Nathalie, yeah, on the same line, can you go back to that VM stuff? There's something even more interesting, like "run Origin in a container". Ah, yeah, that was like a big breakthrough. And I like the oc cluster up command. Yeah. I'd like to be able to run OpenShift 4 in one container, but I don't know — it must be a big container.
Really big. Okay, folks, we have to close. Today, with Alessandro and Matteo, Jafar and Tero, we have seen a history of OpenShift: from version 2 with cartridges and gears, to version 3 with Kubernetes and Ansible for the installation. Now we are on OpenShift 4 with RHEL CoreOS and operators. And as Jafar was saying, we keep improving the software based on community input and customer input. So, what's next? We don't know, but we would like to hear your feedback.

Let's close this session. Alessandro, you can stop sharing your screen. We have some little reminders to do. Today on OpenShift TV you have The Level Up Hour — the session is about certified container pros. Then Ask an Admin, another session, about the vSphere problem detector. And we also have our current schedule for OpenShift TV: we come back in two weeks, so the next one is June 2nd, 10 a.m. Together with Jafar, Tero and Siamak, we will talk about Tekton in action — Tekton live demos about pipelines. That's our next show. So, looking forward to our next show, I would really like to thank Alessandro and Matteo for joining us today. It was their idea, and it was great — I already tweeted the screenshot, it was very cool. So, thank you folks for having joined us, and talk to you soon on OpenShift TV. Ciao. Thank you. Ciao. Thank you. Ciao.