the main stage, Clayton Coleman does a keynote for KubeCon. We have tons of breakout sessions and yes, Red Hat is hiring too. So if you love contributing to code, geeking out with people in the industry, Red Hat's a great place to be. So I definitely will bring some Star Wars things here. So thank you again for joining. I'm gonna get things rolling. As I said, my name's Stu and thank you again. So let me stop that here. For our session, great. So Chris, I'm gonna load up the video here and we are ready to dive in. All right, video number one. Three, two. Okay, so thanks everyone for making the time to join this presentation today. During this session, I'm going to introduce you to one of the applications we have implemented for one of our customers. The application makes use of text analytics and artificial intelligence to reduce the risk of GDPR breaches. But before diving into that, let me introduce myself. My name is Filippo Sassi. I am a senior software engineer. I've been working in the industry for quite a few years now, in companies like IBM, Concentrix and obviously Version 1, which I joined in 2014. In my career, I covered a number of different roles: .NET web developer, scrum master, tech lead. Since 2019 I've been with the Version 1 Innovation Labs, where I am now one of the leaders. Version 1 is an IT consultancy firm driving customer success through over 20 years of market leadership and innovation in IT services. Version 1 believes in modernizing, innovating and accelerating our customers' business transformation. Our greatest strength is in balancing our efforts to keep growing on all three sides of our strategic triangle. The first side is customer success: making a real difference through long-term, outcome-focused relationships. Then empowering people: selecting, empowering and trusting people who are wired to deliver customer success. And the third side is a strong organization.
So a high-performing, financially strong organization of the highest integrity. We believe that this is what makes Version 1 different and, more importantly, our customers agree. On this slide, some stats about Version 1. The interesting thing, I suppose, is the quick growth rate of some of the figures. And I'm not gonna lie to you: to create this deck I used some of the slides from a previous presentation we ran in October 2020. This slide at the time showed just over 1,300 employees; in just two quarters we're already reaching 1.5K. I think that more than any other number this demonstrates how Version 1 is growing while committing to our core values. DAFM is the Irish government Department of Agriculture, Food and the Marine. DAFM's vision is to be an innovative and sustainable agri-food sector operating to the highest standards. DAFM is one of the oldest Version 1 customers, and Version 1 provides many teams dealing with the different DAFM schemes, applications and more. One of these teams is the BPS team; BPS stands for Basic Payment Scheme. BPS is the largest payment scheme run by the department, and specifically the scheme is responsible for issuing grant funding to the value of €1.2 billion to 120,000 farmers, in line with European Union regulation. The team handles applications and payments of farmer grants through the BPS application, which can be accessed through modern digital channels, which makes the customer journey easier with less administrative overhead. In the last couple of years, DAFM has invested heavily in the OpenShift Container Platform. This choice was primarily justified by one of the key strategic aims for the department: to provide a capability for fast, flexible application deployment and at the same time to be responsive to changing and emerging needs over time. All of this while focusing on small products that can be designed quickly, iterated and released often.
In particular, the OpenShift Container Platform was a suitable choice for the project I'm shortly going to introduce to you, because there was real concern about using public cloud services to scan and analyze documents which might contain personally sensitive information. The solution reaffirmed the department's belief that the investment in the OpenShift platform would provide long-term strategic gains. In line with the public service ICT strategy, DAFM is focused on digital transformation, including both front-end and back-office transformation, to deliver services for citizens, businesses and the government. From May 2018, the General Data Protection Regulation (GDPR) came into effect, requiring businesses to protect the personal data and privacy of European citizens for any transaction that occurs within the European member states. In line with this regulation, one of DAFM's priorities for transformation was to protect the personal data of not only DAFM's customers but also the customers of the public service as a whole. And in particular, we consider the following use case. To receive grant payments, the farmers must upload various documentation through the department website. These documents often contain personally sensitive information, which might not be indicated by the user. There is a checkbox on the form that indicates that the document contains PSI. If ticked, only certain levels of staff can access the document. However, very often the end users don't set the option correctly, and this leads to a situation whereby department staff read documentation to which they should not have access. Another challenge, of course, is that agents acting on behalf of their users sometimes upload their own documentation. This leads to approximately 60 major GDPR breaches every year.
So whatever the source of the breach, both scenarios could lead to privacy violations and GDPR breaches due to staff accessing the document without sufficient clearance. These breaches require significant effort to address and they are obviously taken very seriously by DAFM. The department wanted to understand how technology could be applied to assist, and to answer this question the DAFM Version 1 on-site team contacted the Version 1 Innovation Labs. The Labs are a value-added service that Version 1 provides to its customers to explore disruptive technologies. A couple of points to note here. First, it's for Version 1 customers: whatever we do, we do it for clients who are already within the Version 1 customer base. And for them, we are a value-added service, so we are free of charge. That doesn't mean that we are free of cost. Indeed, we expect to use their data. We expect to use their resources — this will particularly have an impact on cost if we decide to go cloud. We expect to interview their employees to better elicit the requirements. We expect them to test the POV. And finally, we expect at least one person from the customer side to play the role of the product owner and to actively collaborate with us on an almost day-to-day basis to implement a proof of value. A proof of value is the same thing as a proof of concept — basically a fully working prototype. We just applied a semantic switch to highlight that what we do actually brings value to the customer's business. So far, we have implemented at least one POV in all the technological areas shown on the slide, the only exception being IoT. Some of those POVs were quite cool. I remember one of the first ones I worked on when I joined the Labs was a proof of value for a virtual reality application using an Oculus headset.
For the same customer, we immediately implemented another POV, this time using augmented reality on an Android tablet, just to show them the different experiences. Both the POVs were very well received by the customers, but we understood that to push this forward, to move this into production and to provide the client with the wow factor they were looking for, we simply didn't have the right capabilities within the company. That's because these technologies are quite niche and they require very advanced graphical skills, especially 3D graphical skills, which are essentially those required in the gaming industry. So from 2020 we decided instead to focus on those technological domains where, A, we have plenty of expertise within the company and, B, we think that our customers would benefit the most, and those domains are machine learning, artificial intelligence and robotic process automation. The innovation engagement process with DAFM was exactly the same standard approach that any Version 1 customer faces when engaging with the Labs. The process is the following. It always starts from ideation: we are constantly talking with our customers to understand if they are facing business problems which are not solvable with standard day-to-day technology. When we identify one of those problems, we start researching. We look at academic and industrial sources, we run brainstorming and design thinking sessions, until we find the technology that could help solve the problem at hand. And when we identify such a technology, we start experimenting with it. When we're happy enough, when we think we have found a potential solution, we formalize it into an innovation canvas. The canvas acts like a contract between us and the customer.
And the document contains information such as the problem we are trying to solve, the proposed solution, the people who will make up the development team, a timeline, and the metrics that will be used at the end of the project to determine its success. When all of this is agreed and the canvas is signed, we start with the actual implementation. We follow an agile, iterative and incremental methodology called Scrum. We take up to six bi-weekly sprints to implement the POV. We won't do all six just for the sake of it: if at the end of a sprint, during the sprint review, the customer agrees that we have solved the problem under investigation, we consider that we have proven the value of the technology, and we get in touch with the rest of the Version 1 delivery teams to define a roadmap for moving the POV live. So this is exactly the same process DAFM followed when engaging with us on this particular use case, and the outcome of the whole process is SmartText. Using best-of-breed open source technology, SmartText provides text analytics capabilities to extract meaningful insights from unstructured data — documents, images, PDFs, et cetera. These insights are the features that are later used for artificial intelligence modeling to ultimately classify whether or not the document contains personally sensitive information. Obviously this is just one of the many possible applications. SmartText could be used in many other scenarios, and we will shortly see some examples. But for now, let me just dive a little bit more into the components of the solution. The first one is the OCR. OCR stands for Optical Character Recognition, and this component extracts the textual content from the unstructured documents. This textual content is then utilized to derive useful metadata attributes by the other SmartText components, which are sentiment analysis, topic modeling, semantic search, regular expression extraction, and named entity recognition.
Each of these components is exposed as a separate API, ensuring loose coupling and easy recombination. The APIs use cutting-edge open source libraries with appropriate customization for these and other use cases. As an example of customization, we are currently retraining the open source machine learning models with specific sets of documents to make the models domain specific. The SmartText solution in DAFM is deployed on-prem, but all the components are deployed as containers to ensure portability of deployment across clouds too. From a deployment perspective, we said already that DAFM has made a significant investment in an on-prem OpenShift Container Platform. As a consequence, we wanted SmartText to utilize the power of the platform to demonstrate its value. And that turned out to be a great choice, as the OpenShift platform helped us solve some of the issues that we could have faced otherwise. For instance, the SmartText solution was designed to take advantage of the Python machine learning libraries, but this architecture was not supported in the DAFM infrastructure. The OpenShift platform allowed for secure deployment and builds of Red Hat-published containers, which would have been impossible otherwise given the available budget and time. Likewise, building the test and production environments for the project would have normally been another large cost, but this was easily overcome with OpenShift and image streams. The solution is currently live, actively mitigating GDPR risk for farmers and agents by flagging potential errors during document upload. This has enabled the department to switch from a reactive to a proactive approach: identifying potential data breaches, isolating them and preventing them from occurring. This obviously reduces the administrative overhead and the lost business hours of employees having to resolve any potential breaches, and obviously this also reduces reputational damage to DAFM.
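The talk doesn't show any code, but the regular expression extraction component described above can be sketched roughly like this. Everything here is invented for illustration — the pattern names, the patterns themselves and the function names are assumptions, not DAFM's actual rules:

```python
import re

# Hypothetical illustration of a "regular expression extraction" component:
# scan OCR-extracted text for patterns that suggest personally sensitive
# information (PSI). The patterns are simplified examples.
PSI_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?\d[\d\s-]{7,}\d)\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def extract_psi(text: str) -> dict:
    """Return every match per category found in the document text."""
    return {name: pat.findall(text) for name, pat in PSI_PATTERNS.items()}

def contains_psi(text: str) -> bool:
    """Flag the document if any category matched at all."""
    return any(extract_psi(text).values())
```

In the real solution these extracted matches would be just one of several feature sets — alongside NER, topic modeling and sentiment — fed into the AI model that classifies the document.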
The project demonstrated that the department led the way in using cutting-edge open source technology such as OpenShift and natural language processing libraries. As far as the Labs are concerned, we were able to demonstrate our credibility in the areas of text analytics, machine learning and artificial intelligence. The SmartText solution is now a key piece of the Smart Action suite that we are developing. We will shortly talk about the Smart Action suite; here I just would like to say that since we have implemented the solution, we have been having many conversations with our customers, and SmartText has generated real interest. We immediately understood that the ability to extract valuable insights and metadata from unstructured documents — be they forms, handwritten letters, images of documents, whatever — would be hugely valuable beyond the initial use case. For instance, for one of our customers in the UK we have recently been implementing a document summarization tool, and the goal of the tool is to provide key pieces of information to the end users from a set of documents without the users having to read any of those documents. At the core of this solution there is SmartText. We have also recently demonstrated it to many other clients, both in Ireland and in the UK. All in all, we think that this project is an excellent demonstration of how open source technology can be utilized and augmented to develop solutions which are comparable to those of the major cloud vendors. Indeed, we commissioned a report to compare the SmartText solution with similar technologies from Azure and AWS. And this report showed that the performance of SmartText is very much comparable to that of Microsoft Computer Vision and Cognitive Services on one side, and AWS Textract and Comprehend on the other.
Within DAFM, the SmartText solution was the first application deployed on the OpenShift Container Platform, and as such it ironed out all the usual technical challenges of deploying onto a new platform. I was not directly involved in the original development, so I won't spend too much time here on the technical challenges and the subsequent learnings. However, talking with one of the main developers, I found it particularly interesting that one of the weakest points of the original implementation was the central role of the orchestrator component in the original architecture. Because of the orchestrator, that architecture was highly coupled, working through a set of well-defined steps to be executed together. Being so, the orchestrator needed to know everything about everything else, making it a single point of failure: if the orchestrator goes down, everything goes down too. So we looked at other architectural approaches. In the end we went for a reactive architecture, which makes the single components responsive to relevant changes in the data. The benefits of this architecture are many, most notably responsiveness, resilience and elasticity. I previously mentioned the Smart Action suite, so before concluding this presentation, please allow me to quickly introduce it to you. Before, we looked at the standard innovation journey our customers face when engaging with the Innovation Labs. The journey goes from ideation to the successful implementation of a POV. However, over time we noticed that many of our customers were facing similar problems. So instead of reinventing the wheel all the time, we have decided to start productizing our existing POVs and build what we call the Smart Action suite. This is a suite of components which can be used either in isolation or, like Lego bricks, combined together in different numbers in order to build many solutions which can apply to different use cases and scenarios.
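The shift from a central orchestrator to a reactive design can be sketched as a tiny publish/subscribe example. This is a toy illustration of the idea described above, not the actual SmartText code — the topic names and handlers are invented:

```python
from collections import defaultdict

# Toy event bus: instead of an orchestrator that knows every step (a single
# point of failure), each component subscribes only to the events it cares
# about and reacts when relevant data changes.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
results = []

# OCR reacts to uploads, NER reacts to extracted text; neither knows the other.
bus.subscribe("document.uploaded",
              lambda doc: bus.publish("text.extracted", f"text-of-{doc}"))
bus.subscribe("text.extracted",
              lambda text: results.append(("ner", text)))

bus.publish("document.uploaded", "form.pdf")
# results now holds [("ner", "text-of-form.pdf")]
```

If one subscriber goes down, the others keep reacting to their own events — which is where the resilience and elasticity mentioned in the talk come from.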
Some of the components, like SmartText and Smart Data Capture, have already been developed. The others will be implemented in the near future. The overall idea here is to provide our clients with a set of hyper-automation apps which empower their employees, allowing them to take better and more efficient decisions in a shorter time. In a nutshell, the key components are shown on the slide. We already talked about SmartText; I will just introduce another couple of them. One is Smart FAQ, which is our smart bot providing organizations with an always-on, 24/7 service answering FAQs and customer queries. Smart Data Capture, an app to support enterprise data capture requirements. Smart Search, a solution providing intelligent document search, where a user can search with queries in conversational language and the right references from the documents will be returned. Smart Automation, best-of-breed automation tools to develop hyper-automation, so a combination of RPA and AI. And finally Smart Process Advisor, which is designed to guide staff through organizational processes, advising them at each step of the way. And that was all I wanted to share with you today. I hope you found it interesting. Thank you very much for your attention. If you have any questions, you can enter them in the chat below. Awesome, Filippo, that was excellent. I just posted one more question: I heard you mention a couple of times that you use RPA, so I was wondering if you can share — if you're allowed to share — whose RPA you use; that's a hot space, interesting to watch. And yeah, for everybody else that has joined, we are just about on time here. I'm gonna be kicking off the next session on track two. So if you go back to the sessions page, the next one is OKD and what they're doing in Azure. So again, thank you so much for joining here. And yeah, we'll be on to the next session. Thank you again, Filippo. Cool, awesome. So really excited for this session.
We're gonna be talking about OKD and the move to OpenShift on Azure. So here we are at the top of the hour. So without further ado, I will get us rolling. My name is Joseph Meyer. I'm an electronics engineer and cloud architect at the company Rohde & Schwarz, a German company located in Munich. I've been an OKD user since 2018, together with my team, and this is the story of how we came from OKD to OpenShift in three years. We had started a digital transformation program in spring 2018, and the goal was to build the skills in my company to build up digital business. And one of the first goals was to create an MVP of a cloud product for a trade show that happened in autumn 2018 — that's only five months after the start of the program. And this was very tough for us, because we had experience with Docker but not with Kubernetes, and it was clear to us that we wanted to do it with Kubernetes. And the first task for this MVP was to provide Kubernetes clusters — two of them. One on-premises for our developers, because we have the policy in my company that no source code is ever allowed to be in the public cloud. So we had to create a cluster on-premises for our developers, so they can access the source code and do builds for their artifacts. And the second cluster should be in the public cloud, so our customers can access it, because we don't serve our software from our on-premises cluster to the internet; we have separate clusters for that. That was the goal and the first task, and the race started. We had a few requirements for that — at least there were three very important ones. The first one was: don't pay any license fees for the Kubernetes distribution, because we were starting with our digital business and we didn't want to put the burden of license fees on it. And the motto was: let the business grow first. That was the most important requirement in the beginning for us. The second one was: the system must be stable.
That's obvious, but we learned that it's not so easy to achieve. The distribution should take care of everything — you don't normally want to mess around with networking, with storage, and a few more things. We learned the hard way that it's not easy to maintain these things if you have to. And looking back, it's one of the biggest and most important requirements you should take into account when you choose a Kubernetes distribution. The third requirement was that we would like to have the same stack on-premises and in the public cloud, and the same user experience, so that our developers don't have to switch around in their minds with the usage of the tooling, independent of whether they use the on-premises cluster or the public cloud cluster. We wanted to have a look and feel that's the same everywhere. Then we went into an evaluation phase. Five months is very tough, so we rushed through that very fast. First, we tried the obvious: we used vanilla Kubernetes to create our first clusters and had to take care of everything on our own — storage, networking — and usability was disastrous in the beginning. And so we gave up very soon. That was not the way we wanted to work, so we kept searching for something better. We tried out several community-driven Kubernetes distributions. I don't want to name them, but we had mixed experiences. We had problems with stability — I remember one tool that had an automatic installer for clusters, and every second installation failed because of bugs. User experience was not so good on the others. So we had no good feeling that we were on the right track. That was a very tough time for us. During the evaluation phase, it was pure coincidence that we attended a sales presentation for OpenShift, because OpenShift violated our most important requirement that we didn't want to spend money on our Kubernetes clusters.
You remember, we did not want to put the burden of license fees on our digital business, but what we heard there sounded very good. The salesman did a very good job in this presentation, and he told us that there is a free, community-driven edition of OpenShift called OKD — something we had never come across during our research. And yeah, it was awesome, because on paper it was free, it was a turnkey solution, very similar or almost the same as OpenShift regarding the features. It took care of storage and networking, had a nice UI at that time, and great dev tools that took care of our builds. Everything was integrated very well in the web UI. It was great for our developers; we also got very good feedback from them. And the third point was that with OKD 3 we could install it everywhere: on-premises on our vSphere clusters and in the public cloud in Azure. It was very easy to get clusters running. We had lots of configuration options, and Ansible came out of the box with OKD to do the installation. It was great. We tried it out, we used it for our MVP, and in the end we successfully presented our MVP at the trade show running on OKD. Management was very happy with us, and it was a cool time — very stressful, but we learned lots of new things during this phase. A year later, in 2019, we delivered even more cloud products. Yeah, we were the heroes, because we enabled all of them — it was a great distribution. In 2019, everything was cool. With OKD, we were very happy; we didn't regret that we chose it. Also in 2019, we improved and automated our cloud ecosystem, because for the MVP we had taken lots of shortcuts and workarounds, because we were not so experienced with Kubernetes. And the next goal was to automate everything. So we found lots of tools that helped us a lot in this phase. Ansible — we had experience with that before. I found Terraform.
That's an absolutely great tool for creating infrastructure with different providers — it's available for vSphere, Azure, AWS, for everything you can imagine. So we used Terraform to create the infrastructure and Ansible to install and configure OKD. Then we created the CI/CD pipelines. I liked a lot that OpenShift has great support for Jenkins; everything is tightly integrated in the web UI. Nice. And also we created our first service, a self-service portal. That's a tool running on our cluster that provides our developers with simple wizards in a web user interface, where you fill out a few fields and get tasks done on the cluster — like setting up a CI/CD environment with Jenkins, with the proper secrets, everything completely automatically set up. People liked that, and yes, it was very cool. We learned a lot at this time. In the following months, we learned that the last release of OKD 3 had occurred in autumn 2018. No new version came out. At that time — in the beginning of 2019, I think — OpenShift 4 was released, but no OKD 4 was available. And over the entire year, no OKD 4 was in sight. And this was a problem for us, because more and more tools did not work on OKD 3, because its Kubernetes version — I think it was 1.11 — got too old for lots of tools. And we had to choose wisely which tools we used. This was manageable, but we were waiting for something new, for OKD 4, and it did not come. So we started to learn what was blocking the release of OKD 4. I myself tried the OKD 4 alpha in November 2019. I remember that because a colleague of mine — he was the master of our DNS server — spent a Saturday evening, or Saturday night, together with me in a Skype session to set up everything we needed for OKD 4. He helped me debug the first steps. And in the end, it worked. I saw a web UI. I was so happy. I remember that this web UI was so much more advanced than what we already had with OKD 3.
It was so much better. But it was not easy to get there. I had to do lots of manual steps, hacking around in the OS, in the Linux console, to find out why the installation failed — but it was an alpha, so that was okay. And yes, it worked on vSphere very well; if it ran, it ran pretty well. And I dove deeper into development. I found the OpenShift dev channel on Slack, and I also found out that there is an OKD working group. At first, I thought this was a closed club of Red Hat employees, but I learned very fast that everyone who wants to help can attend this working group. So I did. And the goal was to help, to do the best I could to bring OKD 4 live. And yeah, that's what I did in 2020. I started helping with OKD 4. I created a few fixes for the installer for Azure, for example, because Azure at this time was not supported by OKD at all — there were a few problems with Fedora CoreOS, which is used in OKD, in comparison to Red Hat CoreOS, which is used in OpenShift. No big ones, but this was my first attempt to create pull requests to the OKD 4 community GitHub repos. And my first PR was so big — because I also patched Terraform code — it was far too big. And Vadim Rutkovsky, one of the main supporters of OKD, rejected it. He used some nicer words, I don't remember exactly — I was sad that it was rejected. But yeah, he told me it was too big. I understood, and I created a much smaller PR, and this one was then accepted. And Azure was available for OKD. These were my first steps. I did lots of testing; at that time I built up a homelab at home with a Ryzen PC and 16 cores. I never used them to that level, but I wanted to be sure that I was not blocked by anything. I did lots of testing. I also organized a vSphere license. There is some trial — no, it's not a trial, it's called VMware User Group, I don't remember exactly the product name. It's available for 150 euros. It's very affordable.
I did all of that because I wanted to get OKD 4 live. And I reported lots of bugs and fixed several of them — not all bugs are so complicated to solve, I found out. And yeah, this was a time where our team also learned much about the internals of OKD 4, so that we can use its mechanics to solve almost any task we want to achieve. It's a great thing. I also did something that may sound a little bit crazy: I created a T-shirt for the working group video meetings, which I always attended regularly. And the idea was to increase the release pressure, if everyone always sees this OKD 4 shirt on me — it was more of a funny idea. And I promised not to change the shirt before the release had been made, but it took a few months. Yeah, I have to admit that I changed the shirt in between and never told that to anybody. Finally, OKD 4 was released in July 2020. That was very great, because we had already prepared OKD 4 clusters on-premises; we had installed everything and were only waiting for the GA signal. A few months before, I had discovered Argo CD — that's a tool for GitOps. And I found out that with OKD 4 it's very easy to configure things with GitOps, because there are operators everywhere, and you can use custom resources — that's a configuration method for operators that works well with GitOps. This is also great. So you have everything in Git: no scripts that run once, no developers changing configuration in the cluster where nobody knows afterwards who changed what. Git is the single source of truth. That's nice with Argo, and especially in combination with OKD 4. We changed our service portal to use GitOps for all of that. And also we migrated all on-premises apps from OKD 3 to OKD 4. We had to change the routes and a few other things. Also, the DNS name for OKD 4 contains, I think, a part that is called 'apps' in the URL. That's a little bit annoying, but yeah, we had to change that for all our apps. And in the end it worked.
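The GitOps idea described here — Git holds the desired state, and a reconciler such as Argo CD continuously converges the cluster toward it — can be illustrated with a toy reconciler. The state shape and names below are invented for the example; real tools diff full Kubernetes resources, not simple dicts:

```python
# Toy GitOps reconciler: compute the actions needed so that what is running
# ("actual") matches what is committed to Git ("desired").
def reconcile(desired: dict, actual: dict) -> list:
    actions = []
    # Anything in Git that is missing or different in the cluster gets applied.
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions.append(("apply", name, spec))
    # Anything running that is no longer in Git gets deleted (pruned).
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

# Desired state as committed to Git vs. what is running in the cluster.
desired = {"portal": {"replicas": 3}, "jenkins": {"replicas": 1}}
actual = {"portal": {"replicas": 1}, "old-app": {"replicas": 2}}
plan = reconcile(desired, actual)
# plan: scale up "portal", create "jenkins", prune "old-app"
```

Because the loop only ever reads the Git-committed state, a change made directly in the cluster is simply reverted on the next pass — which is exactly why Git stays the single source of truth.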
Since July 2020, we have upgraded OKD 4 on-premises very often. It almost always worked great. Between OpenShift 4.6 and 4.7 there were a few hiccups, but we could always fix them or find workarounds together with the community. Yes, since 2018 we have attracted many of our developers to start their Kubernetes journey and create digital business on our Kubernetes platform. That's great. I counted last week that we had onboarded more than 50 projects — not only playgrounds, but real projects — on our OKD clusters, and it's available for more than 2,000 developers in my company. It's running very stably. But we are moving more and more business-critical applications to our OKD clusters. We have a big manufacturing site — a few manufacturing sites, to be more precise — that also want to use Kubernetes and the cloud services. And that's why we decided to invest at this time in commercial support, because we have digital business running, we have a lot of interest in my company, we have business-critical applications, and we always said that this would be the time to invest in commercial support. And we did that a few weeks ago. We started creating an ARO cluster — that's the abbreviation for Azure Red Hat OpenShift — for our public cloud cluster, the customer-facing one. And on-premises, we invested in OpenShift and — what was the name of that — OKE, OpenShift Kubernetes Engine. It's not OKD, it's OKE. Don't ask me why there is something so similar. OKE is a version of OpenShift where you have support, but not for everything. And we are not using all the features of OpenShift at the moment for all our environments. Because of that, we chose OKE for some clusters, and OpenShift is then the full-fledged version for the services where we need full support. And yeah, for the moment, we are very happy with this decision. And to conclude what I told you in this presentation: I am absolutely thankful to have had OKD during our journey.
It helped us tremendously to launch our digital business. In our opinion, OKD is a great door opener for OpenShift in enterprises, because you can have OpenShift with zero risk and start your digital business. Yeah, you have the same user experience. A few things are different regarding upgrades, because in OKD you only have a rolling distribution. This means that if something is fixed, it won't get backported. It's always going forward. In OpenShift, you have several stable or fast channels. And yeah, but if you don't need that, and in the beginning, to be honest, you don't need that, then it's a fair deal to not pay any fees, and you have a full-fledged, great Kubernetes distribution. And I can congratulate Red Hat for the decision to have a community version of OpenShift in the program, because, as I said, I think it's a big door opener for their main product, OpenShift. I have to say thank you to everybody from the OKD community and Red Hat who helped us in the last years to come to this point. And special thanks go to Vadim Rutkovsky, Christian Glombek and Diane Mueller. They were always very helpful. And yeah, Vadim especially, Vadim seems to be online 24/7 on Slack, and without this guy... Okay, thank you. Yosuf, that was excellent. Thank you so much. Yeah, apologies from all my peers in marketing when we get to OKD versus OKE versus OCP versus Azure OpenShift, it definitely can be confusing. As you said, at the end of the day, when it comes to OpenShift, we do try to make the bits all compatible and the same, so that OpenShift is OpenShift is OpenShift. But I definitely learned a lot from your session and yeah, thank you so much for all your contributions, and definitely glad that you got to change shirts eventually. Thank you. Yes, does somebody have any questions about that? Yeah, we've got a couple minutes left. If anybody has, please post in the chat, post in the Q&A. Yeah, we do see the way these go.
The video finishes and I think there are people, I see them starting to transfer off to the next one, which is going through a bunch of the tools like Podman, Buildah, Quay, Skopeo, Clair, and Source-to-Image, so. Absolutely. But yeah, that was excellent. As always, these will be available on demand. Share them with your team. Definitely lots of good learnings there. All right, I'm gonna head on over to the next room. Thanks so much for joining. Thank you. Bye. Excellent. Here we are in the third session. Thank you, Chris, for hopping along with us here in Hopin. Happy Star Wars Day for those that I haven't already wished it to. We will get started here in, yeah, we're actually gonna get started here in just a second. So please, chat and Q&A are all running, and thank you again. This is Mihai Criveti. I'm a CTO and STSM for IBM, working at the Cloud Solutions Center and Red Hat Synergy Office. I have here with me today Elif Samedin. She's a Red Hat certified architect in infrastructure working at Takeoff Labs. And we're going to talk about Podman and the container ecosystem available with Red Hat Enterprise Linux. We're going to introduce and share some of the lessons we've learned in adopting this tool set, but also some of the cool tools we've discovered, and ways of working in setting up CI/CD pipelines using Podman, Buildah, Quay, Skopeo, Clair and a couple of other interesting tools from Red Hat. Just to give you an overview of the agenda, we're going to cover a couple of team-driven use cases, what an end-to-end container build looks like, and share some of the experiences we've had in adopting these tools, but also working with Podman to replace other container tools as part of a pipeline, and sharing some of the resources and links that you can follow up in adopting similar tools.
Just to give you an overview, the use case we're aiming for is how to set up Podman for an end-to-end container build, and also how to replace tools like Docker Compose in getting started with container orchestration. So helping you on your journey towards OpenShift or a Kubernetes platform by using the same kind of YAML descriptor to build and manage pairs of containers working together. A couple of words about ourselves. So I'm a CTO working for IBM. I'm a Red Hat certified architect in infrastructure, and I'm an STSM as well. And I work a lot with clients, but also I volunteer in working with universities and students, helping them get started with containers, container orchestration, Linux or software development practices. To be honest, some of the cooler things that I've learned, I've learned just working with students or with young development teams who are just getting started with this container ecosystem. I actually have two groups of students working with me this year at Trinity in Dublin, and I've given them a challenge. I wanted them to build an application as a modern development team would, and select whatever application they want to build. One team decided to build an application to perform various analytics on COVID statistics. Another team decided to build an application to support musicians or podcasters or vloggers in capturing full audio, connecting it to a Raspberry Pi and performing various things. What did these two teams have in common? Well, they both selected Linux as their operating system and they both decided to containerize their CI/CD pipeline, which was interesting, right? It doesn't matter what kind of application you're building, at some point you're gonna end up using either Linux or containers in your CI/CD pipeline. There seems to be a trend in modern development.
And given that they were building this application and working together as a team, I requested that they start to migrate the tools they were using, refactor, adopt Podman and move towards OpenShift. I've provided them with some credits. I've provided them with access to the developer sandbox for OpenShift. And they've also started to adopt and to migrate towards that platform. And they discovered a couple of very interesting things: that it is very easy to migrate to this new set of tools, but also beneficial. So I'm gonna combine the experience that I've had working with clients, adopting the tools myself, but also working with students or volunteering in the community. And here with me I also have Elif Samedin. She's a Red Hat certified architect in infrastructure, Level II. I think she recently passed her Podman-based exam, the latest one she took. And I'm gonna let her, hopefully she's gonna give us a few words about herself as well. So Elif, the floor is yours. Thank you for the introduction, Mihai. Well, there is very little for me to say now, other than that, thank you for having us. I'm happy to have this presentation with you. Thank you, Diane, for having us as well. And what I would add besides everything is that I'm really passionate, and I hope to be a promoter for adopting Podman and friends. That's awesome. So let's look at the scenario that we're going to work on together. So I'm gonna take a step back and play a less technical role in the presentation today. Elif is gonna do all the heavy lifting and hard work and the hands-on components. I'm gonna talk to you about the end-to-end process of building containers, pushing them towards a registry, and how this would look from an ecosystem perspective. So first we have Podman. What's Podman? Well, I guess it's in the name, right? It starts with pod, and it's a container management tool designed to manage pods, containers and OCI compliant container images.
And it's different from existing container technologies in that, first, you can run it as a regular user, without requiring root or without requiring a daemon. It actually uses the fork-exec model. So if you've ever run a process with an & at the end, which creates a fork, a child process, you'll see that in action: when you create a container with Podman, every container would be a child process, and it uses namespaces and cgroups and SELinux to secure your container and isolate it. But the interesting part is that it also lets you create Kubernetes definitions using podman generate and podman play, and Elif will show us how those work in practice as well. Before you start wondering, hey, is this a new container technology? Well, we need to remember: containers are Linux. It's the ecosystem and the tools that leverage the namespaces and cgroups and other features of the Linux kernel that provide the isolation and provide the user experience, and Podman, like any other tool, is an OCI compliant container implementation. So you can pretty much hot-swap it for existing container technologies that you might have been using. With Podman come a couple of friends, right? One of them is Buildah. It's designed to build container images from the CLI or from a Dockerfile. It's also designed to run rootless, of course, and it can be used to securely build containers in a locked-down environment. And it's a bit different than using a simple Dockerfile to build a container: it's easier to script. So if you know how to write a shell script, you probably know how to use Buildah. Skopeo is another friend of Podman, and Skopeo is a very, very useful command. If you've ever been wondering, hey, I have this container, I built it on my machine, how do I log into another container registry and copy it there, or inspect it, or sign it, or verify the image signature, well, Skopeo is the key.
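The "Buildah builds read like a shell script" point can be sketched in a few lines. This is an illustrative sketch, not from the talk; the image and package choices (ubi-minimal, httpd) are arbitrary assumptions:

```shell
#!/bin/sh
# Build an image step by step with Buildah, no Dockerfile required.
ctr=$(buildah from registry.access.redhat.com/ubi8/ubi-minimal)  # create a working container
buildah run "$ctr" -- microdnf install -y httpd                  # run a build step inside it
buildah config --port 8080 --cmd "/usr/sbin/httpd -DFOREGROUND" "$ctr"
buildah commit "$ctr" my-httpd:latest                            # commit the result as an image
buildah rm "$ctr"                                                # remove the working container
```

Because each step is just a shell command, you can wrap conditionals, loops or secrets handling around the build in ways a plain Dockerfile doesn't allow.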
It can copy images from, well, pretty much any container storage or tar file to any container storage or locally, and you can do various things like inspect the image, verify the image, sign it, verify the signature. It's a very powerful and useful tool. Speaking of container registries, let's talk about some of the sources from where you're going to be able to get container images. Now, I'm guilty of this myself, right? I've created containers that I've decided to use for myself, and I've published them onto a public container registry, and then I've never updated them. I know they're insecure, I don't care, right? I've created them for a single use, I've used them a couple of times, and when I checked my image, it had something like 30,000 downloads. I was like, well, that's not me. I hope whoever downloaded that container image isn't using it in production, and here's where these container sources come in to help. The Red Hat Software Collections Library, or RHSCL, is designed for developers that need the latest and greatest: if your distribution doesn't have the latest tool, you can use this to get it. The Red Hat Container Catalog is slightly different. It's full of certified, curated and verified images with a QA process in place that are upgraded on a regular basis to avoid security vulnerabilities. So if you're looking to build your container images for production, that's one huge lesson learned, right? Just go with this. Make sure that you have a trusted and maintained container image as your source. And Quay is a private or a public container registry that you can use as well. Now, I know that in a lot of cases you're thinking, okay, but what if I'm building my own images? Where do I start? And I would advise you to start with UBI, which is an OCI compliant, secure base image based on RHEL, so you can get UBI in version seven or version eight. It's freely redistributable, and it's a bit more than just a base image.
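The Skopeo operations mentioned here look roughly like the following. A sketch for illustration; `registry.example.com` is a placeholder for your own registry, and the destination names are arbitrary:

```shell
# Inspect an image's metadata without pulling it to local storage
skopeo inspect docker://registry.access.redhat.com/ubi8/ubi:latest

# Copy an image between registries, daemonless and rootless
skopeo copy \
  docker://registry.access.redhat.com/ubi8/ubi:latest \
  docker://registry.example.com/base/ubi8:latest

# Export a locally built image to an OCI archive (tar file)
skopeo copy containers-storage:my-httpd:latest oci-archive:my-httpd.tar
```

The transport prefixes (`docker://`, `containers-storage:`, `oci-archive:`) are what let Skopeo move images between pretty much any storage and any registry.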
It's basically, and I'm gonna show you here, you can get the universal base image, so you can just get a minimal or an init image, for those of you who may be using something like Ansible Molecule and you want to do some testing on your image and you wanna test things like starting a service. Well, that doesn't work in a regular container, but if your container is a UBI init container, it has systemd, and you'll be able to start and stop more than one service on that image. I know, best practices, you're supposed to put one microservice in a container, but that's not always the case. Maybe you're developing and you're building a test framework, and rather than wait 10 minutes for a VM to spin up as part of your CI/CD pipeline while you're testing an Ansible Playbook, you're gonna run it against your UBI init container and you're gonna do it in seconds. Another example would be migrating or translating a legacy application which happens to have multiple services that need to start. Now, UBI and UBI minimal are just what it says, right? It's either a RHEL or a minimal image, which is your foundation on top of which you can get language runtimes: you can get your Ruby or Python, PHP or Perl, whatever language runtime of choice that you want to get started from that step on. And again, all of these images are supported on RHEL and OpenShift. So as you're building your foundation for your images, you know that you'll always have updates and you're gonna be able to start from a secure image from the get-go. And this is one hard-to-learn lesson in practice, right? It's easy to start building with a simple image, but the moment that you start putting these images onto container platforms, you need to look at security, whether or not you're running processes as root, and why should you be running as root? Don't mind the T-shirt, you really shouldn't. It's gonna be very hard for you to refactor once you've accumulated a lot of technical debt.
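Building on a UBI init image to get a systemd-managed service can be sketched as below. This is an illustrative sketch, not from the talk; the Containerfile contents and the `ubi-init-demo` name are assumptions:

```shell
# Containerfile: ubi-init runs systemd as PID 1, so enabled services start on boot
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi-init
RUN dnf install -y httpd && dnf clean all
RUN systemctl enable httpd
EXPOSE 80
EOF

podman build -t ubi-init-demo .
# Podman detects the init command and sets up the systemd environment automatically
podman run -d --name init-demo ubi-init-demo
```

Inside the running container you can then use `systemctl start`/`stop` on more than one service, which is what makes this handy for Molecule-style testing of playbooks.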
So here is where you'll be able to see that the image is secure, what packages are inside it, how it was built, what the Dockerfile is, and what's the health of your image. Another thing we found to be quite interesting when working with development teams is, typically, development and operations, even though everybody's saying they're doing DevOps, tend to be different teams, right? And development teams might use something like Docker Compose or might use some shell scripts to quickly put together an environment that consists of more than one container. You know, take the typical scenario: WordPress and MySQL or MariaDB, two containers, one for your web application, one for your database, some kind of a YAML file to glue it together with Docker Compose, and you're good to go. But that's not really the way you're going to deploy and manage and build your application when it goes to production. You're probably gonna put it on OpenShift or some other Kubernetes platform. So why would you want to maintain two systems to make this work? Well, if you already have a Docker Compose file, you can use podman-compose. It depends on Podman and Python and PyYAML. Get it, use it, it's compatible, and you can run an unmodified Docker Compose file rootless. But if you don't want to maintain these two versions separately, then you can use podman generate and podman play to generate and create Kubernetes definitions. So you're still creating that YAML with one or more containers, but now they'll be useful for you not just in development, but also moving forward as you deploy your application to production. So if you just want to use Podman or Skopeo or any of these other tools, chances are they're already available with your distribution, and I'm gonna encourage you to check those out. And if you have any questions, shout out, we're available on Twitter or other media.
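Both routes described here can be sketched in a few commands. This is a sketch under the assumption that `docker-compose.yml` and `mypod` already exist; the file names are placeholders:

```shell
# Route 1: reuse an unmodified docker-compose.yml, rootless
pip install --user podman-compose    # depends on Podman, Python and PyYAML
podman-compose up -d

# Route 2: generate a Kubernetes YAML from a running pod,
# then replay it with Podman or take the same file toward a cluster
podman generate kube mypod > mypod.yaml
podman play kube mypod.yaml          # recreate the pod locally from the YAML
# oc apply -f mypod.yaml             # or apply the same YAML on OpenShift
```

Route 2 is what avoids maintaining two descriptions of the same environment: the YAML you use in development is the YAML you carry toward production.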
With that being said, I'm gonna hand it over to Elif, who's gonna give us a hands-on demo of some of these tools in action. Thank you, Mihai. I wonder what it would take for me to convince you to give up your root privileges. I mean, the T-shirt. Okay, I'm gonna start sharing my screen now. So as Mihai was saying before, Podman enables us to manage container images, containers, and even pods. What I'm going to do next is to give you some examples or demonstrations, or better said, show why Podman provides us with a more secure approach to running containers. Another aspect I want you to keep in mind is: how are the processes inside the container actually running? What is interesting, or what Podman has brought, is that it made it possible for us to run rootless containers. But what are these actually? We'll explore this in the next two examples. So I have a Fedora workstation. This is Fedora 33. And I'm running everything under my regular user, elif, having the UID and GID 1000. What I'm going to do, and also I have Podman installed, this is version 3.0.1. Let's start a container, and I'm going to take Mihai's advice and start even with a UBI 8 container. For that: podman run. I'm going to start this in detached mode. Just give it a random name, for example, podman01. Let's choose a UBI 8 image, and I want this to keep running. That's good. So my container is up. Let's check that. Keep in mind that I have started this container under my user. The question is: how is the process inside the container actually running? In order to find this out, I'm going to connect to the container. Okay. And as you can already see, I'm connected as root. In other words, the processes are running as root. What would be interesting to explore next would be: how does the host actually see the container? What kind of process does it see? Because Podman creates containers as its direct child processes. And for that, let's just prove it. It's much easier. Okay.
So I started the container as my user. Within the container, I'm root, and the host sees this process under my username, which is elif. Is this secure enough? What do you say, Mihai? Almost. I would say: I'm root. I think we can do better than that. Oh, yes. Of course. Of course, of course. So I'm going to take root out completely next. Okay. Let's start another container. And this time I'm going to start from the same command, just give it another name and start the container with the user sync. Okay. My container is up. Let's confirm it. This is podman02. Let me quickly connect to it. And this time, well, hi, I'm sorry to disappoint you, but this time within the container we have the user sync, UID 5. Well, how does this work? Just maybe one interesting comment here. One of the lessons that we've learned is, as teams move and start to adopt OpenShift or start to adopt an unprivileged container platform in production, they always end up looking back at all of the images they've previously created and going, oh, we can't use any of these, because we've built them in an insecure way, or we've designed them to run only with root or to bind to a privileged port. So what Elif is doing here is really awesome. You're going to learn these lessons and you're going to learn how to properly build images in a secure way from the start, so you don't end up accumulating all of this technical debt. Thank you, Mihai. Okay, I'm going to disconnect now. And what I want to prove next, and I want all of you to keep the following in mind: I have started two containers, in both cases from a UBI-based image, more exactly UBI 8. What is common to these is that I have run Podman as an unprivileged user, meaning my own personal user. In the first case, the processes within the first container were running as root, and the host sees this process running under my username.
In the second case, the process within the container was under the sync user, and the host sees this as a completely different UID. So the second case demonstrates that we are able to run containers as regular users, and the container runs in rootless mode, so non-root within the container. So Mihai can be happy now: I have taken away his root privileges. But that is not all. The second part would be the fact that Podman allows us to work with, guess what, pods, actually. And this would be an alternative to Docker Compose. In order to do that, let's create a pod. Currently I have none, so we're starting from a clean slate. What I'm going to do next is the following: I'm gonna start the pod, I'm gonna expose port 80, and I'm gonna start two containers within this pod, a database container and the WordPress container. Furthermore, we're gonna use this to generate a template to be used further with either a Kubernetes cluster or an OpenShift cluster. So let's give it a start. I'm gonna choose a pretty dummy name: my-pod. Map port: I'm going to map port 8080 of the host to port 80 of the pod. What is interesting is that, doing this port mapping at this point, I will no longer need to do it for the containers. Let's ensure our pod is up, and it is indeed. I have only the previous two containers up, because I did not stop them. Oh, and it is actually three containers: this is the infra container, which is automatically created and started within the pod. And next, let's start a MariaDB container. Okay. The name of the container is, very basically, mariadb. The --pod flag allows us to specify that I want this container to run within the pod I have previously created. I have passed the regular arguments necessary for this container to start and specified the necessary image. So the container should be now running, and it is indeed. And the next one would be the WordPress one. Okay. Same.
My container starts within this pod. What is interesting, and I would like to point out, is the fact that I have specified the WordPress DB host to be localhost. In this manner, the WordPress container will be able to connect to the database one. And this should be up now, and indeed it is. So what would be the first blog post, Mihai? I guess the first blog post is: wait, I don't have to type a lot of YAML to connect two containers together. Well, I'm gonna pass this WordPress to you to give you this joy. And well, I'm not able to write so much YAML without having typos. I'm one of the persons who makes typos almost every time. And this is why I am in love with the feature of generating templates using Podman. And for that, and I hope you'll find this useful as well, I'm gonna generate that: podman generate kube is one of my favorite commands using Podman. Let's say, wordpress, for example. And voila, this is my template generated with Podman. So how do you want this, Mihai? It would be helpful in Git. Commit it to GitHub, and you'll be able to find it. I will, this is my present to you for the end of the demo. Hey, that's excellent. Let's leave the folks here on the call with a link where they can find the presentation if they want to get started or they want to continue this. So that's it, demo done, Elif. Awesome. Mihai, Elif, that was excellent. And glad to see that's up on GitHub. Of course, the presentations, the videos will also be put up on YouTube, on the OpenShift Commons. We do have a couple of minutes. If anybody has any questions, please post them in chat, post them in the Q&A. Mihai, what's behind you? I see like sound deadening. Is that like sound foam and things like that and just some cool lighting? Yeah, yeah, actually, it's theme lighting, so... You get to play with that. I can play with that. So it depends on the role: I'm working in IBM, but I'm responsible for our Red Hat solutions. So if we have an IBM theme call, I make it blue.
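For reference, the YAML that `podman generate kube` produces for a pod like the one in this demo looks roughly as follows. This is a trimmed, illustrative reconstruction, not the exact output from the demo; the password value is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: mariadb
    image: docker.io/library/mariadb:latest
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: changeme              # placeholder value
  - name: wordpress
    image: docker.io/library/wordpress:latest
    env:
    - name: WORDPRESS_DB_HOST
      value: 127.0.0.1             # containers in a pod share localhost
    ports:
    - containerPort: 80
      hostPort: 8080
```

The same file can be replayed locally with `podman play kube` or applied on a Kubernetes or OpenShift cluster, which is the whole point of generating it.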
If we have a Red Hat call, I make it red. This is now just a dual zone. There we go. Not as cool as your helmet, though. I love that. Well, thank you. Yeah, when the red helmet became available, it was definitely something I had to grab. So it's my son's, but I told him when he's not using it, it's a good backdrop, especially on Star Wars Day. Any questions from the chat? I do see one question about secure by default: how can we ensure UBI images are secure? Because, well, Red Hat only patches known vulnerabilities. That's true. Usually that's how it works, right? You're not going to get any kind of vulnerability scanner that has the ability to identify those unknown vulnerabilities. We would advise customers to have some kind of security team or penetration team to actually poke at the images. But the best advice I can give you here is to minimize your attack surface. I see a lot of container images that are built with too many packages inside them, with too many components to update, but also with too many open ports, or even worse, privileged ports, or greater permissions than they should have, or running them as root. Try to get rid of that stuff in your container build process. Adopt a GitOps approach to your build files and then trigger a pipeline to build your images, and include some kind of security scanning as part of that pipeline. You're not going to catch everything, but if something happens, you're going to have clear traceability. Yeah, it's so important. As many people are familiar with the shared responsibility model when it comes to security, one of the security vendors in the cloud space said we also have a shared irresponsibility model. So as you said, if you can be conscientious about what you're doing, not pulling in the other things... boy, surface area, a big thing. As we all push out to the edge, we get an even greater attack surface area, but that's a different conversation. Awesome. Cool. I do want to thank Elif as well.
She's somewhere on the call. She couldn't get on the moderator chat, not quite sure why, but she's done all the heavy lifting on this one. So I've just done the speaking bits. So thanks, Elif. Yeah, thank you, Elif. It was great to see that too. Okay. Well, hey, if there aren't any more questions, we'll leave the chat and the Q&A open, and I'm going to be heading on over. Submariner is the last one for track two. Awesome. Sounds good. All right, thank you. Thanks, Stu. Cheers. Awesome. Everybody's starting to trickle in here to the Submariner session, which is the final segment of track two. Martijn, thank you for joining us. Yep, you're on mute right now. Excuse me. Thank you. No, it's all good, and I will have you go back on mute when the video starts there. But the chat and the Q&A, thank you for joining us. I'm definitely looking forward to this one. I don't know if you caught the opening main stage session, but Rob Szumski was talking about Submariner a little bit in the demo that he did. No, I couldn't attend. Well, the good news is, it will all be available on demand after. So if you want to catch that, you can definitely do that. Stephen, thank you for joining us, though. Hi. And as I was just saying to Martijn, when we start the video, all three of us will go on mute so that we can do the chat and the Q&A there. Excellent. Are either of you doing anything to celebrate Star Wars Day today? No, I do have a question about... Yes? You have a chat and you have Q&A? Yes. What do I need to focus on? So I'll put a little red dot if somebody asks in the Q&A there. For the most part, there tends to be more activity in chat, but the Q&A is where, if somebody has a specific question, they'll put it there. Okay, thank you. I'll prompt you in the chat if I don't see an answer in there, but good stuff. All right. Looks like we're starting to get... Yeah.
Yeah, I'm outside of the Boston area, so when we started this, it was pretty gloomy out. It's still a little bit dark, but the sun's up, but it's cloudy. I do miss the... KubeCon Europe was a lot of fun. I did the Barcelona one back in 2019, of course. Oh, yeah, me too. That was a great session. Yeah, great city, though. Yeah. Yes, it's now been what, 14, 15 months since I've been on a plane, so... Yeah. They said, you don't miss the hotels, you don't miss flying on the planes, but I do miss the people, I miss the culture, I miss the food. Yeah. But yeah, excellent. We're scheduled, still seeing some people coming in, so we'll wait until the top of the hour and I'll kick the video off. All right. We'll all go on mute here. Martijn, we'll have you go on mute too, and then we'll kick the video off here in a few seconds. Thank you. Hello. Welcome to this talk about ODC-Noord's journey to connecting OpenShift clusters securely and transparently with Submariner. I'm Stephen Kitt, I'm a software engineer at Red Hat, and with me is Martijn. Yeah, hi. I'm Martijn Struyman. I'm working as a cloud engineer for the Dutch government. Yeah, and so we'll start with an overview of what Submariner is, so that you can understand exactly what we're talking about. Submariner is a project that is designed to connect multiple Kubernetes clusters at the networking level. It does so by exposing new custom resources that are stored in the standard Kubernetes data stores, and I'll go into that in more detail in a little while. It's an open source, vendor neutral project that was started by Rancher and is now maintained by Red Hat with help from contributors from a number of different companies. And one of our big goals is to make it easy to set up and use. And part of this is we make it available for deployment using an operator, or Helm charts, or our own tool, which is called subctl. So some common use cases for Submariner are application availability.
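The subctl flow for wiring clusters together looks roughly like this. An illustrative sketch: the kubeconfig file names and cluster IDs are placeholders, and exact flags vary between subctl versions:

```shell
# On the cluster chosen as the broker
subctl deploy-broker --kubeconfig broker-kubeconfig

# Join each participating cluster to the broker, using the
# broker-info.subm file produced by the previous step
subctl join broker-info.subm --kubeconfig cluster-a-kubeconfig --clusterid cluster-a
subctl join broker-info.subm --kubeconfig cluster-b-kubeconfig --clusterid cluster-b

# Run the connectivity verification suite between two joined clusters
# (flag syntax varies by subctl version)
subctl verify cluster-a-kubeconfig cluster-b-kubeconfig --only connectivity
```

The broker itself can be a dedicated cluster or one of the joined clusters; it only stores the shared state the other clusters read.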
So this is where you have an application that's available inside one cluster and you would like to use it in another Kubernetes cluster, but you can't easily deploy it in the other cluster and you don't necessarily want to make it available using publicly accessible endpoints. Submariner can help you with that. Another use case is disaster recovery, where you want to basically ensure that your data is replicated in a number of different geographical areas, for example, so that you can recover if you lose one of your clusters. And another big use case is data residency guidelines. So you might have workloads that are available in a specific area, or data that can be processed only in a specific area, or you want to be able to use, for example, cheaper CPU time in one availability zone, but your data has to remain resident in a specific geographical area for legal reasons. So Submariner can help you with that too. So what Submariner does is help you with the networking there, and to understand that we need to go over briefly what Kubernetes does. Kubernetes has a very cluster-centric view of the world by default. The cluster boundary is a hard boundary, and so you can see here a fairly typical view of the network stack in Kubernetes. You have a number of pods inside your cluster. Each pod is connected using pod IP networking, so each pod has an IP address and can communicate with other pods in the same cluster on a private network. Each pod also benefits from service discovery and load balancing. Workloads can be made available as services, and pods can address those by name and have load balancing among multiple instances of the same service. And finally, you also have network policy, which allows you to define rules to say which pods have access to which parts of the network. And Submariner extends this across multiple clusters, so it doesn't touch the pods model itself.
Pods are still private to each cluster, but it extends the networking data plane across all clusters so that every single pod can have access to any other pod in any other connected cluster. And finally, it also extends service discovery and load balancing, so services can be made available from one cluster to another and load balanced within each cluster. So typically, for example, a cluster would connect to any remote cluster which makes the service available, unless the service is also available in the local cluster, in which case it would prefer the local cluster. And it also extends network policy: because the networking layer preserves source IP addresses, you can have rules which take into account the provenance of traffic. And it does all this, as you can see with the padlock, in a secure fashion, using IPsec tunnels by default, and it also supports WireGuard. So all the traffic is encrypted between the clusters. What are the benefits then? So as we've seen: direct east-west pod-to-pod and pod-to-service routing across clusters. It works with any Kubernetes cluster and it can work with a number of different CNIs. So we test with OVN, for example, which is the default now in OpenShift, also Weave; there's some work that's been done with Calico and a number of other CNIs, and we'll give links later to the documentation you can look at that describes all these. Services can be deployed across clusters. So this is the basic level, just pod-to-service IP connectivity, but also, as I mentioned, service discovery and network policy. And it provides all this in an encrypted fashion, as I mentioned. Next slide, please. Here's the diagram of the high level architecture of Submariner. So the top section is what we call the broker, which is one of the clusters, that can be a dedicated cluster or just one of the joined clusters, which is used to store and share all the data that's required across all the clusters.
And each cluster that you want to integrate into this larger set of clusters using Submariner is connected to this broker and uses it to obtain information about all the other clusters. Inside each cluster, each node gets a route agent. This is what takes care of traffic that has to go to another cluster and makes sure it gets to the right place. And at least one node in each cluster is designated as a gateway, and that hosts the gateway engine, which takes care of the tunnels to all the other clusters. So traffic gets routed from each node to the gateway and then across to the target cluster, and back again. Next slide please. Now, connectivity with Submariner. There's no impact on intra-cluster traffic. This is very important: all the traffic that stays within the cluster uses the standard network setup, whichever that is inside your cluster. So there's no impact from using Submariner on local network performance. Only traffic that has to go to a remote cluster goes through one of the gateway nodes. And as I mentioned, source IP is preserved, which means that traffic can go back the other way, of course, and you can also specify network policies that take the source IP into account. And by default we encrypt cross-cluster traffic. Another big feature in Submariner is that we can handle clusters that have overlapping CIDRs. So if you set up multiple clusters without necessarily envisioning that you were going to want to connect them at some point, and so they have conflicting IP addresses, Submariner can add its own addressing overlay to provide unique addresses across all the clusters. This is called Globalnet. Next slide please. Service discovery. This is the layer on top of all the service IP connectivity which provides service discovery across clusters.
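For orientation, deploying the broker-plus-gateways setup described above typically looks like the following subctl sketch (cluster IDs and the broker-info file path follow subctl's defaults; this needs live clusters and is shown for illustration only). The `--globalnet` flag enables the overlay addressing just described.

```shell
# Illustrative only; requires live Kubernetes clusters and the subctl CLI.
# 1. On the broker cluster, deploy the broker (with Globalnet support):
subctl deploy-broker --globalnet

# 2. Join each participating cluster to the broker, using the
#    broker-info.subm file produced by the previous step:
subctl join broker-info.subm --clusterid cluster-a
subctl join broker-info.subm --clusterid cluster-b
```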
And this is actually an implementation of a spec that's not defined inside Submariner itself, because it's part of a larger effort in the Kubernetes multi-cluster services special interest group. It's called the Multi-Cluster Services API. That defines a standard, or shared rather, set of terminology and some shared features. The first term is cluster set, and this is a group of clusters that are all connected to each other and that share services. This introduces an important concept in the multi-cluster services space, which is that all namespaces with the same name are considered to be the same across the clusters. This is called namespace sameness. Then the Multi-Cluster Services API defines two CRDs that are used to share services. The first one is the ServiceExport CRD. This is an administrator-controlled CRD which is used to specify services that should be exposed across all clusters. Services aren't exposed by default; to export one, you create a ServiceExport, and this makes the service available across the whole cluster set, under a subdomain ending in .svc.clusterset.local, so across the cluster set it follows a similar pattern to services inside a single Kubernetes cluster. The other CRD is the ServiceImport, which is the in-cluster representation of a multi-cluster service, but that's really just an implementation detail in practice. Next slide please. So how is this done in Submariner? Service discovery uses a component that we call Lighthouse. This builds upon the existing Submariner architecture with the broker and so on, and it adds a number of components. The general idea is that when a pod or a service requests a cluster-set-wide service, using the .svc.clusterset.local name, that gets resolved not by CoreDNS, or whichever the main DNS provider inside the cluster is, because it isn't aware of all of this.
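As a sketch of the export step (the `nginx` service and `demo` namespace are hypothetical names), a service can be exported either with the subctl helper or by creating the ServiceExport object directly:

```shell
# Option 1 (commented out; requires a live cluster and subctl):
#   subctl export service --namespace demo nginx

# Option 2: write the equivalent ServiceExport manifest by hand.
cat > serviceexport.yaml <<'EOF'
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx        # must match the name of the Service being exported
  namespace: demo
EOF
# Then, against a live cluster:
#   kubectl apply -f serviceexport.yaml
# The service becomes resolvable as nginx.demo.svc.clusterset.local.
```

Note that the ServiceExport carries no spec of its own; its name and namespace identify the Service to expose, which is why namespace sameness matters.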
Instead, the DNS resolver is configured to forward these requests to Lighthouse's own DNS server. That DNS server is aware of ServiceImports, so it knows about services that are available in other clusters and that have been exported, and it will use the information from the remote clusters and from the broker to resolve those services to an IP address that Submariner can then take care of, so traffic can get to the other cluster. So that was a brief overview of what Submariner does and some of how it goes about it. But the purpose of this talk was to go over our journey with ODC-Noord, and Tom will present what ODC-Noord is in just a couple of minutes. Before I conclude my part of this talk, though, I wanted to give some of my impressions from working with Tom and his team. The purpose of the proof of concept with ODC-Noord and Submariner was, from our perspective, to get early feedback on features and usability. When we work in engineering groups we don't necessarily have much direct feedback from end users, so this was an opportunity to get that. It led to a big initial drive on ease and speed of installation, and on the importance of being able to upgrade painlessly, because when we were working with Tom and his team they were very willing to test new things and iterate frequently, but that meant we had to make it easy for them to do so. Obviously we didn't want to leave them in a situation where they'd upgraded to a bad release and ended up stuck there, or where we'd end up spending lots of time helping them to fix things.
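A quick way to see that resolution path in action (the service and namespace names here are assumptions): query the cluster-set name from inside a pod. The cluster's DNS forwards the clusterset.local zone to the Lighthouse DNS server, which answers from the ServiceImports it knows about.

```shell
# Illustrative only; requires a live cluster with Submariner's Lighthouse.
# From a pod in any joined cluster, resolve the exported service:
dig +short nginx.demo.svc.clusterset.local
# The cluster DNS (e.g. CoreDNS) forwards the clusterset.local zone to
# Lighthouse's DNS server, which returns the service IP of a cluster
# where the exported service is actually running.
```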
And one of the big gains from our perspective is that we were able to get a good idea of which features were actually useful for customers, and ODC-Noord provided us with a number of feature requests that we hadn't necessarily thought about and that were demonstrably useful for end users. The first of these is unencrypted connections, which was actually somewhat surprising: as I mentioned, cross-cluster traffic is secured by default, and from our perspective this was a big feature in Submariner, but it turns out that it's not desirable in all cases. In some cases end users have private connections between Kubernetes clusters, and there's no point in adding extra layers of security on top of them; it just reduces the network performance. So this was a big request from ODC-Noord; it's taken a while to implement, but it's actually landing in the next release of Submariner. Another one that went with this is high bandwidth. ODC-Noord has a high-performance physical network, and the desire was to be able to use as much of it as possible in Submariner. Our investigations around the high-bandwidth issues also led us to implement a benchmarking tool: we had to run benchmarks, and the ODC-Noord engineers knew how to do that as well, but repeating that all the time was somewhat painful, so we integrated all of it into a benchmarking tool at their request. In addition to that, network policies were an important feature for ODC-Noord's own customers. Another big takeaway I had from working on the proof of concept was that knowledge is power. We benefited greatly from having people on the team with extensive expertise in all the layers of the stack that ended up being used at ODC-Noord, in particular the OpenStack networking layer. If we had just had OpenShift or generic network experts on the team, we might have had trouble helping ODC-Noord with some of the problems that they ran
into, because they had a fairly complicated OpenStack setup. But that's enough from me, and I'll hand over to Tom now; he'll give you lots more information about ODC-Noord. Okay, well, thank you, Stephen. Let me introduce myself first. I'm Tom Staplan, a cloud engineer for the Dutch government, at ODC-Noord. Noord stands for north: we are located mainly in the north of the Netherlands, and we are one of the four Dutch government data center agencies. In the past we had many data centers inside the Dutch government, but we decided, I think eight years ago, to consolidate to four data center agencies. At the moment we have two data centers running in the north, the two red dots you see in the picture on the map of the Netherlands, and a third one becomes available this year. So, a little bit of perspective and history from us as Red Hat customers: we are former CoreOS users. In 2017 we started our container journey with Kubernetes, because we saw that inside the Dutch government many agencies were moving towards microservices and wanted to use container platforms like Kubernetes. So we thought, hey, that's an opportunity to create some kind of managed service for our government agencies, and until now we have been pretty successful at it. Since Red Hat took over CoreOS we moved to OpenShift, so I think since January 2019 we have been in production with OpenShift 3.11. There are still some 3.11 clusters left, but the rest is all OpenShift 4.6 now. I think we have more than 3,000 cores running on multiple clusters, but we are ready for the next step, and the next step is to make these clusters more resilient to failures, so our customers can make their workloads more robust for the Dutch public. So we were wondering and looking at how we can make these OpenShift clusters more reliable and more highly available. We did a lot of reading and proofs of concept, and we reached out to Red Hat
Netherlands and said: please help us, how can we do this? They came up with the Submariner project. We did see it when it was in the hands of Rancher, and we left it at first, but then Red Hat took over and we got the opportunity to work closely together. So we're almost ready for the next step, and what exactly do we expect? Let's say we install OpenShift clusters in all three of these data centers; that's what we want to achieve this year. We want to be able to connect clusters so we, or our customers, can sync data. We want to have pod connectivity, as I will explain on my next slide. We already have a highly available network with high bandwidth between the data centers, so we were able to communicate between these clusters in the different data centers, but we were not able to do pod-to-pod connectivity. We had a fairly static setup, which was not really helpful, and that's why we want pod-to-pod connectivity. We want it to be secure, we want to leverage the available bandwidth, it must not be complex to configure, and it has to be cloud and network agnostic. As Stephen mentioned, we are running on OpenStack, but maybe in the future we will use something else, so it also has to work on that. It must be open source, because we believe in open source; everything we do in our company is open source. And the last two bullets: load-balance traffic across multiple clusters, and do dynamic health checks. Well, we thought maybe that's a little bit too much to do in one year, or let's say one and a half years, so let's focus first on the east-west scenario, and that's exactly what we can do with Submariner. So we have been working closely together with them since June 2020, I think it was version 0.4, and, as Stephen explained, we've been testing new features and new releases, we could test upgrades, and we had regular feedback sessions which were very helpful. We learned a lot about our
performance: how much performance can we get out of the network and out of the clusters? That also led to several OpenStack optimizations which were really, really helpful and which we hadn't figured out ourselves yet, so our entire cloud benefited from these OpenStack optimizations, including the non-OpenShift parts of our cloud. This was a really cool benefit. And we made our cloud available to the Submariner team, who could use it to investigate issues or to make things better. Just as important, it is a very cool and professional team to work with, so if you ever get this opportunity, grab it; it can be really, really helpful for your own use case. Okay, something about our infrastructure. So yes, we're Red Hat customers and we're using OpenStack; at the moment we are on Train, running more than 6,000 VMs, and installing OpenShift on top of OpenStack. We're also using the nice OpenStack APIs: standard block storage, with an encrypted variant as well, Octavia for service load balancing, Manila for ReadWriteMany shared storage, and as a roadmap item we're probably going to look at the Ironic bare-metal integration next year. Ceph is our storage layer, approximately 20 petabytes now, and we have S3-compatible object storage implemented in most regions. The red dots, the data centers in the north, are at the moment connected with dark fiber, dedicated fiber links with a lot of bandwidth, and this network has been implemented as a layer 2 network, so it was a really good fit for Submariner. Now some high-level network architecture. At the moment we have two DCs running active-active, and the third one is in the making, probably operational by the end of this year. What we want to do is connect, let's say, three clusters together so we can build a very reliable infrastructure. A few years ago we had a power outage in DC1 and everything was gone; that's not good for our
business, so we want to make things more robust. About these DC links I've drawn here: the DC link between cluster A and B is the connection Stephen mentioned. We control it ourselves and we have already encrypted it with our network devices, so we didn't have a use case there for an extra encryption layer on top of it. That's the requirement: we wanted to be able to turn it off. But for the other connections, from A to C and maybe from B to C, we do not control the network path, so there we can enable the extra IPsec. And maybe in the future we, as a government, are going to move public workloads to the public cloud; there is maybe also a use case for it there, I don't know. So I want to conclude my talk with a small demo. It's a CockroachDB demo on two OpenShift clusters, on DC1 and DC2, as I will show you in a minute. I've installed CockroachDB on both clusters, with nodes from each cluster forming one database cluster, and I will try to create a database on one end; well, it's going to be visible on the other end. So first I will show you that we have multiple clusters running. You see here our first cluster, gn2, that's DC1, and here our second cluster, gn3, DC2, and inside these clusters we have installed Submariner. I will show you the gateway, which is a pod running on a certain node; we do not have dedicated nodes for it yet, but we are planning to. So if I just use the OpenShift CLI to look for the gateway and see how many gateway instances we have (I have to look up the right namespace), you see two gateway instances running, and one of these is the leader, I think it's this one. It's a nice feature: you can look up in this gateway what the connection is between the gateways on both clusters. Oh, sorry, you see here, this is DC2, and you see we are connected to DC3, and you have all kinds of information: we can see the latency, the average,
the last, max, and minimum values, and what the status is: it's connected, this is the active one and this is the passive one, all kinds of information the backend delivers, which is useful for checking whether the gateway is alive. What's also nice, I will switch tabs now, is the ability to look at some metrics. Submariner exposes Prometheus metrics, so you can plot this connection latency in seconds in a nice graph, and you could of course use the Prometheus workload monitoring that is inside OpenShift for monitoring and alerting on it. That's a really nice feature they've added; really cool. Now back to the demo, for some proof that this is really working. We've installed two CockroachDB instances: this one is running on DC2, and, switching over, this one, I will look it up, is on DC3. You see the pods are running; they've gone through the container-creating phase. If I hop over to the CockroachDB UI, you see it's connected; it says nine nodes, three pods on each side. You can see these nodes are connected on gn2, DC1, and on gn3, and you also see this on the other side, on gn3. So this proves the whole CockroachDB cluster is able to communicate with itself. You also have all kinds of nice latency maps, useful to see how fast the communication between these nodes is; we haven't looked at or tweaked that in detail, but we'll do that later. So yeah, this is a pretty cool thing to show; I think it's working. Just to prove the pod-to-pod networking, let's create a database and see what kind of databases are in this instance. We have a demo database; I should have deleted it, so I will delete it now and re-create it. I will exec into a CockroachDB pod, there it is, and I'm connecting to the CockroachDB instance. Oh sorry, now I'm connected to the database instance. Now, let's say, I will create a database. Create... great, there it is. I
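What the demo inspects by hand in the gateway pod can also be queried with the subctl CLI (a sketch; it needs live Submariner clusters and your kubeconfig):

```shell
# Illustrative only; requires live Submariner clusters.
# List the gateway nodes and whether each is active or passive:
subctl show gateways

# Show the tunnels to the other clusters, including status and latency:
subctl show connections
```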
don't know if you saw, but over on the Q&A there was a question sitting there: which native platforms does Submariner currently support? Yeah, I'm addressing that just now. Awesome, you're welcome to come back on video and say it too, because they're probably still watching. Yeah, that's right. So in terms of platforms: if it's about hardware architectures, we currently build only for amd64 on Linux; the client is available on a number of different platforms including macOS and Windows. If it's about CNIs, we support the default OpenShift CNI, which is OVN now, and we also support Calico to some extent, although there are some issues there, and a few others; that's a rough list off the top of my head. Okay, yeah, I remember when containers came out, it was like, oh yeah, it's all going to be easy, and I said, you know, we as an industry spent a decade trying to get the storage and networking issues fixed in virtualization. So we've shrunk the time frame a little bit, but storage and networking have definitely been challenges to deal with in the container world as well. All right, I'll keep this open; we've got a couple of minutes left here, in case anybody has any other questions or things in the chat. Otherwise, we are at the end of track two. You can catch the end of the main stage if you want to go back there. As a reminder, all of these presentations will be available online soon after, and if you haven't already, go sign up to join the OpenShift Commons. So, appreciate everybody joining, thank you, and may the fourth be with you.