Good morning everybody. Am I audible for everyone? Better like this? Okay. Well, we have seats up in the front, so anybody coming in, you're welcome up front.

We all heard the great conversation with Clayton and Mike and the other Red Hatters about the magnificent future ahead of us with OpenShift 4. Maybe you went upstairs and, rather than have a conversation with someone, just looked outside and dreamed about the day that all of this is in production and how good life would be at that point in time. Well, enjoy the moment, because now we're going back to OpenShift 3, actually running in production.

So who am I? My name is Thijs Abbas. I'm an enterprise architect at ING, working in the infrastructure domain, and my main focus is what I call ING's cloud native journey: the hosting part, Kubernetes and OpenShift, data services, in particular object and file storage, and also all the risk and security topics touched by this innovation.

I'm always a bit interested in my audience, so could I see some hands: who of you has heard about ING before? Well, that's quite a brand name. I would have expected less, after we've been away from the US for a while. For those of you who don't know ING: ING is a bank. We started out in Europe, and basically we invented branchless banking. The ING Direct brand started in 1997. We were voted World's Best Bank in 2017, and we are trusted by about 38 million customers.

ING is transforming itself, and that transformation would only be possible with a CEO who understands that the future of the bank depends on IT, and luckily we are in that position. We also have a CIO who can translate that into what he calls the concept of one. For the workforce, that concept of one boils down to ING's one way of working. What's relevant for this presentation is the shift of responsibility: in ING, the DevOps teams are fully responsible for the stack, which means they consume their infrastructure, which in turn means the infrastructure needs to be fully self-service.

Dealing with a paradigm shift. Can I see some hands again: who has been in this situation? Dealing with a paradigm shift is hard, especially going from traditional IT to cloud native. People think they understand it, but they don't. You really need to make an effort to explain what is changing and how this world is different. What helps in such a conversation is having a model. By definition it's never correct, but it's probably good enough for the conversation. So what we have is an actual model, which I use in my conversations. I can put it on the table and twist it to every side of the discussion I want to have. It has six views. It's a model. It's intended for twelve-factor apps, and it's fit for a regular enterprise. Maybe it fits your environment as well; I just did not test it. The purpose is clear. And don't confuse it with the CNCF cloud native landscape, which was shown earlier today.

So, the really short explanation. The first side of the cube is the DevOps team. The DevOps team is responsible for the code, for the base image, but also for the deployment configs within Kubernetes and OpenShift, and for the configuration of the network services around the Kubernetes clusters.

The second side is about data. You do not earn money with code. You do not earn money with hosting. You earn money by transforming data; that's where the value is. So that data lives in data services. Examples are a relational database, an object store and, very importantly, topics. In this architecture, the topic is the main interface towards the Kubernetes cluster, and on those topics you have things like reporting, security eventing, et cetera, et cetera.
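As a hedged aside, not from the talk itself: here is a minimal sketch of what publishing to such a topic could look like, assuming Kafka-style topics and the kafka-python client. The broker address, topic name and event fields are illustrative assumptions, not ING's actual setup.

```python
# Minimal sketch: a workload publishes a security event to a topic, the
# main interface out of the cluster in this model. Broker, topic name and
# event schema are illustrative assumptions, not ING's actual setup.
import json
from datetime import datetime, timezone

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="kafka.example.internal:9092",  # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "namespace": "team-a-payments",  # hypothetical team namespace
    "kind": "security-event",
    "detail": "failed login threshold exceeded",
}

# Reporting and security eventing flow over topics rather than through
# direct connections into the cluster.
producer.send("security-events", value=event)
producer.flush()
```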
Data needs to be secured, so the third side is about security services. There's basically a division in those services. One half you know from traditional IT, and you use it in a similar fashion in cloud native. The other half, security event monitoring, vulnerability scanning, technical state compliance, is still present, but the interface has changed, and that has caused a lot of discussions.

Of course, you need a hosting platform, OpenShift, and that hosting platform in our architecture serves namespaces as a service. We do not give entire Kubernetes clusters to our DevOps teams. If you want to populate that Kubernetes cluster, you need a pipeline, especially if you want to run that cluster immutable. And that pipeline needs to have scanning engines, because in the pipeline you scan for vulnerabilities, and then you enforce immutability in the platform. Last but not least, if you have multiple clusters, you need to glue them together. So you need things like firewalling, DMZs, load balancing and service meshes to glue everything together. It's not just about Kubernetes.

You need a clear demarcation between the provider and the consumer. First, that's the namespace as a service. The DevOps team is autonomous within its own namespace, or namespaces. Workloads are immutable, stateless and short-living. Data is persisted externally in data services. Security controls are shifted left into the pipeline, and the production cluster is hands-off. Hence, we automate everything. There's a full explanation in the reserve slides, which you can download after this presentation.

So, is this just a model? The answer is no. The container platform, the relational database platform, the event bus, the security services, the CI/CD platform and the network platforms are all live and in production. However, there are always improvement opportunities; it's constantly in motion. The object storage platform we're running as a beta right now; it will be in production next year.

For the rest of the presentation, I'll focus on our actual hosting platform. Currently we're running OpenShift 3.11 OKD, and we're migrating towards 3.11 Enterprise. We call it ICHP, the ING Container Hosting Platform. So what is ICHP? It's a container management framework, part of IPC, the ING Private Cloud, and it's designed to host all of ING's twelve-factor cloud-native applications. We bring self-service to our DevOps teams, to our engineers. We deliver a service which is compliant with the latest insights from risk and compliance; we are a bank, after all. And of course, our users need to be happy.

So what does the deployment look like? What's important if you run your own Kubernetes or OpenShift environment is that you understand what the actual physical layout is, and think through what happens if a data center or an availability zone fails. This is the minimum footprint we came up with, where we can survive a single-node outage without customers noticing it, and we can survive an availability zone outage without the cluster dying on us. We can even survive a data center dying; people will notice, but the service should continue. We can deploy this on bare metal, on VMs, or even in public cloud. Currently we are running on bare metal only for production clusters.
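To make that failure-domain thinking concrete: a minimal sketch, using the official kubernetes Python client, of a Deployment whose replicas are forced onto different availability zones through required pod anti-affinity, so a single node or zone outage leaves replicas running elsewhere. All names, labels, the image and the zone label are illustrative assumptions, not ING's actual configuration.

```python
# Minimal sketch: a 3-replica Deployment spread across availability zones
# via required pod anti-affinity, so one node or zone failing does not
# take the service down. Names, labels and image are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

labels = {"app": "demo-frontend"}  # hypothetical app label

anti_affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(match_labels=labels),
                # Zone label as used on Kubernetes 1.11-era clusters:
                topology_key="failure-domain.beta.kubernetes.io/zone",
            )
        ]
    )
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-frontend"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                affinity=anti_affinity,
                containers=[
                    client.V1Container(
                        name="frontend",
                        image="registry.example.internal/base/app:1.0",
                    )
                ],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="team-a-payments", body=deployment
)
```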
So what kind of patterns do we offer to our customers? The default one is the multi-tenant one. It's a shared environment, it's available on request, and you get platform compliance evidence as part of the service.

We have local deployments. Local deployments are a deviation from the group strategy, hence the group CIO needs to approve them. Valid reasons could be latency or local regulatory requirements. It's built to order, so delivery can take months; there's no instant gratification in this world.

We are working on additional isolation within the multi-tenant environment, basically dedicated nodes. And there is single-tenant, which is only possible after CIO approval; it's a cost discussion. A DevOps team can have a dedicated cluster if they want an isolated failure domain or performance isolation. But again, it's built to order, not instant gratification.

So what can we offer today? Any DevOps team can request a project via a self-service portal, which gives you: a Kubernetes namespace, with the requested amount of CPU and memory in both the primary and the secondary data center; a dedicated software-defined network attached to the namespace per cluster; a dedicated egress IP attached to the namespace; a secret to connect to the deployment pipeline; and a registration of the project in the CMDB. Note that we do not register container instances, just the project. Of course, you can delete your project once it's empty. You can request firewall changes to open traffic to and from outside the Kubernetes cluster, and you can resize the amount of CPU and memory allocated to you. Optionally, you can have a dedicated Prometheus instance per project for your application events. And we're working on these three things, project requests via API calls being one of them. Do you want to know more about that? Talk to my colleague in the audience.
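As a hedged illustration of what such a project request could automate behind the scenes: a minimal sketch with the official kubernetes Python client that creates a namespace and attaches a CPU and memory ResourceQuota. Names and sizes are assumptions for illustration; ING's actual portal and API are not shown in the talk.

```python
# Minimal sketch of a self-service "project request": create a namespace
# and attach a CPU/memory ResourceQuota. A real service would repeat this
# in the secondary data center's cluster and also wire up the SDN, egress
# IP, pipeline secret and CMDB registration. Names/sizes are illustrative.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

project = "team-a-payments"  # hypothetical project/namespace name

core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name=project))
)

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="project-quota", namespace=project),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "4",
            "requests.memory": "8Gi",
            "limits.cpu": "8",
            "limits.memory": "16Gi",
        }
    ),
)
core.create_namespaced_resource_quota(namespace=project, body=quota)
```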
So what are we going to be hosting? In Asia, as of November last year, we went live with our fully digital, mobile-only bank in the Philippines. The front end of that bank is entirely hosted on ING's container hosting platform. If you're in the Philippines and happen to watch this stream, or afterwards, give it a try; you literally don't need to move from your couch to get an account. In Europe, we have multiple application landscapes, and they started onboarding as of this spring. Last month, project DAIR went live. Those of you who follow the likes of the Financial Times or Bloomberg might have heard about the ING-AXA global partnership; DAIR is the technical implementation of that. The prognosis is that the majority of ING's APIs will eventually be hosted on this platform. We host services like ING's digital channels, fraud detection, data analytics, et cetera, et cetera.

So, a bit of history. About two years ago, there was the realization that we needed to do something with this thing called Docker, and containers, and container frameworks. So we brought together a coalition of the willing. We called it the Docker workgroup: infrastructure specialists, developers, security specialists, architects. Those people looked at a number of questions. One of the verdicts they came back with was that there would be no value in rehosting VMs on a different technology platform; if we were to build a container platform, it needed to be intended for twelve-factor apps. The second was that there was no value in a multitude of base images; one base image should suffice for an enterprise. And the last one, but not the least: they advised running Kubernetes/OpenShift as that platform.

That advice was accepted by our management, and it was made a potential standard. And then we found out that a coalition of the willing, people who have a different day job, is not really the best organizational form to build a platform. So we composed a Tiger team, and the Tiger team started building the actual platform. We had the resources allocated around January 2018, and in about six months they built the platform for Asia. So in June of last year, we handed it over to the colleagues in Manila, and they started deploying and testing their applications. As of November last year, they went live. After Asia was live, we had time to focus on Europe. As of spring this year, we went live in Europe, and the primary customer in Europe was the DAIR project. Apart from these two production deployments, we have about 100 DevOps teams experimenting in our non-production clusters. Those are in various stages of maturity; some will deploy very soon, others might never deploy.

Looking at technology, we started out with Origin, OKD. As of last month, we decided to go over to OpenShift Enterprise; I think we now have one cluster running OpenShift Enterprise. Also as of last month, it became a mandatory standard within ING. ING being ING, we still have other container management frameworks. For those of you who went to KubeCon in Barcelona, there was a team from ING Wholesale Banking telling their story. They needed to, because this platform wasn't ready yet, so they made their own solution. All those lines eventually end up on this platform, and we will need to think about how we migrate them to the target state; we don't know how long that will take. In the future there is, of course, probably Red Hat OpenShift 4. We still need to make a lot of engineering effort to integrate it into our environment, and since we are running air-gapped bare-metal systems, we couldn't really start until last month.

So, we have a container squad. What did they spend their time on? Over those two years, we had 18 FTE, located in Poland, Germany and the Netherlands, one FTE architecture (me being half of that), and one FTE product owner. The ambition of this squad is to grow the footprint of the container landscape without growing the container squad, by automating everything. We expect exponential growth in the number of hosted containers while the team remains the same size. And if you look at where we spent our time, it's very clear we are a bank: 30% of the time was spent on risk evidence and compliance.

Conclusions. Communicating a paradigm shift is hard. Use any advantage you can get; you'll need it. A container hosting service is only part of the cloud-native ecosystem you need if you want that digital transformation. ING is not aiming to re-host VMs as containers; our purpose is to have the best possible hosting environment for twelve-factor apps. There's a slight difference there. If a DevOps team manages to properly re-factor their VMs, they're welcome, but it's a hands-off approach in production. If teams do not feel comfortable with that, they should stay on VMs. There is no SSH session to look at what's going wrong. We automate everything. We design for failure. Please don't fail to do a proper design; it will bite you.
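As a small, hedged illustration of what a hands-off production cluster implies for a workload: liveness and readiness probes, sketched here with the kubernetes Python client, let the platform restart or isolate an unhealthy pod automatically instead of anyone logging in to fix it. The endpoints, port and image are assumptions for illustration.

```python
# Minimal sketch of "design for failure": probes so the platform acts on
# an unhealthy pod automatically, since there is no SSH into production.
# Paths, port and image are illustrative assumptions.
from kubernetes import client

container = client.V1Container(
    name="app",
    image="registry.example.internal/base/app:1.0",  # hypothetical image
    # If the liveness check fails, the kubelet restarts the container.
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=10,
        period_seconds=10,
    ),
    # If the readiness check fails, the pod is taken out of the service's
    # load balancing until it recovers.
    readiness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/ready", port=8080),
        period_seconds=5,
    ),
)
```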
For us, namespace-as-a-service is the way of working; perhaps it could be for you as well. Kubernetes and OpenShift are only a very small but important part of the time spent to build and run a container hosting service: that was only 4% of the time spent. That is also partly due to the amount we spent on automation, but still, 4%. Maybe it's also proof that OpenShift was the right choice. Being in a regulated industry is not always fun; there might be other participants in this room who know that. Tuning monitoring is an art, and it takes time to master. Do not underestimate this; allocate time for your monitoring optimization. And find the right partners, both within and outside your own enterprise. You cannot do everything yourself.

We'll skip the questions. I would like to thank you for your attention, and also thank you for being part of this community. Without this community, this would never have been possible. Thank you very much.