Okay, perfect. Thank you all. My name is Paolo Patierno. I am one of the engineers at Red Hat working in the messaging team, mostly on the Kafka side. And I am here with Pierluigi and Paolo from Poste Italiane to show a use case at Poste. I will give a little bit of introduction around Kafka and how to run Kafka workloads on Kubernetes, so on OpenShift in this case, using the main project that I work on, which is Strimzi. And then Pierluigi and Paolo will introduce their use case at Poste Italiane on using Kafka, and even running Kafka on Kubernetes, so on OpenShift as I mentioned. So, a little bit of introduction for those who don't know Kafka. Kafka is a messaging system, mostly based on the publish/subscribe pattern, but it's also a data streaming platform. The definition of Kafka changed a little bit over time: it started as a messaging system, then became a data streaming platform, but in the end Kafka is a commit log. You send some messages, and the messages are written to a file. Now, Kafka is a stateful application, and running a stateful application on OpenShift is not so simple. So on one side, we have the features that Kafka needs: as a stateful application, every broker in a Kafka cluster has its own identity; the brokers need to be discoverable by each other; they have to talk to each other. And the same is true for ZooKeeper, because, for those who use Kafka, a Kafka cluster today works alongside a ZooKeeper ensemble for saving information about the Kafka topics, the Kafka brokers, and so on. So on one side there are the features that Kafka needs, and on the other side there is what OpenShift provides us in order to run Kafka on OpenShift itself, because we can use Kubernetes and OpenShift native resources, like for example StatefulSets, to deploy Kafka on OpenShift.
We can use ConfigMaps and Secrets for storing configuration and TLS certificates, for example, or we can use PersistentVolumes and PersistentVolumeClaims for handling the storage of the messages in Kafka, and so on. So on one side there are the Kafka features, and on the other side what OpenShift provides us in order to have Kafka running on OpenShift. But there are some challenges, and it's not so simple. As you already saw this morning talking about operators, the best solution for this is an operator. So instead of you having to create your StatefulSet, create all the ConfigMaps and all the PersistentVolumeClaims that you need in order to set up your Kafka cluster on OpenShift, and then having to update all these YAML files and all these resources in order to update your cluster, you can use the operator coming from the Strimzi project, which since the beginning of September is under the CNCF as a Sandbox project. I love to say that it's the way, at this time, for deploying Kafka on Kubernetes in a cloud-native way. So what Strimzi provides is a bunch of images, Kafka and ZooKeeper, for running on OpenShift, in this case on Kubernetes, and a way for deploying and handling a Kafka cluster in a cloud-native way. It means that you don't have to create native Kubernetes resources like StatefulSets, Pods, and Deployments; you have new resources. When you install the Strimzi operator, you get some custom resources: a Kafka resource, a KafkaTopic, a KafkaUser, and so on, which look something like this. You can describe your Kafka cluster as a new kind of resource in Kubernetes. You can specify, for example, the number of replicas, which means the number of brokers that you want in your cluster, the configuration, and how to expose the Kafka brokers outside, even outside of your OpenShift cluster.
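A minimal sketch of such a Kafka custom resource could look like this; the cluster name, listener layout, and storage sizes are illustrative, and the exact `apiVersion` and listener schema depend on the Strimzi release you install:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3              # number of Kafka brokers
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: external       # exposed outside the OpenShift cluster
        port: 9094
        type: route
        tls: true
    config:
      offsets.topic.replication.factor: 3
    storage:
      type: persistent-claim # backed by a PersistentVolumeClaim
      size: 100Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
```

Applying this single resource is enough: the operator derives all the underlying StatefulSets, ConfigMaps, Secrets, and PersistentVolumeClaims from it.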
And at the same time you can describe a Kafka topic. So if you want to create a Kafka topic, you don't have to use the tools that Kafka provides for creating topics; you can interact with Kubernetes using the kubectl command, or the oc command if you are on OpenShift, and create a new KafkaTopic resource with all the information about the topic: the number of partitions, the replicas, and the configuration. And the same for the user: you can define the rights for the consumer and producer applications to write and read from specific topics, and so on. So you can deploy and handle your Kafka cluster just by handling OpenShift resources which are specific for Kafka. What you have is a Cluster Operator which is watching for these new kinds of resources. When, for example, you deploy your YAML file describing your Kafka cluster, the Cluster Operator takes care of that: it creates the ZooKeeper ensemble for you, and when that is up and running, it deploys the Kafka cluster, and then it deploys two more operators for handling the topics and the users. Instead of having just one operator doing everything, we prefer each operator to focus on one feature: the Cluster Operator handles the Kafka cluster and the ZooKeeper ensemble, while the other operators handle, for example, topics and users. And on the other side, if you want to update your cluster, which is something not simple when you use Kafka on bare metal, for example, because you have to update all the brokers running on all the nodes, you can just update your custom resource. The Cluster Operator is watching for that and will, for example, start a rolling update on the ZooKeeper cluster if you are changing some configuration parameter for ZooKeeper, or some other information, for example increasing the number of nodes that you want in ZooKeeper, and the same for Kafka.
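For example, a topic and a user for a producer/consumer pair could be sketched like this; the names, retention setting, and ACLs are illustrative, and the ACL schema varies slightly between Strimzi versions:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # the Kafka cluster this topic belongs to
spec:
  partitions: 10
  replicas: 3
  config:
    retention.ms: 604800000          # keep messages for one week
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls                        # client authenticates with a TLS certificate
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
        operation: Read
      - resource:
          type: topic
          name: my-topic
        operation: Write
```

You apply these with `kubectl apply -f` (or `oc apply -f` on OpenShift), and the Topic and User Operators reconcile them into an actual Kafka topic and a Kafka user with the corresponding credentials and ACLs.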
So it takes care of applying the new configuration that you changed, or the new number of replicas. You can scale up and scale down, adding or removing nodes from the cluster, and at the same time the Cluster Operator will update, if needed, even the other operators, which can be configured with different parameters as well. So we have this Cluster Operator taking care of everything for you: creating, updating, and handling the Kafka cluster in general. These are all the features that we have today in this project. There are, for example, tolerations and affinities, so you can specify that your Kafka broker can run on this node but cannot run on that other node, which has some taints and no matching tolerations. Or, for example, you can run some Kafka brokers on specific nodes whose networking interfaces are faster than others. We handle mirroring for you: the operator handles Kafka MirrorMaker for mirroring a Kafka cluster across data centers, or Kafka Connect for integrating different systems, like moving data from one database to another through Kafka, for doing CDC, for example. Or we handle Prometheus for you, for getting all the metrics from the brokers, and the storage, of course. So these are, today, all the features that the Strimzi project provides in order to easily run Kafka on OpenShift. And at this point, I can hand over to Pierluigi and Paolo, who will talk to us about their use case around Kafka at Poste, running on bare metal and even on Kubernetes. Hi guys, I'm Pierluigi Sforza, solution architect from Poste Italiane. And this is... I'm Paolo Gigante, solution architect at Poste Italiane too. Okay, here we are. The first part of the presentation, for those who don't know Poste, is a brief overview of the numbers we are facing and the main services with which Poste is leading its digital transformation. Poste has been on the market since 1862.
So it has 160 years of history in delivering letters and packages. The second pillar of the business is the provision of financial services and loans. Even loans for pets, if you are interested; it's a growing market. On top of this, Poste offers digital services to the public administration, like SPID identity services. And Poste is formed by a universe of group companies that offer much more complex services, like mobile services, delivery, and so on. Here you can see the numbers we are facing to digitally transform those services: 8.4 million financial products sold every year, 135,000 employees, 13,000 post offices. So huge, huge numbers for services. Basically, every one of these services has an interface for digital access, and all the business lines push the IT to transform their services, for product evolution and for regulatory compliance, as we will see for PSD2. How to afford this big change? Basically, byte after byte. How to afford this change, with a huge inertia, with all these people, and an IT department composed of 2,500 people? Byte after byte. Here you can see the first stack that was used to approach DevOps and a microservices architecture. Paolo, do you want to go into the details? Yeah, sure. Semplificazione Applicativa ("application simplification") is the first project that we made with a DevOps approach, so we had the opportunity to introduce OpenShift for the first time in the IT department of Poste Italiane. The project is an old portal, composed of five old applications based on Java 1.4 and JBoss 4.2, middleware at the end of its life cycle. So we containerized them and brought them to OpenShift in lift-and-shift mode. We also built, for the first time, a continuous integration and continuous delivery pipeline with Jenkins. So, let's go over. Basically, here you can see the architecture; there isn't much to say, just because it was a lift-and-shift.
What's most interesting is the second piece of this story, which is a more complex architecture that relies on Kafka and microservices. Paolo, can you go over? Yes, in this architecture we had the opportunity to start from a greenfield, more or less, except for some legacy systems. So we made all the microservices containerized and running on OpenShift, and we had the opportunity to introduce Kafka. The Kafka cluster is composed of five brokers and three ZooKeeper nodes, all running in five virtual machines on VMware, and we made the topics partitioned and replicated. After a study in the certification environment, we partitioned the topics on multiples of the number of brokers. So, for example, for the topics with a large amount of data we made 20 partitions, and for the minor topics only five partitions. In that way, no broker sits idle, as long as the consumers and the producers are scalable in the same way. Okay, just let me add one thing. Basically, one view aims to present static user data that is collected from legacy systems, pushed to the components that process it, and sent to a MongoDB cluster, which presents it as a REST API to consumers. There is an ongoing project that will use change data capture from the mainframe to stream real-time data and provide it in real time to consumers. If you are asking whether it worked, well, here you can find the numbers: in the first night, with eight hours of work, we were able to ingest 500 million records, so compliments to the developers. Okay, going over: we saw that the application was resilient, the infrastructure was resilient and performant, and so our CIO, Mirco Mischiatti, decided to test it for the core business of Poste Italiane. He decided to deliver PSD2 regulatory compliance on this architecture.
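The topic-sizing rule described above (partition counts as multiples of the broker count, with bigger multiples for high-volume topics) can be sketched as a small helper that prints the `kafka-topics.sh` commands you would then run against the cluster. The topic names and the ZooKeeper address are hypothetical, not Poste's real ones:

```shell
#!/bin/sh
# Illustrative helper: print topic-creation commands following the
# "partitions = multiple of broker count" rule from the talk.
BROKERS=5
ZK="zookeeper-1:2181"          # hypothetical ZooKeeper address

make_topic() {                 # $1 = topic name, $2 = multiple of BROKERS
  echo "kafka-topics.sh --zookeeper $ZK --create --topic $1" \
       "--partitions $((BROKERS * $2)) --replication-factor 3"
}

make_topic payments-events 4   # high-volume topic: 5 * 4 = 20 partitions
make_topic audit-log 1         # minor topic: 5 * 1 = 5 partitions
```

With partition counts that divide evenly across the five brokers, each broker leads the same number of partitions, which is why no broker ends up idle when producers and consumers scale together.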
The main challenges were to be on time for the go-live date, to keep response times low while offering the API on the internet, where we expected a huge growth of requests, and to do all this quickly. Basically, we used the same architecture to go into the core of the business of Poste Italiane, attaching to the payments gateway. This architecture is similar to the previous one: we have all the microservices on OpenJDK and Spring Boot, and some interfaces with legacy systems for payments and with the mainframe. The important thing about this platform is that we built the entire platform on the three pillars of observability: metrics, logging, and distributed tracing. We wrote guidelines for our developers, so they can build their applications with the OpenMetrics and OpenTracing standards, and we can avoid vendor lock-in. Another important thing is that we pulled the data out of the mainframe, which represents 8% of all the records of the mainframe, and brought it to Kafka. In that way, we reduced the load on the mainframe and made the platform scalable and more flexible. So, going over, here you can see the architecture of the replica for disaster recovery. Paolo, do you have something? We deployed the entire platform across three data centers owned by Poste Italiane: Roma Europa, Roma Congressi, and Turin. We built a campus between Roma Europa and Roma Congressi, thanks to the short distance and the low network latency, and we stretched the Kafka cluster and the OpenShift cluster across them. In Turin, we replicated the entire cluster with appropriate tools, so we have a synchronous replica between Roma Europa and Roma Congressi, and an asynchronous replica of the data from Roma Europa to Turin. In this way, we have the entire platform in active-active mode.
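The asynchronous Kafka replication from the Roma campus to Turin is the job Kafka MirrorMaker does: it consumes from the source cluster and re-produces the records into the target cluster. A minimal sketch of the two properties files the classic MirrorMaker needs, with purely illustrative hostnames and group id, not Poste's real endpoints:

```properties
# consumer.properties -- read from the stretched Roma campus cluster
bootstrap.servers=kafka-roma-1:9092,kafka-roma-2:9092,kafka-roma-3:9092
group.id=mirrormaker-roma-to-torino
auto.offset.reset=earliest

# producer.properties -- write to the Turin cluster
bootstrap.servers=kafka-torino-1:9092,kafka-torino-2:9092
acks=all
```

These would be passed to the tool with something like `kafka-mirror-maker.sh --consumer.config consumer.properties --producer.config producer.properties --whitelist ".*"`, where the whitelist selects which topics to mirror. Because the replica is asynchronous, Turin can lag slightly behind the campus, which is the usual trade-off for a geographically distant disaster-recovery site.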
The main actors here are MirrorMaker, to replicate data between Roma and Turin for Kafka; the arbiter, for the replication between the MongoDB clusters; and a custom wrapper switch, whose main purpose is to switch over the legacy systems in case of disaster. And here are the numbers of what has been done in less than one year. For me, it's impressive. You may know that I joined Poste Italiane just two months ago, and it was very surprising just to collect some data and understand what was really done in less than one year. We currently have 15 initiatives in development that will land on this infrastructure; we use 13 clusters between enterprise OpenShift and open source OpenShift; and in production we have 1,300 cores, three data centers, one cloud provider, and 350 developers working every day. So these are very huge numbers that we have to take charge of, that we have to face as we work out how to change digital delivery. And basically, I think this was achieved by sharing the management's vision with the entire IT department. How does this intersect with Strimzi? You have seen that we use Kafka in bare metal and VMware deployments; it's very efficient, it can absorb massive cross-application communication, and it's very resilient, but it's expensive to deliver, to scale up, and to scale out. So how to simplify this? We are running some tests with Strimzi for some use cases of intra-application, asynchronous communication, and I hope we will get fast scale-out and scale-up for asynchronous communication. Tests are currently running, so we hope to show you the results at the next conference. And that's it. Thank you for your time. Yeah.