In this video, I will walk you through a modular, reusable pattern that we have developed as part of the Data Engineering Jumpstart library. This pattern is about moving data from Kafka to object storage. Kafka is a distributed event streaming platform that is great for storing hot, relevant data. Generally, data is stored in Kafka for a short duration, although that window can be extended through the topic retention policy. However, Kafka is not suitable for storing data for a long time, for example several months or years. This is where an architectural pattern to move data from Kafka to persistent storage comes to the rescue, and for all practical purposes, object storage is our best bet. This pattern uses Secor, an open source project that acts as a Kafka consumer: it consumes messages from a Kafka topic and stores them in S3-compatible OpenShift Data Foundation (ODF) object storage buckets.

Let us now see how this pattern is implemented. We already have a Kafka producer continuously writing data to a Kafka topic called topic-one. We can also see this from the Kafdrop dashboard by browsing to topic-one and checking the messages. As you can see, messages are continuously flowing into topic-one.

Next, in order to move data from Kafka to object storage, we first need an object storage bucket to write the data to. For this, we will create an ObjectBucketClaim (OBC) on ODF, formerly known as OpenShift Container Storage (OCS). We will review the ObjectBucketClaim YAML file; a sketch of such a claim is shown below. Make sure that while creating the ObjectBucketClaim you select the right storage class name. In my case, it is the ocs-storagecluster-ceph-rgw storage class, backed by the Ceph RADOS Gateway. We will now apply this YAML file.

Next, Secor needs access to ZooKeeper to fetch message offsets. By default, the ZooKeeper cluster deployed by the Strimzi operator does not allow external access to ZooKeeper. Hence, we need zoo-entrance, which deploys an external proxy so that Secor can establish a connection with the ZooKeeper cluster. We will review this YAML file. It creates a Deployment, a Service, and some NetworkPolicies. We'll go ahead and apply it.

Finally, we will create a Secor Deployment with some environment variables pre-configured; a sketch of the wiring is shown below. For details about each Secor variable, please refer to the official Secor GitHub repository. Before applying, we will check the contents of the YAML file. As you can see, this is a Secor Deployment, and there are several environment variables that we have pre-configured here, like the cluster name, the access and secret keys, and the bucket name, which are picked up from OpenShift Secrets, plus a few more Secor-related settings. We'll go ahead and apply this YAML file.

Let's wait for the Secor deployment to complete. We will head back to the OpenShift console to verify the status of the Secor service by searching for the pod name, then go into the logs and validate. In the logs, you can clearly see that Secor has started to move data from the Kafka topic onto the S3 object storage bucket as new data enters the system. So data is being moved to the bucket that we configured in the Secor YAML configuration.

Let us also verify this from the CLI. Using the OBC Secret and ConfigMap, we will export the access credentials and bucket name needed to access the object storage. I will first set the AWS access key, secret access key, and bucket name as environment variables. Next, to verify that Kafka messages are being stored in the ODF object storage bucket by the Secor service, we will use s3cmd to list the bucket contents, as shown below.
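To make the ObjectBucketClaim step concrete: the manifest itself is not reproduced in the video, so here is a minimal sketch of what such a claim could look like. The claim name, namespace, and bucket name prefix are illustrative; ocs-storagecluster-ceph-rgw is the usual RGW-backed storage class name on ODF, so double-check it against your own cluster.

cat <<EOF | oc apply -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: kafka-archive-obc        # illustrative name; also names the generated Secret/ConfigMap
  namespace: kafka               # illustrative namespace
spec:
  generateBucketName: kafka-archive              # prefix for the generated bucket name
  storageClassName: ocs-storagecluster-ceph-rgw  # RGW-backed object storage class on ODF
EOF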
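Similarly, here is a hedged sketch of how the Secor Deployment can pull its credentials and bucket name from the Secret and ConfigMap that the OBC generates. The image reference and the Secor setting name (SECOR_S3_BUCKET) are illustrative; consult the official Secor GitHub repository for the exact configuration keys your image expects. The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY Secret keys and the BUCKET_NAME ConfigMap key are the ones an OBC actually creates.

cat <<EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secor
  namespace: kafka                      # illustrative namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secor
  template:
    metadata:
      labels:
        app: secor
    spec:
      containers:
      - name: secor
        image: quay.io/example/secor:latest   # illustrative image reference
        env:
        - name: AWS_ACCESS_KEY_ID             # key generated in the OBC Secret
          valueFrom:
            secretKeyRef:
              name: kafka-archive-obc
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: kafka-archive-obc
              key: AWS_SECRET_ACCESS_KEY
        - name: SECOR_S3_BUCKET               # illustrative Secor setting name
          valueFrom:
            configMapKeyRef:
              name: kafka-archive-obc         # the OBC also creates a ConfigMap of the same name
              key: BUCKET_NAME
EOF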
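And here is a sketch of the CLI verification itself, assuming the OBC above. BUCKET_HOST in the OBC ConfigMap gives the RGW endpoint for s3cmd to talk to.

# Pull the credentials and bucket details generated by the ObjectBucketClaim
# (the OBC name kafka-archive-obc is illustrative).
export AWS_ACCESS_KEY_ID=$(oc get secret kafka-archive-obc -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
export AWS_SECRET_ACCESS_KEY=$(oc get secret kafka-archive-obc -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
export BUCKET_NAME=$(oc get configmap kafka-archive-obc -o jsonpath='{.data.BUCKET_NAME}')
export BUCKET_HOST=$(oc get configmap kafka-archive-obc -o jsonpath='{.data.BUCKET_HOST}')

# List the bucket contents through the RGW endpoint; drop --no-ssl if your
# endpoint serves TLS.
s3cmd --access_key="$AWS_ACCESS_KEY_ID" \
      --secret_key="$AWS_SECRET_ACCESS_KEY" \
      --host="$BUCKET_HOST" \
      --host-bucket="$BUCKET_HOST" \
      --no-ssl \
      ls "s3://$BUCKET_NAME/"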
As you can see, the messages are partitioned by date. I will look into this subdirectory. And here is how multiple Kafka messages are consolidated into a single object and successfully stored in object storage by the Secor service. You could now use big data processing engines like MapReduce, Spark, or Presto to read the data from ODF S3 object storage and perform real-time or batch analytics, depending on your use case. Thanks for watching.