Greetings, I'm David Schexner, Global Partner Solutions Architect at Couchbase. Today I would like to talk about and demonstrate a customer 360 use case. Capturing all of your customer interactions to build a single view of that customer is challenging, especially with legacy technologies. The Couchbase data platform gives you the flexibility to aggregate all forms of data, delivering a single omnichannel view and giving both customers and customer-facing employees fast access for a seamless and more personalized experience.

There are four main components of a customer 360 solution. First, the user profile, which must contain attributes covering the most important interactions between you and your customers across all of their life stages, with agility, flexibility, and performance at scale. Second, the session store, where you manage critical online session data in real time so you can monitor user stats, maintain security, track behavior, place ads, provide access to content, and more. Third, recommendations, where you query, aggregate, and provide personalized recommendations based on the customer's previous activity. Finally, interaction history, where we capture the steps that led to a purchase or action.

When we look at the reference architecture, we see the customer 360 system, which consists of the Couchbase data platform and microservices that load data into and interact with Couchbase. To the left, we may have several data sources, which may include a mainframe, data warehouse, web clicks, or social media. We may also have downstream systems such as a data lake or data warehouse. In the web tier, we have the Couchbase Sync Gateway. Sync Gateway is the synchronization server in Couchbase for mobile and edge deployments, designed to provide data synchronization for large-scale interactive web, mobile, and IoT applications. There are also consumer web applications. So let's take a look at the demo solution.
We have an OpenShift cluster with a single namespace, or project. Within the customer 360 namespace, we have the Couchbase Autonomous Operator, which is used to deploy, maintain, and manage the Couchbase cluster. The Couchbase cluster consists of three server pods, each running the data, index, and query services. Data is stored using persistent volumes. All the way to the right, we have a MySQL relational database pre-populated with a database and some tables. To connect Couchbase and MySQL, I used the Red Hat Integration Operator to install the AMQ Streams Operator. The AMQ Streams Operator is used to deploy and manage Kafka and ZooKeeper as well as Kafka Connect. Kafka Connect is deployed with the Debezium MySQL change data capture connector as the source and the Couchbase Kafka connector as the sink.

The demo was performed on a bare-metal OpenShift 4.6 cluster. We have Couchbase Operator 2.1.0 installed via OperatorHub, along with Couchbase Server Enterprise 6.6.1. The MySQL image used is the Debezium example image, which is pre-populated with an example database. The Red Hat Integration Operator was also installed via OperatorHub and is used to install the AMQ Streams 1.6.3 Operator. Finally, we have Kafka 2.6.0 and Kafka Connect. The Kafka Connect container image was built by adding the Debezium MySQL connector 1.4.2 and the Couchbase Kafka connector 4.0.6.

Now let's get to our demo. For this demo, we will show automated data ingestion from one of potentially many data sources into Couchbase. We have an eight-node OpenShift cluster running on bare-metal VMs. Let's take a look at our operators. Both the Red Hat Integration and Couchbase Operators were installed at the latest version via OperatorHub. Let's look at the Couchbase Operator real quick. Looking under All Instances, we can see that we have a Couchbase cluster named cb-example as well as a bucket named staging.
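The cluster and bucket described above are declared as custom resources that the Autonomous Operator reconciles. The following is a minimal, illustrative sketch of what those definitions might look like; the secret name, storage size, and server-group layout are assumptions, not the exact YAML used in the demo.

```yaml
# Illustrative CouchbaseCluster: three pods, each running data, index,
# and query, backed by persistent volumes (values are assumptions).
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  image: couchbase/server:6.6.1
  security:
    adminSecret: cb-example-auth   # assumed secret with admin credentials
  buckets:
    managed: true
  servers:
    - name: all_services
      size: 3
      services:
        - data
        - index
        - query
      volumeMounts:
        default: couchbase
  volumeClaimTemplates:
    - metadata:
        name: couchbase
      spec:
        resources:
          requests:
            storage: 10Gi          # assumed volume size
---
# The staging bucket the sink connector will write into.
apiVersion: couchbase.com/v2
kind: CouchbaseBucket
metadata:
  name: staging
```

Because `buckets.managed` is true, the operator picks up `CouchbaseBucket` resources in the namespace and creates them on the cluster.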
So we used the Red Hat Integration Operator to install the Red Hat Integration AMQ Streams Operator. If we look at the AMQ Streams Operator under All Instances, you can see that we have a Kafka cluster named my-cluster. We see warnings because the Kafka cluster is not replicated. But if we look at the Resources tab, we see that the AMQ Streams Operator has created all of the server and client certificates as well as the Entity Operator.

Now let's take a look at our Couchbase cluster. You can see that we have a three-node cluster, each node running the data, query, and index services, and we have our empty bucket named staging.

Now let's deploy the Kafka Connect instance. The AMQ Streams Operator does allow you to pull your own image. I built an image based on the AMQ Streams Kafka 2.6 RHEL 7 base image, included the Debezium MySQL connector 1.4.2 and the Couchbase Kafka connector 4.0.6, and named it CouchBezium. We apply the source YAML, and the CouchBezium image is hosted in a public Docker Hub repository.

While this is creating, let's take a look at our source dataset. For this, we're just going to remote into the MySQL pod, issue the mysql command, and use the password for the inventory database. Let's do SHOW TABLES. You can see that we have a few tables in our inventory database: addresses, customers, orders, products. We're mainly concerned with the customers table. As you can see, we only have a small dataset: an ID, first name, last name, and email, with four total entries at this point.

Let's go back and check on our Connect cluster. You can see that the resources are being created. If you go back to AMQ Streams and look under All Instances, you can see that a lot of topics are being auto-created by Kafka Connect. Once this reaches Ready, we should be able to register our connectors. Before we do that, let's go ahead and create a route so we can register our connectors via the REST interface.
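The Kafka Connect deployment itself is another custom resource managed by AMQ Streams. A minimal sketch of what that YAML might look like is below; the resource name, image path, and API version are assumptions (older AMQ Streams releases used `v1beta1`), and the image reference is a placeholder for the public Docker Hub repository mentioned above.

```yaml
# Illustrative KafkaConnect resource pointing at a custom "CouchBezium"
# image that bundles the Debezium MySQL and Couchbase connector plugins.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  image: docker.io/example/couchbezium:latest   # placeholder image reference
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  config:
    group.id: connect-cluster
    # Debezium events carry their own schema, so JSON converters are typical.
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
```

With no `strimzi.io/use-connector-resources` annotation set, connectors are registered directly through the Connect REST API, which is what we do next via the route.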
We go to create a route, select the API port, and create it. We can click through and see that Connect is running properly, reporting version 2.6.0.redhat-00004 along with our Kafka cluster ID. If we look at this same route under /connectors, you can see that we do not have any connectors registered yet.

The first one we're going to register is our source. By registering the Debezium MySQL connector, we'll start monitoring the MySQL database server's binlog. The binlog records all of the database's transactions, such as changes to individual rows and changes to the schemas. When a row in the database changes, Debezium will generate an event. We're just going to POST this from the local workstation. Let's check, and we see that our inventory connector is registered. Also, pay attention under All Instances here: once again, a whole bunch of topics are being auto-created. We called the MySQL instance dbserver, then we have the inventory database, and each of the tables in it is created as a topic.

Next is to register the Couchbase connector. This is basically going to read from the dbserver.inventory.customers topic, convert the keys into strings and the values into JSON, and load the data into our staging bucket. Let's make sure that it's registered. Lovely. Very soon afterwards, we should start seeing data populating the staging bucket, which we do.

Once we have data in our staging bucket, let's take a closer look at the dataset. You can see we have a whole bunch of information that we might not necessarily need here, but it might be good for data lineage. If we scroll down, we'll also see some schema information in here, but let's look at our payload.
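For reference, a trimmed Debezium change event for an inserted customer row looks roughly like this (field values are illustrative, drawn from the Debezium example database; the full document also carries schema and source metadata):

```json
{
  "payload": {
    "before": null,
    "after": {
      "id": 1004,
      "first_name": "Anne",
      "last_name": "Kretchmar",
      "email": "annek@noanswer.org"
    },
    "source": { "db": "inventory", "table": "customers" },
    "op": "c"
  }
}
```
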
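The two registrations above are just JSON payloads POSTed to the Connect REST route. Here is a sketch of what they might contain, built in Python so the structure is easy to inspect; the hostnames, credentials, and connector names are assumptions, not the exact values from the demo.

```python
import json

# Debezium MySQL source connector: tails the MySQL binlog and publishes
# row-level change events to topics named <server>.<database>.<table>.
source_config = {
    "name": "inventory-connector",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql",            # assumed MySQL service name
        "database.port": "3306",
        "database.user": "debezium",             # assumed credentials
        "database.password": "dbz",
        "database.server.id": "184054",
        "database.server.name": "dbserver",      # topic prefix seen in the demo
        "database.include.list": "inventory",
        "database.history.kafka.bootstrap.servers": "my-cluster-kafka-bootstrap:9092",
        "database.history.kafka.topic": "schema-changes.inventory",
    },
}

# Couchbase sink connector: reads the customers topic, converts keys to
# strings and values to JSON, and writes each event into the staging bucket.
sink_config = {
    "name": "couchbase-sink",
    "config": {
        "connector.class": "com.couchbase.connect.kafka.CouchbaseSinkConnector",
        "topics": "dbserver.inventory.customers",
        "couchbase.seed.nodes": "cb-example",    # assumed cluster service name
        "couchbase.bucket": "staging",
        "couchbase.username": "Administrator",   # assumed credentials
        "couchbase.password": "password",
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    },
}

# Each payload would be POSTed to the route created above, for example:
#   curl -X POST -H "Content-Type: application/json" \
#        --data @source.json http://<connect-route>/connectors
print(json.dumps(source_config, indent=2))
print(json.dumps(sink_config, indent=2))
```

A GET on `/connectors` afterwards should list both names, which is the check performed in the demo.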
We can see that the before is null, and the after gives us the first name, last name, email, and ID. We can also query the data. In order to query data in Couchbase, you need at least a primary index, so let's go ahead and create one. Now let's view our records; we're just traversing the JSON with our N1QL query. We can also look at this in the table view, and you can see pretty much the exact same dataset that we had in MySQL.

The next thing we want to do is alter a record and see how that affects our data. To do that, let's go back to our MySQL interface. We're just going to update a record, changing the first name from Anne to Anne Marie. Let's see how that has affected our data here. Yes, the change is propagated immediately. The last thing we want to do is insert a whole bunch of values; they should be propagated pretty much immediately as well. And yes, they are.

So with that, that was the demo that I wanted to show you. If you're interested in additional resources and examples, we have our Couchbase customer 360 solution page. We also have previous versions of this in the customer 360 blog, as well as a tutorial using previous versions of Couchbase and the connectors. The Debezium OpenShift example follows along very closely with what I've just shown you, as does the Red Hat Integration documentation on how to build your Kafka Connect image. And with that, I thank you very much.