Hi there! In this video, I'm going to show you how to get started with OpenShift Service Registry. OpenShift Service Registry can be found by logging into console.redhat.com. If you don't have an account, you can head over to console.redhat.com and create one on the sign-up screen. Once you've logged in, find Application and Data Services. From here you can see the various application services, and today we're going to focus on Service Registry. So I'll click on Service Registry, and then click on Service Registry Instances.

I have no Service Registry instance at the moment, but to create one, all I have to do is click the button here. I can give it a name, such as Evan Service Registry, and then click Create. The creation process usually takes less than a minute, and once it's finished, you'll have a fully functional Service Registry available.

As you can see, our Service Registry instance is ready. I'll close this message and click on the detail. Let's drop down here and select Connection. From here we can find the Registry APIs: we have the Schema Registry compatibility layer, the CNCF Schema Registry API, and the Core Apicurio Registry API available. So it's compatible with various clients, and we can also manage authentication using OAuth or HTTP Basic. And of course, if you're familiar with Application and Data Services already, you'll know that you'll need to create service accounts, which provide a client ID and client secret that can be used as your username and password to connect to these services.

So I'll click on the name of the Service Registry instance here, and you can see we have the UI. We can upload artifacts, manage global rules, manage access settings, and other settings. Now if I want, I can click on Upload Artifact and enter all the details manually here, or I can go back to my IDE, where I already have an artifact that I'm working on named Song. It contains two strings: a title and an artist. So I'll copy that Avro schema and paste it in here, and since I've set the type to auto-detect, when I click Upload, the Service Registry will automatically recognize that it's an Avro schema and import it. We can then manage content rules such as validity rules, and also compatibility rules such as backward compatibility. Now that I've uploaded a schema, I can return to the homepage here and see the Song schema listed in the artifact list.

Now, today what I'm going to show you is how to connect a Java application written using Quarkus to your Service Registry. You can follow along if you like by going to Learning Resources on the left here and finding the Service Registry and Quarkus example. As part of following along, I've taken some liberties and taken care of some prerequisites. The first one is that I have a service account already defined here named Demo User. I also have a Streams for Apache Kafka instance over here named Demo Kafka, and if I click on that, you'll see under the Topics list that I have a topic named Quotes. I've also configured some access rules so my Demo User can produce quotes into that topic. One other thing I need to do is go back to my Service Registry, click on it, navigate to the Access section, and grant access to my service account.
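For reference, the Song artifact I pasted is an Avro schema along these lines. The record name and the two string fields match what's described above; the namespace is illustrative:

```json
{
  "type": "record",
  "name": "Song",
  "namespace": "org.acme.demo",
  "fields": [
    { "name": "title",  "type": "string" },
    { "name": "artist", "type": "string" }
  ]
}
```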
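As an aside, once a service account has been granted access, you can sanity-check the connection from a terminal. This is a minimal sketch assuming the standard Apicurio Registry v2 core API path and HTTP Basic authentication; REGISTRY_URL, CLIENT_ID, and CLIENT_SECRET are placeholders for your own values:

```sh
# Search for artifacts in the registry, authenticating with the service
# account's client ID and secret as username and password.
curl -u "$CLIENT_ID:$CLIENT_SECRET" \
  "$REGISTRY_URL/apis/registry/v2/search/artifacts"
```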
Back in the Access section, I'll click on the dropdown, select my Demo User, and make sure it has manager permissions so we can both read and write artifacts on my Service Registry instance. Now that I've done that, I'll return to the application in my IDE.

The application contains a quote producer, and this quote producer writes quotes to the quotes channel every five seconds. A quote is a simple object that contains a string and an integer. This is defined as an Avro schema here, similar to the Song schema I showed you earlier, except it contains an integer and a string.

If we look at the application.properties, we can see that the application is configured to use an Avro Kafka serializer for the outgoing values. That means our quotes will be serialized in Avro format, and the outgoing keys are serialized as strings. We also have Service Registry-specific configuration here, such as the artifact resolver strategy and the Avro datum provider. And finally, there are some connection variables listed down here that tell our application how to connect to our Service Registry instance and our Kafka instance.

So I'd better set those variables before I run this application. To do that, I've created a .env file, and it contains the variables here. You can see I've filled out a few of these already to save some time. One thing to note is that my client ID and client secret are in plain text here; normally you don't want to share those with anyone, but since this is a demo, I'm letting them be seen right here, and I will delete them afterwards. The two variables we need to fill in are the Kafka host and the Service Registry URL. I can obtain those by going back to console.redhat.com, returning to the Service Registry instances list, selecting the connection information for my Service Registry, and copying the Core Registry API URL. I'll replace the Service Registry URL value here, making sure to trim the path off, because in this particular example the path is in a separate variable. Next I'll get my Kafka host by returning to console.redhat.com, navigating to the Streams for Apache Kafka section, selecting the connection details for my Kafka instance, and copying the bootstrap server. I'll paste that value into the file, and then source the .env file so those variables are available in my terminal session.

At this point, I have all the variables defined. I can change directory into the producer application and run it. When I run this application, what should happen is the application starts up, and every five seconds it produces a quote into our Kafka topic. That quote will be serialized in Avro format using the Quote schema, and that Quote schema should be uploaded to the Service Registry instance so that downstream consumers of the quotes will be able to deserialize them and perform any operations they need on the quote data. So let's give that a go. I'll run mvn quarkus:dev, which will start the Quarkus application in dev mode, and if all goes well, we'll have some quotes in our Kafka instance and a Quote schema. Once the application starts up, you can see it notes that it has connected to the Kafka broker, and not much more output is shown, because this application simply does its job and writes a quote every five seconds.

So let's confirm it's working. I'll return to the console.redhat.com UI and go back to my Service Registry instance. And there it is: you can see my Quote schema was automatically uploaded by my application.
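To recap the code side of this, a producer like the one described above might look roughly like the following in Quarkus with SmallRye Reactive Messaging. This is a sketch rather than the quickstart's exact code: the package and class names are illustrative, and Quote is assumed to be the class generated from the Avro schema:

```java
package org.acme.kafka;

import java.time.Duration;
import java.util.Random;

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Outgoing;

import io.smallrye.mutiny.Multi;

@ApplicationScoped
public class QuoteProducer {

    private final Random random = new Random();

    // Emit a Quote to the "quotes" channel every five seconds; the Avro
    // serializer configured in application.properties encodes each record
    // and registers the schema with the Service Registry.
    @Outgoing("quotes")
    public Multi<Quote> generate() {
        return Multi.createFrom().ticks().every(Duration.ofSeconds(5))
                .map(tick -> new Quote(String.valueOf(tick), random.nextInt(100)));
    }
}
```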
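The Quote schema itself, with the integer price and the string ID described above, would look something like this (again, the namespace is illustrative):

```json
{
  "type": "record",
  "name": "Quote",
  "namespace": "org.acme.kafka",
  "fields": [
    { "name": "id",    "type": "string" },
    { "name": "price", "type": "int" }
  ]
}
```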
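And the application.properties configuration described above looks roughly like this. Treat it as a sketch: the property keys and class names follow the Apicurio Registry 2.x serde conventions as I recall them, and the environment variable names are illustrative, so check the quickstart for the exact values used in this example:

```properties
# Outgoing "quotes" channel: string keys, Avro-serialized values.
mp.messaging.outgoing.quotes.key.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.quotes.value.serializer=io.apicurio.registry.serde.avro.AvroKafkaSerializer

# Apicurio serde settings: name the artifact after the Avro record,
# auto-register the schema, and use the reflect-based datum provider.
mp.messaging.outgoing.quotes.apicurio.registry.artifact-resolver-strategy=io.apicurio.registry.serde.avro.strategy.RecordIdStrategy
mp.messaging.outgoing.quotes.apicurio.registry.auto-register=true
mp.messaging.outgoing.quotes.apicurio.registry.avro-datum-provider=io.apicurio.registry.serde.avro.ReflectAvroDatumProvider

# Connection details injected from environment variables.
kafka.bootstrap.servers=${KAFKA_HOST}
mp.messaging.outgoing.quotes.apicurio.registry.url=${SERVICE_REGISTRY_URL}${SERVICE_REGISTRY_CORE_PATH}
```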
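The .env file mentioned above could look like the following. The variable names are illustrative placeholders (the example defines its own), the core API path is assumed to be the standard Apicurio v2 one, and real secrets should of course never be committed to version control:

```sh
# .env — connection settings for the demo (all values are placeholders).
export CLIENT_ID=<service-account-client-id>
export CLIENT_SECRET=<service-account-client-secret>
export KAFKA_HOST=<bootstrap-server-host:port>
export SERVICE_REGISTRY_URL=<core-registry-api-url-without-path>
export SERVICE_REGISTRY_CORE_PATH=/apis/registry/v2
```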
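Sourcing the file and starting the application then comes down to a few commands (the directory name is illustrative):

```sh
source .env       # export the connection variables into this shell
cd producer       # change into the producer application's directory
mvn quarkus:dev   # start the Quarkus application in dev mode
```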
If we click the Quote schema, we can see that it's an Avro schema, and we can even see that the content is as expected: it contains the price, which is an integer, and the ID, which is a string. So now we know that our application has successfully uploaded its schema to the Service Registry and is producing records into our Kafka instance. That concludes this demo.