Hello. In this video, we're going to show you how to add full-text search to a Quarkus application, thanks to the power of Hibernate Search and Elasticsearch, recently added in Red Hat Build of Quarkus 2.7. For the purpose of this demonstration, we've developed a small application to manage a product catalog at a fictional company called Acme. The application relies on Hibernate ORM with Panache to implement simple create, retrieve, update, delete (CRUD) features. We won't cover how to create a Quarkus application using Hibernate ORM with Panache here, but you can find more information in the Quarkus guides at quarkus.io.

What you can see here is the list of products. Each product belongs to a department, has a name and a description, and comes in multiple variants, each with its own name and price. The application does already offer search capabilities. For example, we can search for "android" and get all products that include this string. Then we can filter by department to only show books. However, the text search is based on standard SQL only, which means it's simply substring search, which is quite limited.

Sometimes, substring search yields irrelevant products. For example, if we search for "duct tape", we get a book about alien abductions in the results, even though it's not really what we're looking for. That's because "duct" is a substring of "abduction". Substring search may also omit relevant products. For example, if I search for "abducted", the book we saw just before doesn't show up. That's because "abducted" is not a substring of "abduction", and our search isn't able to handle that kind of fine detail.

Fortunately, Quarkus offers an extension meant to address precisely these limitations, by providing an integration with Hibernate Search and Elasticsearch. Elasticsearch is a distributed, RESTful search engine. It is quite popular and addresses a large number of search use cases with impressive performance.
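To see why substring search behaves this way, here is a minimal plain-Java illustration of SQL LIKE-style matching on one search term. The text and the helper method are made up for the example; they are not the demo application's code.

```java
public class SubstringSearch {

    // Mimics SQL "description LIKE '%query%'": a hit whenever the query
    // appears anywhere inside the text, with no notion of words
    static boolean matches(String text, String query) {
        return text.toLowerCase().contains(query.toLowerCase());
    }

    public static void main(String[] args) {
        String description = "A book about alien abductions";
        // False positive: "duct" is a substring of "abductions"
        System.out.println(matches(description, "duct"));     // prints true
        // False negative: "abducted" is not a substring of "abductions"
        System.out.println(matches(description, "abducted")); // prints false
    }
}
```

Word-aware search, as provided by Elasticsearch, avoids both failure modes.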
In our case, we will be running Elasticsearch in a container. Hibernate Search is a library that plugs into Hibernate ORM to listen to entity update events, so it can automatically index entities as Elasticsearch documents. Hibernate Search also offers many tools beyond this core feature, from managing the Elasticsearch schema, to re-indexing a whole database, to performing Elasticsearch queries through a dedicated Java API.

We can add Hibernate Search to a Quarkus application through a simple Maven command. Now that the extension has been added to our Maven POM, we can start configuring Hibernate Search. And in fact, this will be quite short: Hibernate Search just needs to know which version of Elasticsearch it is going to work with. Note that you can also configure Hibernate Search to target a specific version of OpenSearch, a fork of Elasticsearch. See the Hibernate Search guide in the Quarkus documentation for more information.

This configuration is enough for Hibernate Search to start, but it won't do anything until we map our entities to an index. Similarly to JPA mapping, this is done through annotations on the entity model. First, we add an indexed annotation on our product entity. This will let Hibernate Search know that this entity needs to be indexed into Elasticsearch. Next, we annotate each field that needs to be included in the Elasticsearch document. There are many available annotations; we will go through the more important ones here.

Name and description will be full-text fields: we need to search for words in these text fields. We also need to sort our search results on the name field. But full-text fields cannot be sorted reliably, due to how they are indexed in Elasticsearch. So we'll add another annotation on the name, this time a keyword field. We'll give that field a different name to avoid any conflict with the full-text field, and we'll mark the field as sortable. Next, the department.
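As a sketch of the configuration step: assuming the extension was added with "./mvnw quarkus:add-extension -Dextensions=hibernate-search-orm-elasticsearch", the application.properties entry could look like this (the Elasticsearch major version shown here is an assumption for the demo).

```properties
# Hibernate Search only needs to know which Elasticsearch version it targets
# (assumed to be a 7.x cluster here)
quarkus.hibernate-search-orm.elasticsearch.version=7
```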
That's an enum, and the UI allows filtering through a select box instead of a text input. So we'll simply make the department a keyword field. Now the variants. Obviously, there are multiple variants per product. Each variant is itself an entity with multiple fields. We want to be able to search on a variant's name, so we will add a full-text field here. However, this annotation was added to a separate entity. In order to get product variants embedded into our product index, we'll add the indexed-embedded annotation on the variants list. Then, every time Hibernate Search builds an Elasticsearch document for a product, that document will include a variants field with an array of objects representing the variants, each with a variant name in particular.

Now, Hibernate Search has all the information it needs to perform indexing. Let's have a look at our REST endpoint. We don't have to alter the create, update, and delete methods, because Hibernate Search will automatically trigger re-indexing when necessary, based on our entity changes in Hibernate ORM. It's all implicit. The search method has to change, however. It currently performs the search using Hibernate ORM with Panache. We'll replace that with Hibernate Search.

First, we'll inject the search session, which is the entry point to search operations, similar to Hibernate ORM's session or JPA's entity manager. Then we'll go and implement the search method. Let's start a search on the product entity, expecting to get product instances as a result. Then we define a where clause with a lambda. Here, the top-level predicate will be a Boolean predicate, which is a way to combine multiple other predicates (think AND and OR operators, but more powerful). When no search criteria are set, we want our product list to show all products. So we add a match-all predicate. Otherwise, the Boolean predicate, without any clause, would return no results. Then, if the user enters text, we use a simple query string predicate to implement full-text search.
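Pulling the mapping annotations together, the entities might look like this. This is a sketch based on the description above; the demo's exact class and field names (including the "name_sort" field name) are assumptions, and framework imports are omitted.

```java
// Sketch of the Hibernate Search mapping described above (names assumed)
@Entity
@Indexed // index this entity into Elasticsearch
public class Product extends PanacheEntity {

    // Full-text field for word-based search, plus a separately named,
    // sortable keyword field so results can be sorted reliably by name
    @FullTextField
    @KeywordField(name = "name_sort", sortable = Sortable.YES)
    public String name;

    @FullTextField
    public String description;

    // The UI filters departments through a select box,
    // so an exact-match keyword field is enough
    @KeywordField
    public Department department;

    // Embed each variant into the product document as an array of objects
    @OneToMany(mappedBy = "product")
    @IndexedEmbedded
    public List<ProductVariant> variants;
}

@Entity
public class ProductVariant extends PanacheEntity {

    @ManyToOne
    public Product product;

    // Searchable variant name, exposed as "variants.name" in the index
    @FullTextField
    public String name;

    public BigDecimal price;
}
```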
We just have to specify the Elasticsearch fields we want to search on. The simple query string syntax provides users with multiple operators; more on that in the reference documentation. We'll make sure to use AND as the default operator, so that the query will match all provided terms in the input text by default, instead of at least one. This generally feels more natural. Then we'll implement the department filter using a simple match predicate. We will then sort search results by product name; this is where the sortable keyword field we added earlier comes into play. And finally, we'll fetch the requested page of search results. The remainder of this implementation is not really relevant, as it deals with mapping results to a data transfer object (DTO) suitable for JSON serialization.

Next, we will handle initialization. Hibernate Search will automatically create the Elasticsearch indexes and their mappings on startup, or validate them if they already exist. But in dev mode, it's more convenient to just drop and recreate indexes on each restart, to keep up with the changes in the code. So we'll tell Hibernate Search to do just that. Also, our demonstration database comes pre-populated with a test dataset, so we need to re-index all that pre-existing data. We add another Hibernate Search entry point, this time one that doesn't rely on a particular session. For convenience, we'll re-index on each application startup. Obviously, in a real-world application, we would only need to fully re-index very rarely: basically when the application is first deployed, and in some cases when we change the Elasticsearch schema. But in development mode, it's more convenient to re-index on each restart.

And we're done. Now, let's start our application and have a look. Quarkus is starting containers for PostgreSQL. And here we are.
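A sketch of what the search endpoint and startup re-indexing described above could look like. The class name, query parameters, page size, and the "name_sort" field are illustrative assumptions, imports are omitted, and the DTO mapping is left out, as in the video.

```java
// Illustrative sketch of the search endpoint (names and page size assumed)
@Path("/products")
public class ProductResource {

    @Inject
    SearchSession searchSession; // entry point, similar to the ORM Session

    // Re-index the pre-populated database on startup; in production
    // you would only do a full re-index rarely
    void onStart(@Observes StartupEvent ev) throws InterruptedException {
        searchSession.massIndexer().startAndWait();
    }

    @GET
    @Transactional
    public List<Product> search(@QueryParam("q") String q,
                                @QueryParam("department") Department department,
                                @QueryParam("page") int page) {
        return searchSession.search(Product.class)
                .where(f -> f.bool(b -> {
                    // With no criteria, a bool predicate with no clause
                    // matches nothing, so match-all shows the full list
                    b.must(f.matchAll());
                    if (q != null && !q.isBlank()) {
                        b.must(f.simpleQueryString()
                                .fields("name", "description", "variants.name")
                                .matching(q)
                                // match all terms by default, not at least one
                                .defaultOperator(BooleanOperator.AND));
                    }
                    if (department != null) {
                        b.must(f.match().field("department")
                                .matching(department));
                    }
                }))
                .sort(f -> f.field("name_sort"))
                .fetchHits(page * 20, 20);
    }
}
```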
Our user interface is exactly the same, but it's now backed by Hibernate Search and Elasticsearch, which means we can benefit from Elasticsearch's performance, scalability, and, perhaps more importantly, its features. The data gets synchronized with Elasticsearch automatically. For example, let's change the name of one of the variants of our power drill: we'll add a UK power adapter to this variant. The name of the variant was updated in the database, but also in Elasticsearch, without us having to do anything. For proof, if we search for "power adapter", the results returned by Elasticsearch to our application will indeed include our modified product.

Now, back to the problem we were having with substring search. If we search for "duct tape" again, we get the result we want, and only that one. The book about alien abductions no longer matches, because Elasticsearch works on words, not on substrings. We didn't solve the problem we had when searching for "abducted", however: we still don't get a match on the product whose description contains the word "abduction". However, now that we are using Elasticsearch, this can be solved very easily.

We said earlier that Elasticsearch works with words instead of substrings. Those words are called tokens, and the process of turning a string into a sequence of normalized tokens is called text analysis. The real power of Elasticsearch is that we can customize text analysis. There are plenty of analysis components to choose from, each with its own particular behavior. In Hibernate Search, configuring analysis requires implementing a custom analysis configurer: the configurer implements an interface specific to Hibernate Search. Since our problem has to do with word forms, we will define a token filter to remove meaningless suffixes, such as "-ed" in "abducted" or "-ion" in "abduction". We only need to pick the existing filter called "stemmer", configure it for our language, and name this configuration.
Then we'll make sure to use that component in the default analyzer, along with other components: a standard tokenizer, because we don't need anything fancy when splitting text into words, and then token filters, to turn tokens into lowercase, remove suffixes with our stemmer, and turn characters into their ASCII equivalent. Then we need to tell Hibernate Search to use this configurer. There are multiple ways to do that, but we will simply make the configurer a named CDI bean, and then we reference that bean from the configuration properties.

Now, let's get back to our application and search for "abducted" again. That's it, we solved our problem. And more importantly, we are now in a position to tune our text search much more finely, as well as add more features around that search. For example, here's another version of the application where we added faceting. We won't have time to go into the details of the implementation, but as you can see, the panel on the left now displays facets, with the number of matching products in each department and price range. The numbers get updated when users add more search criteria, and users can now click these facets to drill down into search results.

That's it for now. Elasticsearch offers many features, and Hibernate Search exposes the most important ones through very simple Java APIs. Be sure to have a look at the extensive reference documentation to make the most of it. We have also added the GitHub link to this demo. Try it out.
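For reference, the analysis configurer walked through above could be sketched like this. The bean name and the "english_stemmer" filter name are illustrative assumptions; "stemmer", "lowercase", and "asciifolding" are standard Elasticsearch analysis components, and framework imports are omitted.

```java
// Sketch of the custom analysis configurer described in the video
// (bean name and token-filter name are assumptions)
@ApplicationScoped
@Named("custom-analysis")
public class CustomAnalysisConfigurer implements ElasticsearchAnalysisConfigurer {

    @Override
    public void configure(ElasticsearchAnalysisConfigurationContext context) {
        // A stemmer token filter, configured for our language, to strip
        // suffixes such as "-ed" in "abducted" or "-ion" in "abduction"
        context.tokenFilter("english_stemmer")
                .type("stemmer")
                .param("language", "english");

        // Override the default analyzer: standard tokenizer, then lowercase,
        // stem, and fold characters to ASCII
        context.analyzer("default").custom()
                .tokenizer("standard")
                .tokenFilters("lowercase", "english_stemmer", "asciifolding");
    }
}
```

The named bean would then be referenced from the configuration properties, for example with quarkus.hibernate-search-orm.elasticsearch.analysis.configurer=bean:custom-analysis.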