Okay, next up is Poseidon with Graylog for Java developers. Welcome, Poseidon. Thank you for coming. Well, in this talk I will introduce you to Graylog: what Graylog is for monitoring and showing metrics from our applications, and we'll also see how to integrate these metrics into a classical Java project, with Maven for example. This is the agenda, these are the main points I will cover: first an introduction to Graylog and how we can install it in our environment using a Docker image and Docker Compose; later I will comment on the main Graylog tools, what the components of Graylog are; and in the last two points I will show how we can connect it with Java and with other services like Elasticsearch, Logstash and so on.

Well, what is Graylog? Graylog is one of the main open source log management tools that we have in the open source ecosystem. It has many ways of showing metrics: it has a user interface for showing the metrics of our applications, and the documentation is very good for installing and getting started with this kind of system.

What are the main features? Graylog is an open source log monitoring system capable of handling metrics from many different sources. We can connect it with application servers like WildFly, WebLogic or JBoss; we can connect it with Java applications, Node.js, Python, C#; and we can also use the Graylog monitoring system with web servers like nginx or Apache. We have many options to connect with Graylog.

For installing this tool we have many options: the classical packages for Debian, Ubuntu, Red Hat, all of those distributions. We also have the option of using virtual machines: there are open virtual appliance images that we can use with VirtualBox. We also have a Vagrant option for installing the system, and we can integrate with configuration management systems like Chef, Puppet and Ansible.
And the best option for me, the simplest for starting with this system, is Docker. We have a specific Docker image from Graylog for getting started in a very easy way: in less than five minutes we have all the configuration installed. And we can use Docker Compose for starting the containers for this system.

The option that I mentioned before, using Graylog with virtual machines: we have the URL, we have an open virtual appliance, which is like an image for using in VirtualBox, and in a nice way we can configure the system that way too. But for me the best option, and the easiest way, is using the Docker image. On Docker Hub we have the image for Graylog; basically we do a docker pull of this image, a little configuration, changing an IP address or a few lines, and we have the system installed.

For example, if we want to launch it with Docker, we first need to launch a MongoDB container, and second an Elasticsearch container, because in the recommended architecture, what Graylog needs for running is MongoDB and Elasticsearch, so we use these two services. We run MongoDB and Elasticsearch, and with a third command we launch the Graylog image, linking it with MongoDB and Elasticsearch. In this way the Graylog container can communicate with the MongoDB and Elasticsearch containers through the link flag.

We also have an easier way, because we can use Docker Compose with one single YAML configuration file. With this typical YAML configuration file we can deploy the whole Graylog environment: we define the MongoDB, Elasticsearch and Graylog services. In the Graylog service we also define environment variables like the password secret for Graylog and the endpoint URI, we declare that Graylog depends on MongoDB and Elasticsearch, and we define the ports where Graylog is listening for the events of the application.
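The three docker run commands described above might look like this — a sketch only; image names and tags are assumptions based on the older Graylog Docker images, so check Docker Hub for the current ones:

```shell
# Illustrative only: container names, image tags and ports are assumptions.
docker run --name some-mongo -d mongo:3
docker run --name some-elasticsearch -d elasticsearch:2 \
    elasticsearch -Des.cluster.name="graylog"
docker run --name some-graylog \
    --link some-mongo:mongo \
    --link some-elasticsearch:elasticsearch \
    -p 9000:9000 -p 12201:12201 -p 12201:12201/udp \
    -d graylog2/server
```

Port 9000 is the web interface and 12201 (TCP and UDP) is the conventional GELF input port.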
And for persisting data in Graylog, the best configuration is to use Docker volumes for storing the information of MongoDB and Elasticsearch; we define Docker volumes for persisting this information. This is an example of execution: in the first screenshot we can see the Docker volume creation, and in the second screenshot we can see the three containers executing, each one in an independent way. We have the MongoDB service, the Elasticsearch service and the Graylog service. And the Graylog service, for example, is listening on TCP and UDP ports; we will use these ports for communicating data from our Java application.

Well, the main Graylog features. Graylog receives messages over multiple input protocols: UDP, TCP, syslog, Apache Kafka. An interesting feature is that when we receive a message in Graylog, we can also apply specific filters over these messages with streams, and later we will see how we can manage streams. It basically uses Elasticsearch for searching over the messages and MongoDB for storing metadata and configuration, and it supports alerts. Another interesting feature is that it provides dashboards and graphing capabilities over the stored messages.

From a graphical point of view, these are the options that Graylog provides: streams, which are a way of filtering the messages we receive and putting them into categories; alerts; saved searches; and security, for managing users and permissions. These are some screenshots: we can analyze and search the messages that we receive from other applications or services; we have alerts, triggers, metrics, user management; and we can connect with various inputs and outputs. We can connect over TCP and UDP, and we can also connect with RabbitMQ and Apache Kafka, for example; we have plugins for connecting with these kinds of services. The main storage that Graylog has is Elasticsearch.
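A Docker Compose file covering the services and volumes described above could look roughly like this (service names, image tags and volume paths are illustrative assumptions, not the exact file shown in the talk):

```yaml
# Sketch of a Graylog docker-compose.yml with persistent volumes.
version: '2'
services:
  mongo:
    image: mongo:3
    volumes:
      - mongo_data:/data/db
  elasticsearch:
    image: elasticsearch:2
    command: elasticsearch -Des.cluster.name="graylog"
    volumes:
      - es_data:/usr/share/elasticsearch/data
  graylog:
    image: graylog2/server
    environment:
      GRAYLOG_PASSWORD_SECRET: somepasswordpepper
      # SHA-256 of the admin password ("admin" here, for demo purposes only)
      GRAYLOG_ROOT_PASSWORD_SHA2: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      GRAYLOG_WEB_ENDPOINT_URI: http://127.0.0.1:9000/api
    depends_on:
      - mongo
      - elasticsearch
    ports:
      - "9000:9000"
      - "12201:12201"
      - "12201:12201/udp"
volumes:
  mongo_data:
  es_data:
```

With a file like this, `docker-compose up` starts the three containers, and the named volumes keep the MongoDB and Elasticsearch data across container restarts.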
All information, all messages are stored in an Elasticsearch index; basically that is the storage for this information. This is a screenshot of the index optimization settings for an Elasticsearch index. As I commented before, to start working with Graylog, the first thing we have to define is the inputs. The inputs are basically the channels for communicating with Graylog from another application or service. For example, if we want to send a message over the TCP or UDP protocol, we have to define an input where we specify the port and the address the service is listening on.

The stream filter that I commented on before is a way of filtering the messages that the Graylog server receives. We can filter in real time: while the applications are running, we can filter the messages that we receive in real time with conditions that we define. We can define threshold values for sending notifications, set criteria, and analyze and configure these values. This is the main screen for inspecting a stream: an easy stream configuration for editing the rules, managing the outputs and starting the stream.

This is the screen that Graylog provides for defining a stream rule. When we create a stream, what we have to do is create rules. A rule is basically a condition: for example, if we want to filter by a specific field or property, we define a rule for that kind of message. Depending on what information we want to filter, we need to define different rules, and we can have several rules in the same stream, filtering the stream information by many conditions depending on the values.
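Inputs can also be created through Graylog's REST API rather than the web interface. A hedged sketch — the exact path, required headers and payload fields vary between Graylog versions, so check the API browser bundled with the web interface before relying on this:

```shell
# Illustrative only: credentials, input type string and endpoint are assumptions.
curl -u admin:admin \
     -H 'Content-Type: application/json' \
     -X POST http://127.0.0.1:9000/api/system/inputs \
     -d '{
           "title": "gelf-udp",
           "type": "org.graylog2.inputs.gelf.udp.GELFUDPInput",
           "global": true,
           "configuration": { "bind_address": "0.0.0.0", "port": 12201 }
         }'
```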
For filtering this information, for filtering the stream messages, Graylog internally uses Java matcher classes: basically the classes that Graylog uses internally for extracting information using regular expressions and pattern matching, this kind of thing. This is another screenshot from Graylog, where we can define the values for a rule in the stream.

Well, I'm going to move on now to the Graylog architecture. In a basic Graylog architecture we have the Graylog server for processing all the messages from many sources. We also have a specific Elasticsearch cluster, which can be one machine or many machines, and for MongoDB we can also use one machine or have a replica set within a MongoDB cluster. The classical configuration, the easiest configuration that we can set up with Graylog, is to have one Elasticsearch node for storing all the message information, MongoDB storing the users and the configuration, basically metadata, and the Graylog server. We have the Graylog server and the Graylog web interface: the web interface is the one we have seen before, and the Graylog server communicates with the web interface through a REST API. And for receiving messages from devices, services or other applications, it has many kinds of message inputs.

We can go a little further with a cluster configuration: we can have a cluster of Graylog servers, an Elasticsearch cluster, and a MongoDB replica set. The main difference from the previous version is that in this case we have a load balancer.
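The internal matcher classes mentioned above evaluate stream rules with regular expressions. A minimal plain-Java sketch of that idea — class and method names are my own for illustration, not Graylog's actual internals:

```java
import java.util.Map;
import java.util.regex.Pattern;

// Illustrative only: a stream rule that matches one message field against a
// regex, similar in spirit to Graylog's internal stream rule matchers.
class StreamRuleMatcher {
    private final String field;
    private final Pattern pattern;

    StreamRuleMatcher(String field, String regex) {
        this.field = field;
        this.pattern = Pattern.compile(regex);
    }

    // True when the message carries the field and its value matches the rule.
    boolean matches(Map<String, String> message) {
        String value = message.get(field);
        return value != null && pattern.matcher(value).find();
    }

    public static void main(String[] args) {
        StreamRuleMatcher errorRule = new StreamRuleMatcher("level", "ERROR|FATAL");
        System.out.println(errorRule.matches(Map.of("level", "ERROR", "source", "app1"))); // true
        System.out.println(errorRule.matches(Map.of("level", "INFO")));                    // false
    }
}
```

Several such rules attached to one stream, each on a different field, give the "many conditions per stream" behaviour described above.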
What the load balancer does is distribute the different requests that the system receives, trying to spread the requests across all the Graylog servers that we have in our architecture.

Well, how can we connect with Java? First of all, for sending log data to Graylog we have many ways. We have the classical syslog; we can use TCP and UDP, AMQP, Kafka, and GELF, which is the option that we are going to see in this case. And we have other options like a collector, or sending the information in raw or plain text.

The option that we are going to see now is GELF. GELF is the Graylog Extended Log Format that Graylog uses for storing the metadata sent along with the messages that the server receives. Basically it's a log format that we can use across the logging ecosystem, and it's very useful with Logstash, fluentd and Docker. This format is based on syslog, but it provides a better format, better configuration and more user configuration; basically it's a JSON-based format for sending structured data. The main problem that we have with syslog is its limitations: the limited message length and the lack of structured payload. GELF tries to solve these problems with its structure, adding metadata to the message that the platform receives.

This is the main structure we work with in GELF documents: basically a JSON document where we put all the things that our system is monitoring. We can send the timestamp, the host; we can also send short and full messages, the level of the log, and some environment variables that at a specific moment of our execution we may need to communicate for showing the metrics.
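A GELF document is just JSON, commonly sent over UDP. A minimal sketch using only the JDK — the field names follow the GELF spec, while the host, port and the custom `_environment` field are placeholder assumptions; a real application would use a library such as gelfj and would typically compress and chunk the payload:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

class GelfSketch {

    // Builds a minimal GELF 1.1 payload by hand. Custom fields must start
    // with an underscore, per the GELF specification.
    static String buildPayload(String host, String shortMessage,
                               String fullMessage, int level) {
        return "{"
                + "\"version\":\"1.1\","
                + "\"host\":\"" + host + "\","
                + "\"short_message\":\"" + shortMessage + "\","
                + "\"full_message\":\"" + fullMessage + "\","
                + "\"timestamp\":" + (System.currentTimeMillis() / 1000.0) + ","
                + "\"level\":" + level + ","
                + "\"_environment\":\"demo\""
                + "}";
    }

    // Fire-and-forget UDP send to a Graylog GELF input (12201 is the
    // conventional port; the input must be defined on the server side).
    static void send(String graylogHost, int port, String payload) throws Exception {
        byte[] data = payload.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName(graylogHost), port));
        }
    }

    public static void main(String[] args) throws Exception {
        String payload = buildPayload("app-server-1", "Order failed",
                "Order 42 failed: inventory service timeout", 3);
        send("127.0.0.1", 12201, payload); // assumes a GELF UDP input is listening
    }
}
```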
This is an example from Graylog: in Graylog we have the message inspector, where we can see this kind of structure. We have a full message, the level, the source, the timestamp; this is a message that we have sent from an instance to an external application.

Well, for logging in Java we have, over time, the classical log4j for logging in Java, and we also have SLF4J, the Simple Logging Facade for Java. These were the first projects, and these libraries are very useful for logging and registering metrics of our application. We also have another interesting project, the Logback project. This is a project that comes after the log4j project, but it provides a faster implementation than log4j, with performance up to ten times faster. And with these libraries, in a very simple way, we can connect Graylog with our Java application.

Basically, these are the JARs we need; we can obtain them with Maven, for example. Basically, we can connect with Graylog using gelfj together with log4j. These are the latest versions of the libraries, and we can find them in the Maven repositories, for example. Basically, gelfj is a GELF appender implementation in Java for log4j, without any dependencies. We can find this library and integrate it in a nice way with a classical Java EE project with a Maven archetype, trying to avoid big changes in the code.

We can see this configuration: we have a log package where we have the appenders and the class for checking the connection with the Graylog server, and in the main package we have classes related to the senders. These are the classes for sending information to the Graylog server, depending on the protocol: if we look at the classes, we have senders for UDP, senders for TCP, and senders for AMQP, depending on which protocol we use. Another configuration that we can do is configure the appender.
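A gelfj appender wired into log4j is configured through log4j.properties. A sketch — the property names follow the gelfj README, but the host, port, facility and extra field values here are assumptions:

```properties
# Sketch of a gelfj GelfAppender configuration for log4j.
log4j.rootLogger=INFO, graylog

log4j.appender.graylog=org.graylog2.log.GelfAppender
# "udp:" prefix selects the UDP sender; TCP and AMQP senders also exist.
log4j.appender.graylog.graylogHost=udp:127.0.0.1
log4j.appender.graylog.graylogPort=12201
log4j.appender.graylog.facility=my-java-app
log4j.appender.graylog.extractStacktrace=true
log4j.appender.graylog.addExtendedInformation=true
log4j.appender.graylog.additionalFields={'environment': 'DEV'}
```

With this in place, ordinary `logger.info(...)` / `logger.error(...)` calls are shipped to the Graylog input without further code changes.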
The appender provides the main configuration that we use for connecting with Graylog: we define the Graylog host, the port, the log level that we want, and so on. In a very easy way we can create a class, which can be a Graylog interface, where we define a sender depending on which protocol we want to use: UDP, TCP or the others. We can also have a send-message method, which is the method for sending the message to the Graylog server, and the message itself, where we define, for example, a message with a title and a description, plus the information that we have at that moment about the state of our application.

This is the operation that we can do in a classical Java project. For example, if we use Spring, we can create a Spring bean for injecting in an easy way the server, the port and the log level, and in the main application we obtain the bean with the application context class and use the method that we have seen before, sendMessage, for sending the message to the Graylog server. In an easy way we can send the message at a specific moment or in a specific use case, and our application sends this monitoring information.

Another option that we have is using Logback. Logback is compatible with JDK 1.7 and above. Basically we have to add the corresponding dependency to our project; we define the Maven dependency, and as before the library works the same way, but in a different package we can use the same kind of classes for sending messages. We can also configure the Logback appender for sending with, for example, this configuration. This is another example that we can use for connecting with Graylog through a REST interface using Spring's RestTemplate; this is another way of connecting with Graylog, via the HTTP input.

We can also connect with other services, like Elasticsearch, Logstash and Kibana. For connecting with Elasticsearch, for example, we can use RabbitMQ, the message broker.
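For the Logback route, a GELF appender is configured in logback.xml. A sketch assuming the de.siegmar:logback-gelf library; the appender class and element names are that project's, and the host and port are placeholders, so check the library's README for the version you pull in:

```xml
<!-- Sketch only: assumes the de.siegmar:logback-gelf dependency is on the classpath. -->
<configuration>
  <appender name="GELF" class="de.siegmar.logbackgelf.GelfUdpAppender">
    <graylogHost>127.0.0.1</graylogHost>
    <graylogPort>12201</graylogPort>
  </appender>
  <root level="INFO">
    <appender-ref ref="GELF"/>
  </root>
</configuration>
```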
And in the Graylog Marketplace we have a lot of plugins for connecting with other services. We can use the AMQP protocol for sending syslog messages to Graylog, and we can also connect with Apache Kafka. Apache Kafka is a fully distributed publish-subscribe system for sending messages, configuring topics, and so on.

And that's all. These are the references: basically the official documentation and the repositories where we can find all the configuration and the source code of this project. And that's all. Thank you. We took the whole half hour for the talk, so there's no Q&A, but I guess he will be around; if you have any questions, talk to him directly. So thank you.