Okay, it's time for my talk. I will talk about Norikra. Norikra is open source software that I love, for processing data streams, written in Ruby. These are the topics of today's talk. First I will talk about why I love Norikra; that is very important for understanding what Norikra is and how Norikra works.

My name is Satoshi Tagomori, also known as "tagomoris"; that is my account name on Twitter, GitHub, and many other services. I'm from Tokyo, Japan, and I work at LINE Corporation. LINE Corporation is an internet service company that runs a messaging application, LINE. LINE is a messaging application just like WhatsApp or Facebook Messenger, and LINE has about 130 million users worldwide, mainly in Asia, South America, and part of Europe. Moreover, we have many services on our platform: Japanese manga and electronic publishing, a camera app, news, a Q&A service, weather news, and many games. So we must handle a huge amount of logs and metrics, and at the same time we must handle many different kinds of metrics and logs.

I work on our data analytics platform. This is a very simple overview of a monitoring and data analytics platform. First, we must collect data from many servers, parse and clean up that data, and store it in distributed storage like Hadoop HDFS. Then we process that data and visualize it in graphs, charts, and so on. That is why I am a committer on the Fluentd project. Kiyoto Tamura talked about Fluentd two days ago, on the first day of this RubyConf; please check his slides about Fluentd. But roughly speaking, Fluentd is a log management system: it collects many logs, aggregates them, and puts the resulting data into storage or remote systems.
We are using Fluentd in our data platform to deliver data and to control data flows. On the other hand, we are using Hadoop and Hive to process our stored data; Hadoop and Hive are very famous open source software, and many internet service companies use them. We must also process stream data as soon as possible, to find troubles or surprising changes in our service traffic, such as HTTP response code percentages, HTTP requests per second, or HTTP response time graphs. These graphs are generated by Fluentd plugins. Fluentd has an extension feature, plugins, so we can write and use plugins that aggregate stream data, compute percentages and other values, and send the results to graphing tools or other visualization tools.

So Fluentd is very good for simple data and simple calculations, but we have more and more different services, and there are many changes every day, including in logging: there are many kinds of logs for each service, and many different metrics for each service. Fluentd requires configuration changes and a restart to change what it does, so Fluentd is not so good for complex processing or for a fragile environment where data schemas change, or where what we want to do changes.

So we want to add or remove queries anytime we want, because what we want will change very frequently. We want to write many queries per service log stream, and we want to ignore events that lack the data we want. Data schemas will change very frequently, and application engineers cannot know what the data processing platform requires.
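As a concrete illustration of the kind of simple stream aggregation that Fluentd plugins handle well, here is a sketch of a Fluentd configuration using the fluent-plugin-datacounter plugin to turn HTTP status codes into per-interval counts and percentages. The tag names and patterns here are assumptions for illustration, and the parameter names are quoted from the plugin's README from memory, so treat this as a sketch rather than a verified config:

```
# Hedged sketch: count HTTP status classes per minute with
# fluent-plugin-datacounter (tag names here are assumptions)
<match accesslog.**>
  @type datacounter
  tag httpstatus.summary
  count_interval 60s
  count_key status
  pattern1 2xx ^2\d\d$
  pattern2 4xx ^4\d\d$
  pattern3 5xx ^5\d\d$
</match>
```

The emitted summary records can then be routed to a graphing tool by another output plugin; this is the "simple calculation" case the talk describes, where Fluentd alone is enough.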
So we should create a system in which application engineers can change their log schemas, and the meanings of their logs, anytime, and in which the data analytics platform can ignore events that lack the data we want. In my company there are many service directors and growth hackers. They are not software engineers, but they know what is important for the growth of our services. So we want to make it possible for our service directors and growth hackers to write their own queries for what they want. That is why I wrote Norikra. Norikra is data processing middleware that realizes these requirements.

Okay, Norikra. Norikra is schema-less stream processing middleware with SQL. It is open source server software, written in JRuby, and runs on the JVM. Norikra is distributed on RubyGems.org, so we can install Norikra with just a gem install, and then we can launch the server with `norikra start`. Norikra has some interfaces: a CLI client and client libraries, shipped as the norikra-client gem. So we can operate Norikra with CLI commands, and we can also control it over the web UI, or over an HTTP API with JSON and MessagePack.

So let me show a demo of Norikra. Okay, we can install Norikra with this command; it's already installed here. Norikra runs on the JVM, so launching it takes quite a few seconds. Here is a sample. This is an example event, a JSON object with two fields, name and quantity. We can feed this data into Norikra with the client's event-send command for the demo target, sales. But with this command, nothing happens; because, as the web UI shows, Norikra requires a target definition. So at first we should define the target with the target-open command. Now, I'm cheating: in this shell I'm using CRuby.
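The demo steps above can be sketched as shell commands. This is a sketch only: it assumes a Norikra server on the default port and a target named `sales` (the target and field names follow the talk), and it needs the server running, so it is not directly runnable as-is:

```
# install and start the server (JVM startup takes a while)
gem install norikra
norikra start &

# open a target, then feed a JSON event into it
norikra-client target open sales
echo '{"name":"tagomoris","quantity":2}' | norikra-client event send sales
```

Until the target is opened, sent events are simply dropped, which is why "nothing happens" in the demo before the `target open` step.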
CRuby does not take several seconds to launch each command. Okay, now I will feed log events continuously into Norikra. Norikra can find the field names from these event streams, like name and quantity. Now we want to select these fields from this event stream with a very simple SQL query, and specify that the query results should be printed to the console. Okay, successfully added, and now we get result data with name and quantity from this event stream.

We can change the input data schema anytime. This is another event example with an additional field, and now I will feed this data to Norikra; the previous query still works on the previous events, even though the schema has already changed. Okay, the optional boolean field drunk is detected automatically. And now we can write SQL against it; for example, we can count input events with WHERE drunk. drunk is a boolean, so this expression is correct. But this SQL is not correct for Norikra: Norikra requires an aggregation range, a window. With that specification, Norikra can count the events where drunk is true within every five events. One query added and... oh, there are too many records, so let me suspend the previous query, okay. We can get the output data... oh, I mistyped, okay. And we can also sum up quantities with this very simple query: select the name and the sum of quantity from the sales stream, group by name, order by the sum descending, and get the summary every five seconds. Okay, and we get aggregated results from an SQL query over this input stream. That is how Norikra works.
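The three demo queries can be sketched in Norikra's SQL, which is based on Esper's EPL dialect. The target and field names (`sales`, `name`, `quantity`, `drunk`) follow the talk; the window syntax is Esper's, quoted from memory, so treat the exact forms as a sketch:

```sql
-- simple filter: select fields from the raw stream
SELECT name, quantity FROM sales

-- count events where drunk is true, in batches of 5 events
SELECT count(*) FROM sales.win:length_batch(5) WHERE drunk

-- sum quantities per name, emitted every 5 seconds
SELECT name, sum(quantity) AS summary
FROM sales.win:time_batch(5 sec)
GROUP BY name
ORDER BY summary DESC
```

The windows (`length_batch`, `time_batch`) are the "aggregation range" the talk mentions: a plain `count(*)` over an unbounded stream is meaningless, so every aggregation query must say over how many events, or how long a period, it aggregates.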
So Norikra handles schema-less event streams, and we can add or remove data fields whenever we want. Norikra uses SQL, and Norikra requires no restarts to add or remove queries. In Norikra's SQL, joins and subqueries are available, and we can add user-defined functions written in Ruby, Java, or any other JVM language, and we can publish those UDFs as RubyGems. Norikra can handle nested hashes and arrays, and these values are accessible directly from SQL like this: the user attribute holds a nested JSON object, and attend is a nested JSON array, but Norikra's query syntax is extended so we can access paths like user.age, or attend.$0 for an array index.

Okay, and we are now using Norikra in our production environment. The first use case is error log summarization. We have a web API for our partners: over that API, a user sends a message to a partner's official account, then our server sends those messages to the partner's server (that is the Business Connect server on the right side), then our partner responds with their own messages to our API server, and we bring that response to our users in our application, LINE. But if a partner's server goes down, all of the error messages would go to that partner, and that is real flooding. So to avoid flooding of error logs and error messages, we summarize these error logs with this SQL in Norikra. Norikra puts out the summarized output, and Fluentd sends an email to our partners. At the same time, the summarized results from Norikra are saved into MySQL, and then our administration console shows just the summarized logs to our partners.
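To make the nested-field access concrete: Norikra flattens nested hashes and arrays into dotted paths so that SQL can reference them. Here is a minimal Ruby sketch of that idea; the path syntax (`user.age`, `attend.$0`) follows the talk, but this helper and the sample event are hypothetical illustrations, not Norikra's implementation:

```ruby
require 'json'

# Hypothetical helper: resolve a Norikra-style dotted path against a
# parsed event. "$N" selects an array index; a bare key selects a hash field.
def resolve(event, path)
  path.split('.').reduce(event) do |node, part|
    part.start_with?('$') ? node[part[1..-1].to_i] : node[part]
  end
end

# Sample event (values invented for illustration)
event = JSON.parse('{"user":{"name":"tagomoris","age":36},"attend":["rubyconf","rubykaigi"]}')
resolve(event, 'user.age')   # => 36
resolve(event, 'attend.$0')  # => "rubyconf"
```

In Norikra itself, these dotted paths appear directly in the SELECT and WHERE clauses, so a query can filter on a field buried inside a nested JSON object without any preprocessing.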
Error log summarization is a very common use case for Norikra. On the other hand, we are also using Hadoop at the same time in our data processing platform, so we can process the same data with Hadoop and with Norikra, and our service programmers can write queries on Hadoop and on Norikra for exactly the same data. These features are used to generate prompt reports and daily fixed reports for our ad services: our programmers write queries on Hive for the fixed reports, batched each day, and also write queries for Norikra to produce prompt reports. Prompt reports are generated every hour, or every several minutes or seconds, and are shown in an administration console for the customers of our ad services.

And this is a use case by a Google engineer, a solutions architect for Google Cloud Platform. He uses Norikra with Google BigQuery to count web service requests and responses, and to show the results on a dashboard in a Google Spreadsheet via Google Apps Script. He uses the nginx web server, and nginx writes its access log to disk. Fluentd runs on each server; each instance reads the access log and sends it to Norikra. Norikra summarizes these access logs per server, and the summarized records are sent to BigQuery directly and to another aggregation node. This is the total overview of the system: the summarized data are collected into an aggregation node, and that node can count the whole status of these events with another Norikra. Then Fluentd writes these results into a Google Spreadsheet, and the spreadsheet shows graphs and summaries via Google Apps Script. If a user wants metrics that are not already defined, they can throw queries at BigQuery, and Norikra and BigQuery process exactly the same data set.
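The nginx-to-Norikra leg of that pipeline can be sketched as a Fluentd configuration. A Norikra output plugin (fluent-plugin-norikra) exists, but the parameter names below are quoted from memory, and the host name and tags are assumptions, so treat this as a sketch of the shape of the config rather than a verified one:

```
# on each web server: tail the nginx access log
<source>
  @type tail
  path /var/log/nginx/access.log
  format nginx
  tag nginx.access
</source>

# forward parsed events to the Norikra server (default RPC port 26571)
<match nginx.access>
  @type norikra
  norikra norikra-host:26571
  target_map_tag true       # derive the Norikra target name from the tag
  remove_tag_prefix nginx
</match>
```

With this shape, each web server pushes its parsed access log lines into a Norikra target, and the per-server summarization queries run on the Norikra side without touching the web servers again.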
This architecture is called the Lambda architecture, named by Nathan Marz, an engineer at Twitter. The Lambda architecture handles batch processing and stream processing together, and we can use SQL, or an SQL-like DSL, in both BigQuery and Norikra. So we can build a Lambda architecture platform with Norikra.

Okay. Why is Norikra written in JRuby? There are two big factors. One is Esper. Esper is a complex event processing library written in Java. Esper provides an SQL-like DSL, and that is the base of the Norikra query language. Esper is a very well-written library, so we can process a very huge amount of data with it. The other is RubyGems.org: RubyGems.org is, of course, an open library repository, and public UDF plugins (user-defined function plugins) for Norikra are provided as gems.

Before Norikra I did not use JRuby, but these two factors made me use JRuby. JRuby, for me, is just Ruby, and that is brought to us by the great JRuby developer team. JRuby made developing Norikra dramatically faster, with Esper plus Ruby's well-known positive points, and with RubyGems.org for easy deployment and installation. With JRuby we can use Java libraries, like Jetty or Esper and many others; I am using many Java libraries in Norikra. That is a very good point for building data processing middleware.

But the minus point is that there are not so many JRuby users, especially in Tokyo. Of course, we can find many CRuby committers in Tokyo and in Japan, but not so many JRuby programmers. So when I got confused about how to call some Java method or do some other processing, there were not so many people I could ask. But this is not such a big minus point; JRuby is very great software, I think.
So this is the wrap-up. If you are interested in Norikra, please check the software's documentation site and its GitHub repository. I believe that Norikra brings more queries, more simplicity, and less latency to our data processing platforms. So if you are interested in Norikra, please try it. Okay, thank you.