Thanks for coming. I'm Masa, a developer of Fluentd. In this session, I will talk about the basics of Fluentd for new users: an architecture overview, common practices, and use cases with Kubernetes.

First is the Fluentd overview. This is the feature list of Fluentd. Fluentd is designed for streaming data collection: it continuously reads data and sends it to the destination in a streaming manner, unlike a batch model. Fluentd focuses on low-latency log transfer.

Fluentd uses RubyGems for plugin development and distribution. RubyGems is the standard package platform for Ruby, so Fluentd can use the whole Ruby ecosystem for plugin development. We can use any Ruby library to develop a plugin and easily distribute it on RubyGems. It is easy to write a plugin and easy to reuse plugins in your environment.

Fluentd provides several approaches for setup. You can use the RPM or DEB packages on CentOS, Ubuntu, or Debian. Fluentd also provides MSI packages for Windows, so you can run Fluentd on both Linux and Windows.

Fluentd is the logging project of the CNCF and is used with Kubernetes. It also works together with the other CNCF projects. For example, Fluentd has a Prometheus plugin to export Fluentd metrics to Prometheus, so you can monitor your Fluentd instances using Prometheus.

This is one example of Fluentd's log collection, using local files. Fluentd monitors the local files; if an application writes logs to the local files, Fluentd reads the data immediately and sends the logs to the central server with low latency, so you can check the logs on the central server quickly. This is good for log data processing, because you can use Spark or another data processing framework with fresh logs. You don't need to wait for log rotation or for the file to be complete.

Fluentd has a plugin mechanism. You can use plugins for any destination and any data source: you can get the data from any data source and send the data to any destination, with no need to write your own script for your needs. Fluentd is the glue in your data pipeline.

The next topic is the Fluentd architecture; I will show how Fluentd works. This is the detail of the Fluentd design. Fluentd consists of a core part and plugin parts. Fluentd's core covers the common concerns of data collection. The core provides the buffering and retrying features, and error handling for temporary failures and unrecoverable errors. Of course, Fluentd has event routing to the data destinations, and it has more features, such as parallelism for better throughput. The core also provides a collection of plugin helpers for plugin development.

The plugin parts are the interface to the real use cases. The core provides a lot of good features for data collection, and the plugins focus on the logic of the real use cases: how to write to databases, how to read data from an HTTP API, how to parse the data, or how to format the data. The API is very simple, so developers can easily write new plugins for their requirements.

The next point is the event structure. Take Apache logs as an example: one Apache log line is converted into this data structure of time, tag, and record. The tag is the identity of the data source, and it is used for event routing in Fluentd's pipeline. The important part is the record. The record is the actual log content, but the format is a JSON object, not a raw string, because recently many middlewares and data services can accept the JSON format natively. A JSON object is easy to mutate and easy to convert to other formats, so instead of raw strings, JSON is very good for recent systems. This is why Fluentd uses JSON objects, not raw strings.
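As a concrete illustration, an Apache access log line like the following becomes an event roughly like the one below. The exact field names depend on the parser you configure, so treat this as a sketch:

  192.168.0.1 - - [28/Feb/2013:12:00:00 +0900] "GET / HTTP/1.1" 200 777

  tag:    apache.access
  time:   2013-02-28 12:00:00 +0900
  record: {"host":"192.168.0.1","method":"GET","path":"/","code":200,"size":777}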
This image shows the data pipeline inside a Fluentd instance. Incoming events pass through it from input to output: input plugin, filter plugin, buffer plugin, and output plugin. I will talk about each component.

The input plugin is the starting point of the Fluentd data pipeline. An input plugin receives data, for example over HTTP or from local files; it is the entrance to the pipeline. The important point is that the input plugin parses the data for structured logging. As I said about the event structure, the input plugin converts raw strings or binary data into that format.

Next is the filter plugin. The filter plugin is very simple: it mutates records or filters out logs. For example, it can add the hostname to the event record, or filter out unnecessary logs by conditions such as info level or error level. Filter plugins can be chained, so you can apply multiple filters to one data stream.

Next is the buffer plugin. The buffer plugin is not like the other plugins: it is used inside the output plugin. The buffer plugin stores events before the data flush. This is important for performance, because some destinations are not optimized for many small requests; MongoDB or Elasticsearch, for example, need bulk imports for efficient data ingestion. The buffer plugin collects the data, merging small records into one large chunk, and the output plugin uses these chunks for the data flush. This is why the buffer plugin is important in Fluentd. Fluentd provides a file buffer plugin by default; it stores the data on persistent disk, which avoids data loss when Fluentd crashes. We recommend using the file buffer in production environments.

The output plugin is very simple, because the buffer plugin manages the data streams and the data chunks. The output plugin just writes the data to the destination: MongoDB, Hadoop, or other cloud services.

These are real examples of input plugins: Fluentd can receive data from Kafka or receive data from HTTP endpoints. Fluentd also has a lot of output plugins, one per destination: MongoDB, Prometheus, Elasticsearch, Kafka, and more.

The next topic is use cases with configuration. I will show the use cases and the actual configuration for how to use them.

This is a very simple configuration use case: collecting data from local files, receiving data from applications, and sending the data to MongoDB. In the configuration example, the source sections are for the input plugins, and the match section is for the output plugin. In this case, we use the tail input plugin and the forward input plugin. The match section has a tag pattern, here an app-prefixed tag pattern. If a tag matches this pattern, those events are routed to this match section; in this case, the tail input plugin's data is stored into MongoDB. You can specify more complex tag patterns and build complex data routing with tag matching.

Next is multiple destinations. Sometimes we want to send the data to multiple data stores, and we also support this use case with the copy plugin: we can wrap output plugins with the copy plugin. With this configuration, the data with the app tag prefix is stored into Elasticsearch and into HDFS at the same time. If you want to add more destinations, you can just add more store sections inside the copy plugin.
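A minimal sketch of such a configuration, assuming hypothetical paths, hosts, and an app-prefixed tag; the elasticsearch and webhdfs output plugins are separate gems that must be installed first:

  <source>
    @type tail
    path /var/log/app/access.log
    pos_file /var/log/fluentd/access.log.pos
    tag app.access
    <parse>
      @type json
    </parse>
  </source>

  <source>
    # receives events from applications over the forward protocol
    @type forward
    port 24224
  </source>

  <match app.**>
    @type copy
    <store>
      @type elasticsearch
      host es.example.com
      port 9200
      <buffer>
        # persistent file buffer, recommended for production
        @type file
        path /var/log/fluentd/buffer/es
      </buffer>
    </store>
    <store>
      @type webhdfs
      host hdfs.example.com
      port 50070
      path /logs/app.%Y%m%d.log
    </store>
  </match>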
Fluentd also supports multi-tier forwarding. This is mainly for high-traffic environments, where you have lots of forwarders and the data traffic is high. Fluentd has its own forwarding protocol, implemented by the forward plugins. Fluentd's forward protocol supports two delivery semantics, at-most-once and at-least-once. The forward protocol supports high availability and load balancing, and recent versions also support keepalive. You can use this protocol to cover a high-traffic environment with high availability. This model has pros and cons, and it is hard to explain this complex model within this session; if you are interested in it, please check the previous talks at KubeCon. There are several approaches in this model, such as putting the heavy work on the forwarder side or on the aggregator side.
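For reference, the forwarder side of such a setup might look roughly like this; the aggregator hostnames are hypothetical, and require_ack_response switches the forward output to at-least-once delivery:

  <match app.**>
    @type forward
    # wait for the aggregator's acknowledgment (at-least-once)
    require_ack_response true
    <server>
      host aggregator1.example.com
      port 24224
    </server>
    <server>
      # used only when aggregator1 is unavailable
      host aggregator2.example.com
      port 24224
      standby true
    </server>
    <buffer>
      @type file
      path /var/log/fluentd/buffer/forward
    </buffer>
  </match>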
Next, I will talk about containers and Kubernetes. Fluentd is widely used in container environments; for example, according to Datadog's research, Fluentd is the fourth most popular technology running on Docker. Fluentd is used for logs in Docker environments. In this chapter, I will talk about how to collect logs from container environments.

Before talking about collecting, I will talk about the resources for container environments. You can use Fluentd with these setups: plain Docker, Kubernetes, and Helm charts. The Fluentd community provides plain Docker images based on Alpine and Debian. These are official Docker images, so you can run the container on ARM, PowerPC, and more platforms. If you are using plain Docker, not Kubernetes, you can use these images in your environment. On Kubernetes, Fluentd provides Kubernetes daemonset settings and Docker images. These images have several built-in destinations, such as Kafka or Elasticsearch, and of course you can easily add your own destination on top of them. I'm not sure of the details, but the stable Helm chart also provides Fluentd images, so if you use Helm, you can use that chart in your environment.

Next is how to collect data from the Docker environment. The first is a Docker-specific approach: Docker logging with a logging driver. Docker has a logging driver mechanism to send data to external systems, such as local files or TCP, and Docker supports a Fluentd logging driver by default, so you can easily collect Docker logs into Fluentd with this logging driver. The logging driver collects the standard output and standard error data from the container instances and sends it to Fluentd with the client library. So if you set up the forward input plugin on the Fluentd side, you can collect the data from this logging driver. This is a Docker-specific approach: if you use a plain Docker environment, you can use it, but it can't be used on Kubernetes or other environments.

The second approach is using fluent-logger. fluent-logger is a client library available for each language; for example, Ruby has fluent-logger-ruby, and other languages have their own implementations. With this approach you need to add logging code to your application, but you can send any data in any format with the client library, so there is no need to parse logs afterwards. The same as with the Docker logging driver, you use the forward input plugin in your Fluentd instances and collect the data from this client library. The merit of this method is that you can collect any data from inside your application.

The third approach is using a shared data volume. Sometimes we can't change the logging settings of the container image; for example, a middleware may write only to local files. For these cases, we can use a shared data volume and the tail input plugin to collect the local files from the containers.

On Kubernetes, we use a similar approach to collect data from the container instances, because Kubernetes writes the container instances' output to the /var/log/containers directory directly. There is no need to access the containers' standard output or standard error ourselves. So we use the tail plugin and the parser plugin to collect data from the container logs in the Kubernetes environment. Our Kubernetes daemonset images use this setting to collect all Kubernetes container logs and send the data to Kafka or other destinations.

Our Kubernetes daemonset images also add additional metadata to the log data, because on Kubernetes we often want metadata such as the namespace, the container name, or the pod name. So we apply the kubernetes_metadata filter, which gets the Kubernetes metadata and adds it to the data stream. By default, the log data is very simple: the log field has the actual logs. But after applying this filter, the log has more Kubernetes metadata, for example the container's namespace. You can use this metadata on the processing side, for grouping container logs and so on.

This is the summary of container logging. I showed four approaches: the Docker-specific approach, the general approach with the client library, the shared volume for non-customizable images, and, on Kubernetes environments, using the metadata for downstream data processing. You can use these approaches for your needs.
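A rough sketch of that daemonset setting; the paths follow the usual kubelet layout, and the kubernetes_metadata filter comes from the separate fluent-plugin-kubernetes_metadata_filter gem:

  <source>
    @type tail
    path /var/log/containers/*.log
    pos_file /var/log/fluentd-containers.log.pos
    tag kubernetes.*
    <parse>
      # container log lines are JSON with log/stream/time fields
      @type json
    </parse>
  </source>

  <filter kubernetes.**>
    # queries the Kubernetes API server and attaches namespace,
    # pod name, container name, labels, etc. to each record
    @type kubernetes_metadata
  </filter>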
This is the last chapter, where I introduce Fluent Bit. Fluent Bit is an important part of the Fluentd ecosystem. This is a comparison table of Fluent Bit and Fluentd. Fluentd is written in Ruby plus C, but Fluent Bit is fully written in C, so it is a very small application. If you want to reduce memory or CPU usage, you can consider Fluent Bit instead of Fluentd. The Fluent Bit design is very similar to Fluentd: a Fluent Bit event also has a tag, a time, and a record, and Fluent Bit also has tag-based event routing, so you can reuse your Fluentd knowledge with Fluent Bit. Fluent Bit takes the same approach in the Kubernetes environment, so you can also get the Kubernetes metadata in the data pipeline. You can use Fluent Bit on the forwarder side instead of Fluentd to reduce the resource usage.

Currently this pattern is very popular: some users deploy Fluent Bit on the forwarder side, set up Fluentd aggregators for collecting from thousands of nodes, and send from the aggregators to the destination. You get both Fluent Bit's merits and Fluentd's merits in this combination.
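A minimal sketch of the Fluent Bit side of that combination, assuming a hypothetical aggregator hostname; the forward output speaks Fluentd's forward protocol:

  [INPUT]
      Name  tail
      Path  /var/log/containers/*.log
      Tag   kube.*

  [FILTER]
      Name   kubernetes
      Match  kube.*

  [OUTPUT]
      Name   forward
      Match  *
      Host   fluentd-aggregator.example.com
      Port   24224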
The latest Fluent Bit has a stream processing feature in the core, so you can aggregate or do more complex calculations on the forwarder side. This reduces the network traffic and more on the forwarder side. If you are interested in this stream processing on the edge, please check the stream processing article in the Fluent Bit documentation.

Okay, that is the end. Enjoy logging. Thank you. Any questions?

Is it possible to use the Java language to write a plugin for Fluentd?

A Java language plugin? You want to write one in Java?

Maybe Java is just an example. What other languages are supported for writing a Fluentd plugin?

You want to collect Java application logs?

Actually, I'm just talking about plugins. There could be a lot of plugins, so what kinds of languages are supported to write a plugin?

Currently, only Ruby.

Okay, only Ruby. Thank you.

One addition, sorry. Fluent Bit is written in C, so we can write Fluent Bit plugins in C, but Fluent Bit also supports the Go language for plugins. So maybe we can take a similar approach to supporting other plugin languages. This is future work.

I want to know how Fluentd handles logs from standard output.

Standard output?

Yeah. For example, my application just writes a log with something like printf. How does Fluentd collect the log from standard output?

Is this on containers?

Yeah. It's something like: I run the command kubectl logs to get the logs of the pod, and I find the log message there. How can I import that log message into Fluentd?

Fluentd has a plugin for that, so if you pipe logs into Fluentd, you can use that plugin. But if your application runs in a container or on Kubernetes, you can use the approach I showed: Kubernetes sends the logs to the /var/log/containers directory, so you can use the tail input plugin. Or if you use plain Docker, you can use the logging driver.

Is there any plan about OpenTelemetry?

OpenTelemetry? Yes, I am considering support for OpenTelemetry, but OpenTelemetry is still in the middle of the migration from the previous projects. If that migration finishes and the OpenTelemetry libraries are published, we can easily support it as a plugin.

I have a question. In logging systems we sometimes need a buffer, for example Kafka, to slow down the stream for the next stage. In this architecture, there is an aggregator in Fluentd. Can the aggregator act in the Kafka role of holding messages, so you can use Fluentd instead of Kafka as the global data pipeline buffer?

Maybe, but I think Kafka as a product is better than Fluentd for that, because Kafka replicates data across distributed clusters, and Fluentd doesn't support this model. If one instance crashes and the hardware, the HDD or SSD, also crashes, it is hard to recover the data. A Fluentd aggregator is mainly for collecting the data from the forwarders, merging the data streams into one, and sending it to the destination. So we recommend putting the aggregators before Kafka. For example, one customer uses a Fluentd and Kafka combination with a very large number of forwarders, and sometimes Kafka went down because of the high workload. This customer set up aggregators before Kafka, and Kafka became stable. So if you want a global queue, Kafka is better, but aggregators help with the stable operation of Kafka.

One question about the metadata of the workloads on Kubernetes. Your slide mentioned we can append some metadata to the log for the workload. Can this kind of appending be done automatically, or do we need to do it with code?

This one?

No, your slide about the metadata, on the Kubernetes workloads. You can see you append the namespace and the container name. You mentioned this kind of metadata that we may need to append to the log, right? Can it be done automatically, or do we need to code something? How do you collect this metadata?

Right. The filter calls the Kubernetes API server periodically, updating the metadata from the Kubernetes API server, and adds this metadata to the data.

So, as you said, Fluentd will talk to the API server and append the metadata into the logs automatically.

Yes.

Okay. Thank you.

First of all, thank you for the talk. I have a question: compared with other competitors, why should I choose Fluentd?

Which competitors, sorry?

Other competitors like Logstash.

This depends on your needs. Logstash depends on Java, and the JVM is sometimes very large in container environments and the like. If you are a specialist of Java and the JVM and you want to collect into Elasticsearch, maybe Logstash is sometimes better. But Fluentd is a general collector: many users send to Elasticsearch, Kafka, S3, and more in cloud environments, and Fluentd has lots of plugins, so you can choose what is better for your workloads. The features are very similar among recent log collectors.

Okay. Thank you.

Sorry, one question about Fluent Bit. Does Fluent Bit have some solution for log data with different priorities, especially when the output bandwidth is very limited?

What does priority mean here?

For example, we have two different data sources for input and they have different priorities. When the output bandwidth is very limited, we want the data with the higher priority to be sent first.

So if you have two data sources and one has high priority, you want to process the high-priority data first. Currently, maybe no. If you have a good idea, please post it to the project.

Okay. Thank you.

Okay. Thank you.