 Hello and welcome to the Enhance Your Kafka Infrastructure with Fluvio webinar. I'm Grant Swanson, your host for today's session. We will start with a few educational slides on Kafka versus Fluvio, then go into a live demo, and then finish up with Q&A. Please enter questions anytime during the session in the questions window. In the handout section, you will see some of our newest content, including a speaking session on Wasm technology from our CTO, Sehyo Chang, at the most recent KubeCon event. We also have content in PDF format that includes a Java versus Rust solution brief, a financial services solution brief, a real-time economy white paper, an ebook on how to enhance machine learning models with real-time data pipelines, and an InfinyOn Cloud datasheet. InfinyOn Cloud is a fully managed Fluvio service for enterprises. Now we'll cover the agenda for this webinar: the differences between Kafka and Fluvio, why companies are leveraging Fluvio to complement Kafka, how to quickly get started with Fluvio, and then a live demo with Q&A. Let's talk about some of the differences between Kafka and Fluvio, starting with programming languages. Kafka was built over a decade ago using the Java programming language. Fluvio was recently built using the Rust programming language. Fluvio differentiates itself as a technology by providing real-time stream processing and data transformation using WebAssembly, all in a single unified cluster. Smart modules allow for programmable stream processing for clean data. We will dive deeper into smart module functionality in the upcoming slides. First, we wanted to cover a recent POC that we just completed with a company that serves over 3,000 global banks and enables over 1.2 billion people to carry out their daily banking needs. In the POC, they did benchmark testing on Fluvio versus Kafka, and the results were outstanding.
The performance improvements include up to a 3x latency improvement, up to 5x better throughput, up to 7x better CPU utilization, and up to 50x better memory utilization. Why are companies leveraging Fluvio to complement Kafka? The number one use case and reason why companies are leveraging Fluvio to enhance their Kafka infrastructure is to stream clean data to a Kafka topic. The data collection pipeline shown in the diagram starts with an HTTP service and a Fluvio HTTP source connector. Streaming data flows into the Kafka sink connector, where a smart module is applied. Smart modules are one of our premier features, allowing users to have direct control over their streaming data by providing a programmable API for inline data transformation. Finally, the transformed data event flows to the Kafka consumer. Users can quickly and easily stream clean data to Kafka with no external ETL tools. With stream processing and real-time transformation, companies can stream data from a sale, a shipment, or a trade and perform any transformations that are needed before the data sinks to an application or database. The two most common use cases are building rich front-end customer experiences and real-time back-end operations. We believe that in the coming years there will be a fundamental paradigm shift in data engineering, where companies will move away from traditional extract, transform, and load (ETL) tools to an STL infrastructure: stream, transform, and load. The InfinyOn approach to intelligent data streaming includes a new concept called smart pipelines. Smart pipelines are unique to InfinyOn and include WebAssembly smart modules that can process and transform data with single-digit millisecond latency. Business logic can be applied to ensure data quality.
Smart modules can be deployed at source connectors, at sink connectors, or within the stream processing unit, eliminating the need for ETL tools and providing a single solution with distributed intelligence and centralized control. Now we're going to hand it over to Alex Mekeloff, our architect, and Sebastian Imley, our principal engineer, for a live demo. Welcome, Alex and Sebastian. Thank you, Grant. Give me just one moment. I'll hand this over to you. There you go. It's all yours, Alex. Thank you. As Grant mentioned, we are going to demonstrate the Fluvio data processing pipeline, and we are going to do it in two stages. In the first one, we go via the HTTP connector, connecting to the Finnhub API, populating a Fluvio topic, and then writing the data into a Kafka topic via the Kafka sink connector. Then we are going to expand it by adding an aggregate smart module, which will apply a specific data transformation before writing data into the Kafka sink connector. The transformation happens just before the data moves into Kafka, so, as Grant mentioned, Kafka receives already-clean data. We are going to start by installing a local Fluvio cluster. This is a fresh EC2 instance, and what I'm going to do now is install Fluvio using our open source installer. As you can see, it's downloading and installing. Then we are going to start the Fluvio cluster. This adds the fluvio command to the path. The local Fluvio cluster is spinning up, installed inside of minikube. This instance already had minikube installed, but you can follow our website for the prerequisites. Now I'm going to test the Fluvio cluster. I'm going to create a topic called greetings, and then produce a single message via the fluvio produce command. Currently I'm using the Fluvio CLI. Then I'm going to consume the greetings topic, again via the Fluvio CLI. As you have seen, it's quite straightforward: we installed the Fluvio cluster and we created a topic called greetings.
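The install-and-smoke-test sequence Alex walks through can be sketched with the Fluvio CLI roughly as follows. The install URL and flags reflect the public docs of the time and should be treated as assumptions to check against the current Fluvio website; the commands assume minikube is already running, as in the demo:

```
# Install the Fluvio CLI (exact install URL may have changed; see fluvio.io)
curl -fsS https://packages.fluvio.io/v1/install.sh | bash

# Make the fluvio command available on the PATH for this shell
export PATH="$HOME/.fluvio/bin:$PATH"

# Start a local cluster (assumes minikube is already running, per the prerequisites)
fluvio cluster start

# Smoke test: create a topic, produce one message, consume it from the beginning
fluvio topic create greetings
echo "Hello, Fluvio" | fluvio produce greetings
fluvio consume greetings -B -d   # -B: read from the beginning, -d: exit when caught up
```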
We created it and then we consumed it from the beginning. Now, this instance already has minikube, and it also has a pre-installed local Kafka instance, which is a development instance: nothing resembling production, but it already has multiple topics. So this is the local Kafka cluster. What I'm going to do next is create an HTTP connector, which will read from the Finnhub API. I'm going to build on the demo which Sebastian already presented in the previous webinar. You can clone our previous webinar demo, but the idea is that you need to register on finnhub.io and obtain an API token. Then you need to create a configuration file for the HTTP connector, which looks like this. As you can see, that's the name of the connector, that's the type of the connector, and this is the Fluvio topic the data will be written to. It also says that it will create the topic if it doesn't exist, and that it will query the API endpoint using the GET method every three seconds. We have a Makefile which conveniently wraps the commands, and with this one we are going to create a connector via the fluvio connector create command. As you can see, it's quite straightforward: fluvio connector create. Now I'm going to check that the connector is running, and let's check that it is actually producing to a topic. I'm using the fluvio consume command, watching the topic named genie-stocks. As you can see, we get stock data every three seconds. Now what we are going to do is write into the Kafka cluster. As I mentioned, I already have a cluster which we created using Docker Compose, and the Docker Compose file is provided in the example, which will be shared after the webinar. The important point is that it has an advertised host matching the minikube gateway, so our Fluvio cluster can actually connect to Kafka.
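A configuration file like the one Alex shows might look roughly like this. The field names follow Fluvio's HTTP source connector configs of that era, and the connector name, topic name, and Finnhub endpoint are illustrative stand-ins rather than the demo's exact values:

```yaml
# http-source connector: poll Finnhub and write each response to a Fluvio topic
version: latest
name: finnhub-quotes          # illustrative connector name
type: http-source
topic: genie-stocks           # Fluvio topic the responses are written to
create-topic: true            # create the topic if it doesn't exist
parameters:
  endpoint: "https://finnhub.io/api/v1/quote?symbol=AAPL&token=<YOUR_API_TOKEN>"
  method: GET
  interval: 3s                # query the API endpoint every three seconds
```

The connector is then created with something like `fluvio connector create --config <file>.yaml`, which is what the demo's Makefile wraps.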
What we need to create first is the configuration file for the Kafka connector, and it looks pretty similar to the HTTP connector's. Except here the type is kafka-sink: it picks up the data out of the genie-stocks topic and then writes it into the kafka-process-fluvio Kafka topic. So let's create the Kafka connector, and check that the Kafka connector is running. The fluvio connector logs command can help debug if there are any issues with the Kafka connector, but there are no issues so far. And let's check that we are actually populating the Kafka topic. We are going to do it using kafka-console-consumer, which is available inside the Docker container currently running on that server. Let me change the topic; this one should be kafka-process-fluvio. Yeah, as we can see, every three seconds we are populating the Kafka topic. Right, so that was the basic introduction. We have created two connectors in Fluvio: one HTTP connector and one Kafka sink connector. What we are going to do next is more interesting: we are going to add the smart module, an aggregate built from the aggregate function template. That smart module pretends that we purchased some stock warrants. We are going to populate them using the text file warrants.txt, and it will reshape our stock ticks and compute the potential profit or loss. What I'm going to do is again go back to Sebastian's demo. The smart module is written in Rust, and what we are going to do is compile it into WebAssembly, upload the WebAssembly into the smart module store in the local Fluvio cluster, validate it, and then attach it to the Kafka sink connector. Again, there is a convenient Makefile which is already online. So that's already applied; we can load the smart module. As you can see, those are just commands which are pretty long, but this Makefile is a wrapper around the Fluvio CLI.
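The Kafka sink configuration described at the start of this step might look roughly like this. Again the field names mirror the Fluvio connector configs of the time, and the broker address is an assumed stand-in for the minikube gateway address mentioned in the demo:

```yaml
# kafka-sink connector: read a Fluvio topic and write the records to Kafka
version: latest
name: kafka-sink-stocks            # illustrative connector name
type: kafka-sink
topic: genie-stocks                # Fluvio topic the sink reads from
parameters:
  kafka-url: "192.168.49.1:9092"   # broker's advertised host, matching the minikube gateway
  kafka-topic: kafka-process-fluvio
```

Attaching the aggregate smart module later in the demo amounts to referencing the uploaded module from this same config; the exact parameter name depends on the connector version.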
What I'm going to demonstrate is that we have now uploaded the price-warrants aggregator. Now we need to produce the warrants, to pretend that we actually made a purchase. That is a very simple command which again uses the Fluvio CLI to populate the topic. Then we are going to check whether our smart module is actually working. Okay, and as you can see, when the smart module was applied, the shape of the record changed. It also computed the current profit, which is actually a loss: if we bought the stock at 11, at that purchase price we would probably be at a loss at this point in time. What we are going to do next is deploy this smart module so it runs inside the Kafka sink connector. For that, the only thing we need to do is specify in the connector configuration that the price-warrants aggregator smart module needs to be applied, and that it is an aggregate smart module. So what I'm going to do is again create a Kafka connector. You can see now we have a single HTTP source, and then we have two Kafka sinks, one of them running the smart module and the other not. Let's read the topic and validate that our smart module's aggregate function is working. Again, I'm reading directly using the Kafka consumer, and you can see the value is currently zero. The reason is that the smart module in the Kafka connector starts reading at the latest offset; the warrants were already in the topic before the connector started reading records and applying the aggregator. So we need to produce our warrants again. Now we are running our Kafka connector, and as you can see, the aggregate smart module was applied and we currently have a current profit, which is actually a loss.
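The profit-and-loss computation the aggregate smart module performs can be illustrated in plain Rust. This is a sketch of the arithmetic only, with assumed struct and field names, not the actual SmartModule from the demo (which would wrap logic like this in the fluvio-smartmodule crate's aggregate API and compile it to WebAssembly):

```rust
/// A purchased stock warrant, as seeded from warrants.txt in the demo.
#[derive(Debug, Clone, Copy)]
struct Warrant {
    quantity: f64,
    purchase_price: f64,
}

/// An incoming stock tick from the price feed.
#[derive(Debug, Clone, Copy)]
struct Tick {
    current_price: f64,
}

/// Current profit (positive) or loss (negative) if the position were
/// closed at the tick's price.
fn current_profit(warrant: &Warrant, tick: &Tick) -> f64 {
    (tick.current_price - warrant.purchase_price) * warrant.quantity
}

fn main() {
    // Bought 100 warrants at 11.00; the latest tick shows 10.25,
    // so the position is currently at a loss.
    let w = Warrant { quantity: 100.0, purchase_price: 11.0 };
    let t = Tick { current_price: 10.25 };
    println!("current profit: {:.2}", current_profit(&w, &t)); // -75.00
}
```

In the connector pipeline, this computation runs per record as ticks arrive, so the downstream Kafka topic receives records already enriched with the profit/loss figure.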
So to reiterate what we have demonstrated in such a short period of time: we were able to install Fluvio, create an HTTP connector which reads data into a Fluvio topic, and then apply a transformation, written in Rust and compiled into WebAssembly, which runs before the Kafka sink writes the data into a Kafka topic. That concludes the live demo part, and we can now move on to the questions. Awesome, Alex. Thank you so much. That was pretty incredible that you got all that done in about 20 minutes. Give me just one moment; I'm pulling up the questions from the audience. Again, if anyone has any questions, feel free to enter them into the questions window; even the chat box is fine. And I'm going to present my screen. Okay, great. We got one question that just came in here. They are asking: what other types of data transformation can be performed with Fluvio? So, off the shelf we have predefined templates for filter, map, filter-map, array-map, and aggregate smart modules. Each of them applies the transformation it describes, but you can put more and more complexity into smart modules, depending on your Rust experience. Awesome. Okay, we've got a couple more coming in here, so I'm just going to read the actual comment and question. It says: incredibly elegant architecture. K8s-native, and connectors plus smart modules and Wasm look like they could be the building blocks of a universal streaming platform of the future. Are you planning on developing a high-level streaming framework akin to Flink or Kafka Streams on top? I can talk about this. I'm Sebastian Imley; I'm a software engineer here. Yeah, this is a high-level strategy question. We've discussed this type of thing for a little bit: how to visualize various streams and how to have more of a graphical interface for defining the Lego pieces that go into data aggregation and ETL sorts of pipelines. So it's definitely something that can be put higher on the roadmap should it be needed.
But for now it's in the backlog. Excellent. Thank you, Sebastian; appreciate that. We've got a bunch more questions coming in here. The next question is, I guess, about a replacement: do you have an opinion on when Kafka is a suitable candidate versus Fluvio? And like I said, I'm assuming that means replacing Kafka with Fluvio. That's a tough one. It really depends on your use case. We're not quite feature complete with respect to Kafka. One thing I will say that's nice compared to Kafka is that we are easier to deploy; in my experience, setting up a Kafka cluster is kind of a pain. So I would definitely say that if setting up a Kafka cluster has been your friction point, Fluvio is a pretty nice thing to deploy. And we're not trying to be entirely feature complete with respect to Kafka either, because Kafka, for example, doesn't have the smart module features that we've shown here. But I will admit that Kafka does have more connectors to various services, because the product has existed longer. I agree with you, Sebastian, and I'll just add to that answer that we're continually building out our connector catalog, but Kafka definitely has more source and sink connectors to choose from. As time goes on, I believe we'll close that gap. Okay, next question here: when will Fluvio support windowing and processing windowed data? That's a hard one... that's not a question I can answer off the top of my head. It has been discussed, but I can't give you a distinct timeline; I would have to discuss it more with our product leadership, our CEO AJ and our CTO Sehyo. Yeah, okay, great. So we'll respond to that question offline, and we'll move on to the next one: what are some of the limitations that Fluvio might have that Flink does not? So... go ahead. Flink is more geared towards analytics, where Fluvio is more geared towards real-time streaming at the core, and then processing within that real-time stream.
So we are targeting a slightly different audience rather than being a direct replacement; I don't see Fluvio being a replacement for Flink directly. And as with Kafka and the rest of the infrastructure, we will probably complement quite a lot of the existing tools during the various transition states of organizations. And as Sebastian said, we think of ourselves as faster, easier to deploy, and easier to manage. Thank you, Alex. We've got a number of questions coming in here, so we'll continue to answer them as long as we have time; we have about 10 minutes left in the webinar. The next question is: can the payload that Fluvio carries be binary? I'm thinking protocol buffers, et cetera. What are the types? For example, in Kafka the client can handle the smarts about payload streaming and sizing. So, can the payload that Fluvio carries be binary? Yes. On a low level, the actual payloads are all binary. I think there was some discussion, like you mentioned, about basically having some kind of metadata around the payloads. We don't quite have typedness for the payloads just yet, but building that on top of Fluvio streams isn't particularly hard. So there's no reason why you couldn't, say, put images or even video into a Fluvio stream. You would just bundle your frame of a movie or a video into a given Fluvio record, which would be big, but there's no reason why you couldn't do that. As for gRPC, I have no reason to believe that you couldn't put gRPC streams into Fluvio. So yes, it's binary and you can do this. For metadata and typedness, you need to do it by convention rather than by Fluvio type checking. Excellent. Next question: does Fluvio have a way to de-duplicate jobs, to avoid modules processing the same record twice? That has been a discussion we've been looking into. Right now, no. But having statefulness on the stream is an area of work we've been looking into; it is near-term on our roadmap.
An example of this would be resuming from an offset for a given consumer stream. Excellent. A couple more questions here. Audience, please keep asking; if you have more questions, please continue to enter them into the questions window. We have two left here. The next one is: likewise, what are some limitations that Flink has that Fluvio does not? Alex, do you want to take this one? I think this one is difficult to answer. What are the limitations of Flink? People who are using Flink in production could probably tell you more. That said, we are an up-and-coming platform, and we are building out our connectors. If you already have a large infrastructure, we can complement it; if you are starting from greenfield development, then we will probably allow you to progress way faster than Apache Flink or Kafka. To expand on that, I think one area where Flink probably lacks is the ability to run on edge nodes for Internet of Things devices, such as running on a Raspberry Pi. My hypothesis, or my suspicion, is that it would be difficult to deploy Flink on the edge nodes of various infrastructures, whereas for Fluvio we have an installation for Raspberry Pi, should you want to do some kind of IoT project. That's definitely an area where I would say Flink is lacking. Excellent; appreciate that answer, Sebastian. Looks like we have time for one more question, which just came in. The question is: does Fluvio support any flavor of SQL, or is it in the plans? Yeah, I can talk about that. Right now we support a very low-level representation of Postgres, and it's mostly used for CDC between two Postgres instances. It is in the plans to expand this to go from Postgres to MySQL or vice versa. It depends on interest: if someone shows interest in one of these things, we would definitely love to add it. So it is in the plans, and we'd love to make it a higher priority. I would expand on that by saying that we would like to validate our connectors with use cases.
So if you think that Fluvio can be useful for you in particular and you have a particular use case in mind, that would be a really good opportunity for us to validate our platform and for you to prototype your use cases. Excellent. Appreciate that, Alex. I really appreciate the audience being so engaged and asking so many questions. We're going to wrap this up with a few calls to action. It's very simple: if you're interested in signing up for InfinyOn Cloud, which is our fully managed Fluvio service, you can do that at infinyon.cloud/signup, or you can go to our homepage, infinyon.com, and click "try now." We also have instructions on how to build a streaming app, and we have the ability for anyone to schedule an event stream processing demo. We're happy to engage with anyone in the audience who is interested in testing out Fluvio and getting a POC up and running. At this time, we're going to wrap this up. I appreciate everyone's engagement; thank you so much, and enjoy the rest of your day.