Okay, let's get ready for the last session of the day. Great that so many people are still here before the beer — I see some already have beer, that's also good. My name is Kai from Confluent, and today I want to give you an introduction to KSQL, the open source streaming SQL engine for Apache Kafka. Before I start: who knows Apache Kafka or is using it already? That's almost everybody — good, that's great for this session. The session is around 20 minutes of presentation, and then I will give a 15-minute live demo so that you really get a feeling for what KSQL is and how easy it is to use.

Most of you know Apache Kafka already, but let's take a quick look at the history. Kafka itself is many years old. Around four years ago Confluent was founded, which is more or less the company behind Apache Kafka, and then Kafka Connect and Kafka Streams were introduced as part of the Apache Kafka framework. KSQL was announced around one year ago, and for about half a year now it has been generally available — so really ready for production. That's a good reason to talk about it today at Big Data Spain: to walk through the different use cases, when to use it and when maybe not to. That's the main idea of this talk.

Here you see what KSQL is and where it sits in the Apache Kafka ecosystem. Kafka is not just a messaging layer; it's much more. In the middle are the brokers, where you produce and consume messages in a scalable way. Then you have Kafka Connect for integration — for example a MySQL connector as source, or Elasticsearch as sink — and Kafka Streams for stream processing natively on Kafka. That's the Apache Kafka framework, and now we also have KSQL, one of several Confluent open source components that sits on top as an optional, additional tool for Apache Kafka. As you see here, it's no accident that it's drawn on top of Kafka Streams: it is built on Kafka Streams. Under the hood the engine is Kafka Streams, but as a user of KSQL you don't see that.

So why did we build KSQL when Kafka Streams already exists? Kafka Streams is built for core developers. If you're a developer on the Java platform — Java, Scala, and so on — you can write stream processing applications on top of Apache Kafka without any additional tooling like Spark, Flink, or Storm. You can do aggregations, filtering, transformations, and all the other stream processing use cases with just your Kafka cluster. However, that's still too complex for many other users. Maybe you're not a Java developer and you'd rather write Python or .NET. Or maybe you're not really a core developer but more of a data engineer who does data processing, filtering, and enrichment — maybe even a data scientist. Then KSQL is the right tool for you, because there is no Java coding involved. It's much easier to use, as you will see, but you still get the same benefits under the hood: scalability, high throughput, and so on. To be clear, KSQL is still not for BI analysts. You have to be able to write SQL-like code, so it's not for a pure business user — you need a technical background to use KSQL.
But it really expands the realm of who can access Apache Kafka and process the data.

Here we see it from another perspective. At the bottom is Apache Kafka — the core brokers — and on top of that you write your clients. That can be, for example, the Java consumer and producer APIs. These are relatively low level: you're very flexible, you can do whatever you want, but you have to write a lot of code. That's why Kafka Streams was introduced two or three years ago for stream processing. It's a wrapper around the producers and consumers with a lot of added functionality, so that you can build stream processing much more easily on top of Kafka. And now, on top of that, we have one more tool: KSQL, which leverages Kafka and Kafka Streams under the hood, but with which you can build stream processing applications just by writing SQL-like queries. That's what I want to show you now in much more detail.

Before I really go into the technology, the concepts, and the architecture behind it, I'll start with different use cases so that you understand when KSQL might be the right choice for you. It's always the 80/20 rule: KSQL will not fit every use case, but for some use cases it will be much, much easier. Even if you are a Java developer, it's worth asking whether KSQL isn't a much better solution for a specific problem.

Here's the first example: KSQL can be used for data exploration and debugging. Even if you don't want to build stream processing applications — say you just have an ingestion layer that moves data from MySQL to Elasticsearch — you can use KSQL for debugging and exploring the data. For example, there's a SHOW TOPICS command (I will show all of this in the live demo) to analyze the stream data and its content, and you can run simple SELECT queries to debug existing data flows. If you have a large-scale deployment and want to look into it in real time, you can query for, say, only your test user and get a continuous stream of that data. As you see in the examples, it's really simple to use, and if you spend 20 minutes with it, I'm sure you can already use it on your own Kafka cluster with your own topics and your own logic.

However, you can do much more than this kind of debugging and analysis — that's really just the first step. Another example is transformation. This is still relatively simple stuff, but very important. On an existing Kafka cluster you often have to do things like changing the number of partitions, because at the beginning of a project you often don't know that you will need more partitions later. Or you have to convert your data: say all your incoming data is JSON, but you decide that in the future you will use Avro or some other format. Or you have to repartition the data — that also happens in Kafka from time to time for different reasons. In the past you did this with command-line tools or with Java code; here you just write a SQL query for it, as in the sketch below.
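Here is a minimal sketch of what such a transformation could look like. The topic and column names are illustrative assumptions, not from the talk:

```sql
-- Register an existing JSON topic as a stream (illustrative names).
CREATE STREAM clickstream_json (userid VARCHAR, page VARCHAR)
  WITH (KAFKA_TOPIC='clickstream', VALUE_FORMAT='JSON');

-- Write a converted, repartitioned copy of the data: Avro format,
-- more partitions, keyed by userid. This query runs continuously.
CREATE STREAM clickstream_avro
  WITH (VALUE_FORMAT='AVRO', PARTITIONS=8) AS
  SELECT * FROM clickstream_json
  PARTITION BY userid;
```

The second statement is a persistent query: KSQL keeps converting and repartitioning every new event that arrives on the source topic.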
That is one example of how you can apply data transformations to existing topics on your Kafka infrastructure. But you can do much more. Those were just the things you can do very easily; what KSQL was actually built for is streaming — the continuous processing of data. Even though these are just SQL queries, you can deploy them to production and scale them up and down, and they work for hundreds of millions of messages and hundreds of nodes. It's the same story as for Kafka Streams or any other Kafka client, even though you just write a SQL statement.

Here is one example of this kind of streaming ETL, where we filter so that we only get our platinum users. This query is written once, deployed, and then it runs continuously, forever, until you stop it. It's not like an Oracle database, where SQL is request-response — that's a very different model.

So here is an end-to-end streaming ETL example. Again we have a MySQL database as the source, and we use Kafka Connect for the integration, which is also based on Kafka. The huge advantage of Kafka Connect is that you don't need another integration layer like an ETL or ESB tool or a separate ingestion layer. With those, you have to manage two clusters, guarantee that the data is delivered without loss, handle replication, and all the other things that can go wrong. Kafka Connect, like Kafka Streams and KSQL, runs natively on top of Kafka, so you just have to manage the one cluster in the middle. With KSQL in the middle you process the data, and then you get the data into Elasticsearch — in this case again with Kafka Connect, but it could be any other Kafka consumer. This example shows change data capture: pushing data out of the database, processing it with KSQL on Kafka, and sending it to a sink, here Elasticsearch. It's all native Kafka, just as you know it, except that you now use KSQL for the processing instead of some other kind of application.

You can do more with KSQL. For example, you can do real-time enrichment, which means joins and powerful aggregations. You can do stream-table joins or stream-stream joins — I will talk about these concepts in a few minutes — but the main idea is that you can combine different streams and join them, like you would join different tables in an Oracle database with SQL. One thing to understand: you cannot join many, many different topics in one query. It's not meant for very complex, index-driven queries; it's for joining streams together to process the data continuously. That's the main use case here.

Here is an example. In this retail scenario we have a stream of sales data from online and offline stores, which is fed into Kafka. On the other side we have information about shipments that arrive — also real-time streaming data. With KSQL you can join this data and process it in real time; a sketch of such a join follows below.
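A minimal sketch of such a stream-stream join — the stream names, columns, and window size are assumptions for illustration, not from the talk. In KSQL, joining two streams requires a WITHIN window that bounds how far apart the matching events may occur:

```sql
-- Match each order with its shipment if the shipment arrives
-- within one hour (illustrative schema; both sides are streams).
CREATE STREAM orders_with_shipments AS
  SELECT o.order_id, o.total, s.warehouse
  FROM orders o
  INNER JOIN shipments s WITHIN 1 HOUR
    ON o.order_id = s.order_id;
```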
Depending on your use case, you can just join the data, or additionally do filtering, aggregations, and transformations — all of that can be done with one or more KSQL processes before the result is sent, in this case, into a MySQL database. But again, that's just one example; any other Kafka consumer could also get this data. The point is that you can do this join at scale and deploy it like any other Kafka application. That's the huge advantage here: just by writing SQL queries, you can do many powerful things at scale with the same reliability as any other Kafka client.

Another use case is real-time monitoring, which is pretty easy to do with KSQL. Here we see an IoT example where we ingest sensor information, which can come from anywhere via Kafka Connect or any other Kafka application. Then you write one query: in this case we do a count over a tumbling window of one minute and check whether the count is greater than five. If so, it's an error, and we can send that error to another system — a real-time dashboard, a mobile app for the operator, anything. Any Kafka consumer can consume the information we produce with this SQL query.

Here is another powerful example: connected cars, which we see a lot in projects. You can combine different sensors, for example from different cars or devices, or — as in this example — combine the streaming data from the cars, which is millions of events per second, with more static data from MySQL. For example, you might have a customer database in MySQL; that data also changes, but not as often. So you combine the stream of car sensors with the database table: both are ingested into Kafka in real time and then aggregated and combined by a KSQL query. The two sides can be of very different scale — the large-scale sensor stream is joined with the low-volume MySQL data — and the result is fed into another system, in this case a custom Kafka Streams application. I repeat it again and again because it's so important: after you have processed the data with KSQL on top of Kafka, any Kafka consumer can pick it up.

The last use case is anomaly detection, another good example that is very powerful even with simple queries. This is hello-world, of course — you can do much more — but it shows well how it works. We aggregate data over 30-second windows: we take all payments, group by credit card number, and count. If within 30 seconds we see three or more payments with the same credit card number, we send an alert to another system. A very simple example, right?
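This fraud check is close to a classic KSQL demo query; a sketch, with hypothetical stream and column names, could look like this:

```sql
-- Flag any card with three or more payments inside a 30-second window.
CREATE TABLE possible_fraud AS
  SELECT card_number, COUNT(*) AS payment_count
  FROM payments
  WINDOW TUMBLING (SIZE 30 SECONDS)
  GROUP BY card_number
  HAVING COUNT(*) >= 3;
```

The result is a table of suspicious card numbers that updates continuously; any Kafka consumer can subscribe to the underlying topic and raise the alerts.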
But this shows you how you can build even more powerful, stateful queries with KSQL. It's not just about streaming ETL, where you filter a single incoming message, transform it, and send it to the next output — you can also aggregate data and build stateful applications like this one.

Here is one more example, and this one is actually a bit more complex; I have also shared the GitHub link with the implementation. We do sensor analytics, and in this case I've built a user-defined function. KSQL has many functions out of the box for things like summing, aggregating, and filtering, but you can also easily build your own user-defined function. Here I've built a powerful one: I trained an analytic model with TensorFlow beforehand, using Google Cloud and autoencoders — the specific technology doesn't matter, you can do this with anything. The point is that I took an analytic model built with some machine learning framework and embedded it into this function. The end user, who doesn't need to know what machine learning is or how to program it, just uses the function and applies the analytic model to the streaming data. So here we have one stream where we do anomaly detection: we get the sensor ID from the car and apply the user-defined function to the sensor value — in this case engine temperature sensors — and the analytic model under the hood does the predictions in real time. That way we find anomalies, and again we can send alerts to an emergency system, or send all the information to Elasticsearch, as here, to analyze it later — both the alarms and the non-alarms.
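Applied in a query, this looks roughly like the following sketch. The function name ANOMALY, the stream, and the columns are stand-ins for whatever is defined in the linked GitHub example:

```sql
-- Apply a custom UDF that wraps a pre-trained TensorFlow model
-- to every sensor event as it arrives (names are hypothetical).
CREATE STREAM anomaly_detection AS
  SELECT sensor_id, ANOMALY(sensor_value) AS anomaly_score
  FROM car_engine_sensors;
```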
So that was a lot about use cases. Now let's talk about how to build this, because that's a key difference to many other stream processing frameworks. KSQL, like Kafka Streams, is pretty lightweight and simple. You build a small application, you can start more instances of it, and you can scale it up and down dynamically at runtime without data loss, with the replication and everything else you know from Kafka. The point is that these are small, lightweight applications or microservices. In contrast to other big data frameworks like Spark Streaming, Flink, or Storm, you don't deploy all your applications into one big cluster where you then have to handle scheduling, resource allocation, and so on. Instead, you build your use case independently of the other projects and teams, and deploy and manage it however you want. You can do A/B testing with your application, deploy a new version, whatever you want — it's completely independent of all the other applications. And not all of them have to be KSQL applications: use whatever you want — KSQL, Kafka Streams, .NET — they are all independent. That's very important, and it's also true for deployment. Since in the end it's just a small, lightweight Java process, you can run it anywhere. One project might run plain Java processes on bare metal, while another is much more modern, with cutting-edge technologies, and deploys to Kubernetes or the cloud to scale up and down dynamically. Every KSQL application, like every Kafka Streams application, can be deployed however you want, independently of all the others. That's the huge advantage.

Let's now talk a bit more about the concepts behind KSQL, because they are really the main reason we built it. You don't have to think about all the low-level details of Kafka — that's the point, you just write SQL queries. You don't have to think about serialization and deserialization, generics, lambdas, and all the things Java developers love but others don't. For many things it's much easier without all of that. However — and this is very important — you can still leverage all the features of Apache Kafka and Kafka Streams. From Kafka you know that it's a distributed system built for high volume, fault tolerance, and scalability; all of that is built into KSQL under the hood. The same goes for the Kafka Streams features: windowing for aggregations, event-time processing, handling of late-arriving data, and even exactly-once semantics, which were introduced into Kafka with version 0.11 about one and a half years ago. All these features are available in KSQL, even though as the end user you just write SQL queries — it's all implemented under the hood by the runtime.

With that, it's no surprise that KSQL is equally viable for any scale of use case. While in my live demo in a minute I will just use my laptop and a single instance, you can deploy it to production and scale it up or down — maybe to a few instances, or maybe to a really big deployment. It scales like any other Kafka application under the hood. You just have to think about things like partitioning — use enough partitions so that you can scale out to enough consumers — but those are the same things you have to think about for any Kafka application. With that in mind, you can scale it up and down like any other Kafka application. And it really is production ready: it has been GA for half a year already, so you can run it — and we mean it — for mission-critical systems without downtime.

It's also fault tolerant, because it's powered by Kafka. One example to explain this in detail: here we see three KSQL instances running, and if one instance fails, that's totally fine. It's just like Kafka: you now have two instances, and under the hood rebalancing and data migration happen automatically — that's built into the Kafka protocol, so it's not something you have to take care of. It's the same for KSQL as for any other Kafka application. And when you then start the same application again, or start another instance — maybe in another Docker container — it rebalances again and uses all three instances of KSQL. It scales in the same way as any other Kafka application.
One important concept I want to mention — and I will also show it in the live demo — is the same as in Kafka Streams: KSQL has the concepts of streams and tables. This is very important, especially if you want to build stateful applications and maybe store information longer in such an application — not just for minutes, but for hours or days, depending on your use case.

A stream, as you see on the left side, is a stream as you know it: one event is one action. You have Alice 1, Charlie 1, Alice 2, Bob 1 — every event stands by itself, like in the Kafka log; everything is one message, and you don't lose any of this data because each is a single event. If you use patterns like event sourcing, you can start from the beginning and process every event.

The table, on the other side, is more like what you know from an Oracle or MySQL database. It updates the information, like a Kafka compacted topic, so you only get the most recent state. It's still a stream — it's continuously updated — but when you consume it, you always get only the most recent information for each key. So in this example, you only get Alice 2; Alice 1 has already been superseded. Use streams or tables as you need them — if you want to store data longer, tables are often the better choice — and both are natively built into KSQL, like in Kafka Streams, with very similar syntax. You can do joins with both: stream-table joins, and also stream-stream joins. The latter was not possible in the beginning, if you have followed KSQL for a while, but it is available in the meantime.

You also have options for windowing. One important thing here: KSQL is not ANSI SQL. That wouldn't make sense for stream processing, because we want to build continuous queries that run long-term in your system to continuously process data, and therefore you have additional constructs — tumbling windows, hopping windows, session windows — to build the different use cases (see the sketch below). It's pretty easy to use; just take a look at the documentation and build your application with the windows you need.
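To make the three window types concrete, here is a small sketch on a hypothetical pageviews stream (the aggregation and names are illustrative, not from the talk):

```sql
-- Fixed, non-overlapping one-minute windows:
SELECT userid, COUNT(*) FROM pageviews
  WINDOW TUMBLING (SIZE 1 MINUTE) GROUP BY userid;

-- Overlapping windows: one minute long, advancing every 10 seconds:
SELECT userid, COUNT(*) FROM pageviews
  WINDOW HOPPING (SIZE 1 MINUTE, ADVANCE BY 10 SECONDS) GROUP BY userid;

-- Session windows: events grouped by activity, closed after a 5-minute gap:
SELECT userid, COUNT(*) FROM pageviews
  WINDOW SESSION (5 MINUTES) GROUP BY userid;
```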
From an architecture perspective, KSQL has three key components: the engine, the REST interface, and the command-line interface. One thing new KSQL users are often not aware of in the beginning: you run this in addition to the Kafka cluster. Data is not processed on the Kafka brokers or on ZooKeeper — KSQL is independent of those; in the end it's just another client of the Kafka brokers. That's very important to understand.

So you can use KSQL from everywhere, and that's one of the reasons we built it, as I explained in the motivation. You can use a web UI — our Control Center, for example, has a development UI with code completion and other pretty fancy features. You can use the command-line interface in your terminal, as I will do in a minute in my live demo. Or you can use the REST interface, and from there you can do anything: a curl command, or an integration somewhere else — in this example I've integrated it into a Python notebook. You can write Python code and from there consume the streaming data, create new queries, select, filter, transform — all of it. For data scientists or data engineers this is perfect: they want to use Jupyter, they want to use Python, and they can still access Kafka.

You can use it interactively for trying things out and analyzing data, or you can deploy the same query to production afterwards. That's what happens in the last mode, the headless mode. First you write your queries interactively, learn what works, and build up your query; then, when you have a query that runs, you deploy it in headless mode, which simply means deploying it into production on a KSQL server. There you typically don't change it anymore — you either kill it and deploy a new version, or it keeps running continuously, because it's meant for production at scale.

Here you see the two different setups. At the top we have the Kafka cluster — the Kafka brokers and ZooKeeper nodes, which you need before you start with KSQL. Then we have the KSQL instances. KSQL, like Kafka Streams, is in the end Java, so the KSQL servers are Java processes running somewhere — and "somewhere" really is anywhere: a Docker container, bare metal, the cloud; that's your choice. Then you have the interface for connecting to them. In the interactive mode that's the command-line interface, or any REST client or web UI, through which you inject your queries into the servers. In the headless mode — the production mode — you instead add the SQL statements to a file and deploy that to the KSQL server. Those are the two options: one for development, testing, and debugging, and one for deploying to production.
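A minimal sketch of what headless mode looks like in practice — the file path is made up, and you should check the documentation for the exact server property, but the idea is a plain SQL file that the server executes on startup:

```sql
-- /etc/ksql/queries.sql: deployed with the KSQL server in headless mode
-- (e.g., pointed to via the ksql.queries.file server property).
CREATE STREAM platinum_users AS
  SELECT * FROM users WHERE level = 'PLATINUM';
```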
There are also several architectural questions you have to understand. Typically you don't build one big KSQL cluster with hundreds of instances; instead you can create dedicated clusters — in this example, one for some finance KSQL queries and another one for sales queries. Whether you create one cluster per query or group specific queries into one cluster is a matter of best practices and depends on the use case; we have some best practices on our website for that. These are really deep-dive questions and discussions, but you have to be aware of them and think about them, as for any other Kafka application.

The last point before I come to the live demo — I mentioned it before with my machine learning example: you can also easily build user-defined functions, both stateless (UDF) and stateful, aggregating ones (UDAF). Here you see just one screenshot: in the end it's one class of Java code with one or two methods to implement, and inside goes your custom logic. In my example I trained a TensorFlow model — an analytic model, which is some binary artifact — and embedded it directly there. But you can do anything; it's just code. You could embed any library, or make remote calls if that makes sense for your case. Today you have to write UDFs in Java; in the future we will also offer UDFs in other languages, so that even Python or Go users can write them in their own language. For now, it's Java.

With that, let's take a look at how this works live. I have to switch my screen — that usually takes a few seconds, and it's really hard for me to type the way I'm standing, so it's not the easiest setup. Now we see the screen — is that good enough? Yes, good.

So what do you see here? I have already started my local development environment. I use the Confluent CLI, an open source tool for development: with one command you can start all the components, and later shut them down again. I just typed `confluent start ksql-server`, and it started all the dependencies for me, including ZooKeeper and Kafka. So now we have a local development environment. And since it's really hard for me to type up here, I will copy and paste the code from my GitHub example.

We have already started the KSQL server, and we also have a test data generator running — this is also available on GitHub, so even if you are not using KSQL, you can use this test data generator to easily configure and generate your own data for your use cases. The use case I show here is clickstream data: I have user data, and I have clickstreams for those users. Now I start the KSQL command-line interface. As I said, I use the CLI here, but the same could be done in a web UI, or from Python, or whatever you prefer. The CLI is really easy to use, and you can do basic things like listing topics. If I do that, we see the Kafka topics: in this case pageviews and users.
Those are the two topics we will use to process data. First, another example of exploration: you can also print a topic if you just want to see some of the data. This makes it very easy to debug your existing infrastructure, even if you don't want to build stream processing applications.

But we do want to build one now, and for that we create our first stream, as you see here. (I always type `clear` so the output is at the top and easier for you to read.) We create a stream — in this case pageviews_original — with some attributes from the clickstream, and you see it is tied to the Kafka topic pageviews. This is the stream I have to create first; its value format is DELIMITED, so comma-separated data. Then we can list the streams — pretty simple, we have just this one — and we can describe the stream to see its schema.

With that in place, we can now write the queries — the actual SQL. First I select from pageviews_original, and here you see the main point of KSQL: continuous queries. This query never stops until I hit Ctrl-C. These are continuous queries, because afterwards you want to deploy them to production at scale, processing millions of messages. So I have to stop it here. The other option — that was the raw pageviews data — is to select specific columns: here I select the pageid and the userid, and you see I added LIMIT 10. That way you can do interactive analysis, where the query automatically stops once the limit is reached.

Next I create a table. The basic difference, again, is that a table only stores the most recent value. Here we create a table for the user data, where this makes sense: for user data you usually only want the most recent address, email, phone number, and so on. You see the format is JSON this time — different formats are supported. We could again show or describe the table, but I will skip that part.

What's more interesting: I will now create a join, and as a first step I create the female users. This works like you would expect from normal SQL: from the original users table I create another table containing only the female users, and from that I can again run a quick SELECT query.
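Reconstructed as statements, the demo so far looks roughly like this — the column lists are approximate, based on the standard pageviews/users example data:

```sql
-- Register the two topics (columns approximate):
CREATE STREAM pageviews_original (viewtime BIGINT, userid VARCHAR, pageid VARCHAR)
  WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='DELIMITED');

CREATE TABLE users_original (userid VARCHAR, gender VARCHAR, regionid VARCHAR)
  WITH (KAFKA_TOPIC='users', VALUE_FORMAT='JSON', KEY='userid');

-- Interactive analysis that stops after ten rows:
SELECT pageid, userid FROM pageviews_original LIMIT 10;

-- A derived table with only the female users:
CREATE TABLE users_female AS
  SELECT * FROM users_original WHERE gender = 'FEMALE';
```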
Back in the demo, you can see it prints the data, which is JSON in this case. Now I create the join: a LEFT JOIN from pageviews_original to the users table. You also see there are different join options for how you want to combine the data — pretty similar to ANSI SQL — and in this case we only want the users whose gender is female. Now that we have created this join, which runs continuously, we can query it. I limited it to three rows here, but if I remove the limit, it's again a continuous query. That's where you see how powerful this is: after testing, you can deploy it to production and scale it up and down across instances. On my laptop it's always a single instance, because this is just a demo.

The last interesting part I want to show is the Avro support, in case you don't know it — although many of you probably use it. Kafka has very good Avro support, which has a lot of advantages: for example, the Confluent Schema Registry uses Avro so that you can really enforce schemas, including schema evolution and all these things. For KSQL it's even more interesting. Let me first create another test stream, so that in addition to my delimited and JSON data I now also generate Avro data. We can first print it to take a look: this is now data in Avro format.

Now we create another stream, and here is an important difference. For the other streams I created, I had to specify the structure — for the user data I had to declare the name, the ID, the types, and so on. Here we use Avro, and with Avro the structure is already known. So I can create the ratings stream from the ratings topic, say the value format is AVRO, and KSQL already knows the structure. If you then say DESCRIBE ratings, you see that KSQL knows the schema, because it's Avro. That's a huge advantage — it makes things much easier if Avro is an option for you. And now, even though we didn't define the structure in KSQL, we can run SELECT queries: remember, the CREATE STREAM had no column list, but based on the Avro schema we can select the data from ratings, and also filter on an attribute — WHERE stars is smaller than three, for example. So it's pretty easy to use, and that was the main goal of the demo.
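Roughly, the join and the Avro part correspond to statements like these (again approximate, following the standard demo data):

```sql
-- Stream-table join, keeping only the female users:
CREATE STREAM pageviews_female AS
  SELECT p.pageid, u.userid, u.regionid
  FROM pageviews_original p
  LEFT JOIN users_original u ON p.userid = u.userid
  WHERE u.gender = 'FEMALE';

-- With Avro, no column list is needed; the schema comes from Schema Registry:
CREATE STREAM ratings WITH (KAFKA_TOPIC='ratings', VALUE_FORMAT='AVRO');

SELECT * FROM ratings WHERE stars < 3;
```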
At the end I can also tear everything down again, which is also pretty nice if you have never seen the Confluent CLI before: I just exit and type `confluent destroy`, and it shuts everything down. So when I do a demo somewhere else tomorrow, I can start from scratch — or if you break something during development, you can also start from scratch. It's really convenient, but it is only meant for local development.

So that was my live demo. I hope you got a feeling for what KSQL is and what it looks like, and with all the use cases I explained, I hope you see that there are many different use cases you can apply it to: very simple ones, very technical ones, and more powerful ones.

If you want to get started yourself: KSQL is fully open source, it's on GitHub, and we have several different options to get you started. The first thing I really recommend is the quick start guide on our website or on the KSQL GitHub page. There you either run it locally on your laptop, like I just did, or in Docker containers, and it walks through steps similar to what I just showed you, so you can understand the basic concepts. After that, I'm sure you can already start using KSQL with your own Kafka topics and your own cluster.

If you then want to understand more and look at larger examples, we have a pretty cool clickstream analysis demo where we also use Kafka Connect to integrate data, with Elasticsearch and Grafana as the sink, and you see different examples of how to build a dashboard on top of it. The nice thing is not just running it — it's a single Docker command, running in five minutes if you have Docker installed — but that you can then look at the SQL queries under the hood which are running. So you don't just see the dashboard; you can understand what is going on, adjust it, and play around with it.

In addition, we also have KSQL recipes, which is also pretty cool. It's an initiative that started just two months or so ago, so there will be much more in the future, but the idea is to show you simple snippets of different things you can do — for example converting from JSON to Avro, or industry-specific use cases like sensor analytics, or how to integrate syslog data with KSQL and process it. This is just the beginning; we are happy to get PRs from the community, and we will also add many more examples ourselves.

So if you want to try it out, go to the GitHub page — it's fully open source. We are also happy to help you on our Confluent community Slack, where many people are active. That's not just for KSQL but for all the Kafka components, and both end users and our engineers are there, even for technical questions and feature requests — they are happy to hear from you.

With that, here is my summary. KSQL is the streaming SQL engine for Kafka, and it covers many, many use cases. You can use it for repartitioning and other very technical things; for stateless continuous processing like filtering or transforming single events before sending them on; or for stateful aggregations. And as you have seen with the UDAF example for machine learning, you can in the end build almost anything, including very powerful things, and deploy them to production. It's GA and ready for production, even at large scale with really millions of messages.

And with that, I'm done. I see we have four minutes left, so maybe there are some questions. There is one — I'm not sure, do we have a microphone? Yeah. Hello.
Question from the audience: you have been talking about replication if there is a problem with a server, but KSQL is doing a transformation — what if my whole cluster is down? What happens? Does it keep its own offsets or something similar?

So the question is: if the Kafka brokers are down, or the whole cluster is down, while KSQL is doing something with the data, what happens? That's the same as with any Kafka application. It works with offsets, as you said, so you will not lose any data. Even if you have a KSQL cluster with three KSQL servers and all of them go down, it knows where to start again. It's the same concept as for Kafka consumers: it resumes from the offset where it stopped. So you get the same reliability as with Kafka in general. Okay, then thanks a lot, and come to me if you have more questions — and there's one more.

Hi, thank you, two questions. First: can you do with the new KSQL all the processing you could do with the basic API? And second: I understand you can deploy it, as you said, in Docker or Kubernetes as a consumer application, but on the server side — I think Confluent is making an effort to put the server on Kubernetes as well; is that already in place or not yet?

For the first question, about the API: KSQL does not offer one hundred percent of the Kafka Streams API, but we are pretty close, and the goal in the end is to support everything. There are some limitations — for example, interactive queries from Kafka Streams are not included in KSQL yet. So it's not one hundred percent; it's the 80/20 rule again, which is actually a good fit, because KSQL is not meant for every use case. It's okay for now, and we want to improve it, but it's not yet as complete as Kafka Streams.

For the deployment part: first of all, you're right, we are seeing a lot of traction around Kubernetes and containers, so we are working on that. It's actually already more or less in place for us internally: we have Confluent Cloud, Confluent as a service, where we host everything for you so that you don't have to manage Kafka — you just use it, with SLAs like 99.95% availability and latency and throughput guarantees. What we already have in beta, and which is coming out soon for customers, is hosted KSQL, and under the hood that also runs as Docker containers on Kubernetes, because our cloud infrastructure runs on Kubernetes. The same is then true for KSQL. So it's already there, we know how to do it, and we also help our customers doing it in their own environments if they want — it's already working, and we have the best practices internally to help you get it running, also for production. Thank you.

If you have any more questions, you can come later to Ask the Expert. And one important thing: if you have found something like a wallet, please take it to the stand at booth 23. Thank you. Okay, thanks for coming, guys.