All right. Hello everyone, and welcome back to the second presentation of this session, which is called Intelligent Applications. If you're new to the session, welcome. My name is Manfred. I'm a sales specialist for application services and middleware across EMEA, and in this session I'm joined by Thomas and Clement, who will talk about Apache Kafka and Quarkus. Thomas, in his main job, is a barbecue fanatic, and on the side he also works for Red Hat and helps govern Quarkus development there. We also have Clement, whose main job is rabbit shepherd, so you may see some rabbits in the background at some point, and on the side he works a little bit on Kafka and Quarkus. The slides and the recording will be made available to you; you will receive an email in a couple of days with instructions on how to access them. With that, gentlemen, I'm handing over to you. Please take it away.

Thank you, Manfred. I wish my main job was barbecuing, but that's not really true; I only barbecue in my free time. The demo we created for today has influences from both barbecuing and rabbits, so hopefully it will be fun. Clement, do you want to say anything before we get started?

Yes, well, thank you for having us. My name is Clement. I'm also not a rabbit shepherd in my full-time job; I'm a distinguished engineer at Red Hat working mostly on Quarkus and event-driven architectures, and on the side I do have rabbits, so you may see some jumping around me during the presentation. They must be high-jumping rabbits. Well, they get on the seat there, so they can say hi.

So let's get started. The topic is modern data streaming with Quarkus and Apache Kafka. Before we do anything else we're going to cover what Quarkus is, in case you haven't heard about it. If you're familiar with Java, Java has been around for about 28 years now and has been used in many different architectures. Over those 28 years we've seen an evolution in architectures: Java went from mainly being used in monoliths, to being used in microservices, and now more and more in functions. The reason we created Quarkus is that not every runtime fits every pattern here. There are different things that make Quarkus really effective for these newer architectures: Quarkus is a Kubernetes-native runtime designed specifically to work better in environments like microservices and functions. You can absolutely use it for a monolith as well, but there are clear benefits compared to other runtimes when you use it for microservices or functions. Next slide, please.

If you take a traditional Java runtime and try to put it in a container, it can look something like this, and that's the problem; we like rabbits, we don't like lions. You can absolutely squeeze a traditional Java application, an application server, or even a bootable traditional Java application into a container and make it run, but it will behave a bit like this: it's dangerous to touch it, it's dangerous to take it down and reschedule it, and yet rescheduling is exactly the type of behavior we want in Kubernetes.
We want the Kubernetes cluster to be able to reschedule pods as we go along, and what we end up doing with this monolithic type of deployment is basically running it as a virtual machine, which is exactly what we want to avoid; it doesn't give us the benefits of Kubernetes. Next slide, Clement.

The hidden truth here is that Java, and especially some traditional Java runtimes, was designed to run directly on a host and to use all the available capacity. The problem is that taking that into a container environment or a public cloud increases resource usage. A framework that was designed without caring much about memory, optimizing only for throughput rather than for throughput relative to memory, which is what Quarkus does, will cost more: more CPU, more memory, and that can mean your quotas explode. Containers are really about sharing and deployment density, and that's what we want to get to. Next slide, please.

As I mentioned, Quarkus is a stack for writing your applications that is made for cloud native, for microservices, and for serverless. Next slide.

This talk is really about modern applications and modern data streaming. Why do we talk about modern data streaming? Because in the new world we also want intelligent apps. When we talk about intelligent apps, we typically mean software that uses some kind of artificial intelligence technique to perform complex tasks: predictions, automated operations, or simply making sure that everything is working correctly. In the IT industry we need to unlock this, because AI today is very closely connected to the data, typically the data warehouse or the data lake where we keep it. Being able to actually make use of AI and build intelligent applications is what this talk is about, and this is what we're going to show you.

Why are we focusing on this? One reason is that most AI is written in Python, and while Python is a fantastic language for mathematics, statistics, and other AI work, with a lot of AI frameworks available specifically for Python, it is potentially not the same kind of technology you want for your customer-facing business applications. Also, most customers have already built up experience writing those customer-facing applications in Java. So we really need to integrate these two worlds: the data scientists using Python to write applications that are potentially not fault tolerant and not especially performant, and the customer-facing applications that have to be fault tolerant and performant. We also want to continuously train those AI models, so we might want to push data constantly and have continuous training on near real-time data. That's the use case we're talking about here.
We're going to focus on the first one, which is the integration. Before we dive into the demo, I have one more slide: Red Hat OpenShift AI, which is really what we're using as part of the demo today. On it we have deployed an AI service which will analyze pictures for us and look for rabbits in them. The idea is that if there is a rabbit on a factory floor, we know that's a problem, so we want to avoid having Clement's rabbits escaping into the wild onto the factory floors. The system is continuously watching and taking pictures, and if it finds a rabbit in a picture it will immediately raise an alarm. We're running this on OpenShift, and we integrated it with the Quarkus services that we're going to see later on. Take it away, Clement.

So we want to talk about data streaming. Data streaming is a type of application that everybody understands; it's super simple to see how it should behave: you have data flowing from one place to another, with some processing in between, and everything is fine. Except that while a lot of people talk about data streaming, when you really start thinking about it you see that it's not that simple. There is some complexity around these kinds of applications. There is architectural complexity, because the patterns you are going to use are slightly different from the regular RPC or HTTP patterns: you have pub/sub, you have a DLQ, a different set of patterns. The development models are going to be different, and it's models, plural, because the model you use in one component is likely going to be different from the one you use in another, and we will see the difference. There is also operational complexity: how to keep things running, what about memory, what about lag, is the data I'm processing really fresh or not, what about my utilization, what about my scaling? All of this needs to be answered by a good data streaming solution.

We could have talked only about Kafka; however, last year with Emmanuel Bernard I did a Kafka deep dive which is available on YouTube. It's three hours, so be aware of that. It covers development and ops traps and the patterns you need when you do Kafka, so I recommend it if you are interested in developing Kafka applications or running Kafka brokers. I could also have talked about Kafka integration in Quarkus, but again, last year I did that kind of presentation and it's still relevant and completely up to date. So today we wanted to show you something slightly different: modern data streaming.

What do I mean by that? Modern means cloud native, Kubernetes native. Part of that is that instead of vertical scalability, having multiple consumers inside a single app, I want replicas of my app as the unit of scalability, and I can scale up and down like this. As Thomas said, it needs to be reliable: if those flows stop, it's not only your application that is in trouble, it's the rest of the pipeline that won't get data, or that will start getting stale data, and that can be critical. So we need something reliable. We need graceful remediation patterns, meaning the application will repair itself. And obviously, we are in 2023, it needs to be secured from beginning to end, and that includes data integrity,
authentication, encryption, and so on.

What about data streaming itself? Data streaming means that what we manipulate are events. Those events may be sent as messages, which is what we do in Kafka, but the concept is events, and we manipulate unbounded streams of events. We are not manipulating a fixed-size collection of events; it's something that can literally be infinite, and how you manipulate those infinite streams is slightly different, because you can't really accumulate across something infinite or you will have a memory issue. We also need actionable analytics: the data needs to be fresh, we need to monitor the lag, monitor all of this and react accordingly. And that leads to the last point: adaptive scaling based on the application and its behavior, based on the processing time, on the lag, on the arrival rates, we want to scale your components up and down.

So let's go back to the basics. At the core of modern data streaming you have discrete events. A discrete event is an envelope with two main things inside: first the metadata, and then the payload, the data itself. Among the metadata, the key is very important: the key is what correlates events with other events, so don't miss that one. We also have the schema, but the schema is optional. Most of the usage we see is schemaless, but when you want to apply this kind of architecture at scale, we recommend using a schema, which is the definition of the format of the data; it means that everybody can understand what they are exchanging.

Once you have the definition of a discrete event, you get event streams, which are unbounded sequences of related events ordered by time. The keyword here is related events, and this relation is carried by the key: the key inside the metadata is what defines that two events are related. In this picture we have three events with the same key forming an event stream. The second part of that sentence is ordered by time, meaning that when we process one event of that stream, all the previous events have already been processed in order, so we need to be sure we don't lose the ordering when we want to keep it.

If we think about this and go back to what we said about data streaming, how it was simply data flowing across the system, we can already take a first step with this kind of abstract architecture with four layers. On the left side you have the data and event sources: sensors, analytics coming from applications or browsers, HTTP requests, anything that is emitting events. Those events are received by the ingestion layer, which may do some very simple processing and then write them down, initiating the real processing pipeline. This goes on to the third layer, which is stream processing. While at the ingestion layer we want to handle a lot of events, at the stream processing layer we want to process them in parallel, using stateful or stateless processing; we are really going to manipulate and reshape the streams, so it's very different business logic. And once the stream processing completes, it goes to the last layer, where we really see the data products: we can do analytics, we can do data serving, providing the data to other services, and enriched data, data that adds value, can be stored in a data lake, and so on.
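To make the "key relates events" idea concrete, here is a small, hedged sketch using the plain Apache Kafka producer API; the topic name, sensor id, and values are invented for illustration. Events that share a key land on the same partition, so their relative order is preserved, which is exactly what turns them into an event stream.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.util.Map;

    public class KeyedEventsExample {
        public static void main(String[] args) {
            var config = Map.<String, Object>of(
                    "bootstrap.servers", "localhost:9092",
                    "key.serializer", "org.apache.kafka.common.serialization.StringSerializer",
                    "value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (var producer = new KafkaProducer<String, String>(config)) {
                // Three events with the same key ("sensor-42") form one event stream:
                // they go to the same partition and keep their relative order.
                producer.send(new ProducerRecord<>("temperatures", "sensor-42", "21.5"));
                producer.send(new ProducerRecord<>("temperatures", "sensor-42", "22.1"));
                producer.send(new ProducerRecord<>("temperatures", "sensor-42", "22.8"));
            }
        }
    }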
Across this pipeline, of course, we should not forget that we need to handle the metadata: what kind of data do I have, the schemas if you are using a schema-full approach, observability, which is key for monitoring and understanding what's going on, discoverability, how I discover a new type of event, and obviously, as I mentioned already, security.

So how do we implement this kind of application in Quarkus? At the core we use a development model named Reactive Messaging, which is basically a set of annotations that let you implement event-driven microservices in Quarkus. These annotations are @Incoming, @Outgoing and a few others, but basically it's five or six annotations that are relatively simple to understand. Your application is connected to some broker or event source, gets the events through a connector, and then you have a CDI bean, the basic building block of a Quarkus application, which receives the data, processes it, and sends it to another channel, then another channel, and then back to another connector that writes the data somewhere else; it can be Kafka, it can be whatever. Those channels of communication are the names I have here: order, aggregate, report, and so on.

Before going further, we want to show you a demo to explain a little more what we intend to do. Our demo is this one; you can get the code here, and I will give the URL again at the end of this presentation. We have temperature sensors which produce temperatures that are ingested and written into Kafka topics. On the other side we have snapshots: cameras that take pictures of our factory floors, and the same thing, the pictures are sent to Kafka. Those measures are raw and low level: they just contain the device id and the measurement, nothing else. The data enrichment layer looks up that device id and adds a location, like "that temperature sensor is in Stockholm" or "that temperature sensor is in Berlin". The data enrichment does its work by communicating with a database, and then it sends two streams that are handled by the alert manager. The alert manager looks at the temperature, and if the temperature is not in range it sends an alert, which we will see in the dashboard. Same thing for the camera stream: it analyzes all the pictures, relying on a picture analyzer service that looks at each picture, gives us the set of objects that have been recognized, and based on that decides whether there is a rabbit in the picture or not, and we really don't want rabbits on the factory floor.

Let's have a look at how it runs. It's deployed on OpenShift; here are the main components: we have Kafka, my broker; our device database; the measure enrichment service; the alert manager; the picture analyzer, which uses TensorFlow in Python and has multiple instances because it needs to scale; and then we have the dashboard. But right now we don't have any sensors, so if I go back here and refresh, I hope it's working, okay, it's coming: we don't have any messages, we don't have anything, everything is fine. So let's start some messages. I'm going to go back to my IDE and just create some thermometers. These thermometers will be pods; if I go back here, here we go, we have many pods, and all those thermometers have been created.
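To make the @Incoming/@Outgoing model and the enrichment step concrete, here is a minimal, hedged sketch of what such a service could look like. It is not the demo's actual code: the channel names, entity fields, and record types are invented for illustration, and the wiring of channels to Kafka topics is omitted.

    // Device.java -- a hypothetical Panache entity mapping device ids to locations.
    import io.quarkus.hibernate.orm.panache.PanacheEntity;
    import jakarta.persistence.Entity;

    @Entity
    public class Device extends PanacheEntity {
        public String location; // e.g. "berlin", "stockholm", "paris"
    }

    // MeasureEnricher.java -- a CDI bean; the connector feeds "temperatures",
    // the enriched result is emitted on "enriched-temperatures".
    import org.eclipse.microprofile.reactive.messaging.Incoming;
    import org.eclipse.microprofile.reactive.messaging.Outgoing;
    import jakarta.enterprise.context.ApplicationScoped;
    import jakarta.transaction.Transactional;

    @ApplicationScoped
    public class MeasureEnricher {

        public record Measure(long deviceId, double value) { }
        public record EnrichedMeasure(long deviceId, String location, double value) { }

        @Incoming("temperatures")
        @Outgoing("enriched-temperatures")
        @Transactional // database read, so run inside a transaction
        public EnrichedMeasure enrich(Measure measure) {
            Device device = Device.findById(measure.deviceId());
            return new EnrichedMeasure(measure.deviceId(), device.location, measure.value());
        }
    }

Each channel would then be mapped to a Kafka topic in application.properties using the SmallRye Kafka connector, for example mp.messaging.incoming.temperatures.connector=smallrye-kafka and mp.messaging.incoming.temperatures.topic=temperatures.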
The thermometers behave correctly, sending correct temperatures, no problem, everything is fine, and we should see the number of messages per second increasing; it will settle, I think, at around 200 messages per second, something like that, not too much, but no problem, everything is fine. What is already happening here is that we receive the temperature, look up a database to associate that device id with a location, and then send another message with that location. This is what we have here, the annotation I was mentioning. Because I'm reading from a database I need to say that I want to run in a transaction, and then I just look up the device id and send another message, very simple.

Now I can go back and say, well, I want to create a bad thermometer, something that sends data which is off. Again it will appear here; we recognize the icon, the Java icon, as if to say "I'm not completely right, I'm not Quarkus, so I cannot be completely right." And then we go there, and oh, we already have two warnings, meaning that the temperature is off, slightly not good. How does that work? The temperature alert manager gets our enriched temperatures and analyzes them, and if a temperature is not in a valid range it sends a temperature alert; if it's in range, it just skips the message, everything is fine. That temperature alert is then received by the dashboard and displayed, and yes, something is off, but okay, we can send someone to look at it.

The second part of the demo is when we start having snapshots, so I'm just going to create cameras; they are created the same way. Right now everything is fine, and the cameras are actually very simple, they're fakes: they just iterate over a fixed set of factory pictures. Everything is fine; that one looks like a factory, fine; a factory under construction, fine; a lab, okay, fine, everything is fine, and so on. If I go back here and start looking, I see that I already have 32 images that have been processed, but everything is fine, those pictures are all good, no problem.

If we look at how this works, I'll go back to my snapshot analyzer. It's a bit different: instead of taking my events one by one, this time I want to group them and analyze images batch by batch, and one logical way of doing this is to say, well, I want all the pictures from Berlin, all the pictures from Stockholm, and all the pictures from Paris, which are the three locations we are managing today. How do we do this? Same annotations, but the signature of the method is slightly different: instead of taking one event and returning one event, I get a stream, a Multi, an unbounded stream of data, and I return another stream. And that stream is keyed by this key extractor, which means we don't get a simple stream, we get a stream in which all the messages have the same key, the location. That means this code will run for Berlin, run for Stockholm, run for Paris, and if you have a new location it will run for that one too. So that's great, but what do we do with it? We analyze those snapshots, and for this I'm going to invoke our prediction service, our object analyzer. The object analyzer exposes a REST or HTTP API.
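The demo relies on Reactive Messaging's keyed-stream support for this. As a rough, hedged illustration of the same idea with plain Mutiny, so not the demo's actual signature and with invented types, grouping an unbounded stream by a key and batching each sub-stream separately could look like this:

    import io.smallrye.mutiny.Multi;
    import java.util.List;

    public class SnapshotGrouping {

        public record Snapshot(String location, byte[] image) { }

        // Split one unbounded stream into one sub-stream per location, then collect
        // batches of up to 10 snapshots per location for batch analysis.
        public Multi<List<Snapshot>> batchesPerLocation(Multi<Snapshot> snapshots) {
            return snapshots
                    .group().by(Snapshot::location) // one GroupedMulti per key
                    .flatMap(perLocation -> perLocation.group().intoLists().of(10));
        }
    }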
I invoke that analyzer very easily using a Quarkus REST client, and this is my location; "stork" means we are going to use service discovery, SmallRye Stork, to find it, but no problem here. I just send the pictures and get the result back. However, it's slightly slow, so we need to be aware of that; I need to protect myself, I need to be reliable. In my case I just want a retry: I will retry twice, and I add a timeout, so if a call exceeds 20 seconds, something is off, I can consider that the data is not fresh anymore and I need to react to that failure. The rest of this code is just the response structure we get back.

Everything is fine right now, but let's see what happens when I start bad cameras. Those bad cameras, in addition to the other pictures, send this nice picture, and these are very cute rabbits, cute but definitely not something we want on the factory floor. When we recognize that there is a rabbit in a picture, we clearly want an alert, something is off. So let's see: we go back here, they have been created, here we see some icons, and we start seeing two warnings, one for Stockholm. Let's see what picture we have in Stockholm, or Berlin maybe; yes, a rabbit has been found in Berlin, and that's not normal, because in Berlin we were expecting factory floors, and this is not a factory floor. Something is off, we need to act and send someone there to either catch the rabbit, or maybe do some barbecue with it, but at least something is wrong.

That's what I had to show in the demo. Thomas, do you have anything? I think it could be worth clicking on Observe as well. Yes, I need to go to the developer view, here, sorry, and scroll down to the memory. You'll notice that the top three memory consumers here are the Python app containers running TensorFlow, which is to be expected, but really what we want to look at is the capability of running the data ingestion and working effectively with the streaming. If you look at the Quarkus services, they're all running pretty low; I think the next highest is Kafka, and then we have all the Quarkus services at less than 500 megabytes of memory. We could have gone further and compiled to native, which would have compressed that memory even more. And CPU-wise, I think we're completely fine processing the incoming data. This is really the power of combining the two technologies. Yes, because right now we're talking about 220 messages per second, and some messages are pictures, limited to two megabytes, but that's still big, so it's not a simple hello-world comparison; we're processing around 86 images per minute in that case. And that's also why the concepts of back pressure and retries are so important here, to make sure that the user-facing applications aren't affected by the image analyzer potentially taking longer or queuing up. What you said about back pressure is key: we have end-to-end back pressure in this kind of application, in the sense that our Kafka connector understands how much is in flight and decides to pause consumption or production when the downstream cannot keep up.
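As a hedged sketch of the protection just described, retry twice and treat calls longer than 20 seconds as failures, using the MicroProfile Fault Tolerance annotations Quarkus supports on REST client interfaces; the interface name, path, response structure, and the stork:// base URI are assumptions for illustration, not the demo's code:

    import org.eclipse.microprofile.faulttolerance.Retry;
    import org.eclipse.microprofile.faulttolerance.Timeout;
    import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
    import jakarta.ws.rs.POST;
    import jakarta.ws.rs.Path;
    import java.util.List;

    // Hypothetical REST client for the picture analyzer; the stork:// scheme delegates
    // host resolution to SmallRye Stork service discovery.
    @RegisterRestClient(baseUri = "stork://picture-analyzer")
    @Path("/analyze")
    public interface PictureAnalyzerClient {

        // Simplified response structure (illustrative only).
        record AnalysisResult(List<String> detectedObjects) { }

        @POST
        @Retry(maxRetries = 2)   // retry transient failures twice
        @Timeout(20_000)         // a call longer than 20 seconds counts as a failure
        AnalysisResult analyze(byte[] image);
    }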
So we have that analysis and back-pressure support built in, plus fault tolerance like the retry, which I will discuss a little later.

One thing I can do before leaving this view is kill all my devices, so we see the numbers going back down. We should mention as well that the tool we're using, called just, is simply a command-line tool, a bit like make if you're familiar with make; it's nothing special, it just executes commands. All the commands for my demos are inside a justfile, and I click the green arrow to execute them. And we should see the number of messages going down. Yes, that's what we see.

All right, let's go back here again. If you want to see the code you can go there, you can deploy it, you can run it, everything is there and it's reproducible. So, very quickly, what have we seen? We've seen data ingestion, how it works and the kinds of things we need to do. Here the top concerns are protocol, concurrency, security, and resilience. This leads to a building block which is an asynchronous emitter, the kind of thing we see here, where I want to send data or translate from one protocol to another. This type of code is very different from what we do when we cross the layer and go to stream processing, where the top concerns are expressiveness, elasticity, and parallelization, not concurrency but parallelization: how do I process more, usually with more replicas, plus resilience. Here the building blocks are very different: I want to transform events, reshape streams, materialize views, group by, aggregate, join, and things like that. For example, we've seen how to look up in a database and transform an event, or how to manipulate the stream to group my events into buckets of five seconds, that kind of thing.

About concurrency and elasticity: in my demo everything is relatively stable and we didn't have the time to show you more, but we are working on automatically scaling your components up and down, and when everything sits idle it goes down to zero, no messages, nothing should be running. How it works is that we analyze the lag, we are already doing this, we analyze the response time of the application, and we apply queuing theory. We have two strategies: one is more FinOps-oriented, which looks at the utilization of your resources and, as soon as you can safely remove one replica without overflowing the system, does so; the other focuses on freshness, on the residence time, the time between the emission of a message and the production of the outcome message, and in that case, because you want fresh data, you may have to pay a little more to be sure you get it. Those are the two strategies we are developing; both are still a bit experimental.

In terms of resilience, first we need to understand that all our applications store or access a data store which can be used for materialized views, tables, or checkpointing. We provide an SPI; it can use an in-memory data grid like Infinispan, Redis, a database, or a Kafka compacted topic, even if I don't recommend that last one, but we have that storage, and it is shared among my replicas.
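Going back to the ingestion building block mentioned above, an asynchronous emitter bridging another protocol into a channel could look like this minimal, hedged sketch; the HTTP endpoint and channel name are invented, not the demo's code:

    import org.eclipse.microprofile.reactive.messaging.Channel;
    import org.eclipse.microprofile.reactive.messaging.Emitter;
    import jakarta.inject.Inject;
    import jakarta.ws.rs.POST;
    import jakarta.ws.rs.Path;

    // Translate HTTP into events: each measurement posted here is sent onto the
    // "temperatures" channel, which a connector (for example Kafka) writes out to a topic.
    @Path("/measurements")
    public class MeasurementIngestionResource {

        @Inject
        @Channel("temperatures")
        Emitter<String> emitter;

        @POST
        public void ingest(String rawMeasurement) {
            emitter.send(rawMeasurement);
        }
    }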
If something goes wrong, we start with a local retry using our @Retry annotation. It preserves ordering, retries multiple times, and can wait a little, because maybe it's a transient error, a network issue, something like that. After a few retries, if it's still not successful, the message goes into a delayed retry topic, meaning that the message written there will automatically be re-ingested into my stream after some delay, five minutes, ten minutes, and so on, and if it fails again the delay is extended, and so on. This is one of our graceful remediation patterns: we re-ingest the data. Of course this does not preserve ordering, but at least you don't block and you don't stop processing. If at some point your application crashes, then we handle the rebalance protocol, which means that all the messages that have not been processed successfully will be reassigned to another replica of your application, and because the state of your application is stored in our data store, the state of the processing is also restored when this happens. Obviously this works when we scale down, but it also works when you scale up. Thomas, I'll let you conclude.

Well, actually, let me do a time check; we have some questions to answer, so maybe we should start with those before we conclude. The first question was: what kind of database is used in the demo? As far as I know, the database we've been using is Postgres in this case, but it doesn't really matter: Quarkus supports many variants of SQL databases as well as NoSQL databases. We could have been using something like Infinispan or Data Grid to store this, or Redis, or something else. The nice thing with using Postgres, and we have this for the NoSQL databases as well, is that we have this layer called Panache. Panache, a layer on top of Hibernate, makes this extremely easy: all we have is basically this record, the device entity you're looking at. Yes, here, the device entity; that's the one being stored in the database. We should also mention that we are not really storing messages in the database here; messages are part of the Kafka cluster, and what we're using the database for is to enrich the messages, so we're only calling the database to enrich, to find, for example, in which location a certain device is hosted.

Another question was: are you running Quarkus with GraalVM or JIT? We're actually using JIT in this case, running on a regular JVM in the container. Quarkus is very efficient either way, and as I mentioned before we could have compiled this to native and made it even smaller in memory, but in this case we didn't really start and stop any services and we didn't have a scaling example, so we didn't think there was really a need for GraalVM here.

And this is a question for you, Clement: do you know whether this technology can be used for real-time processing, meaning responses below a certain limit of time? Kafka is not going to help you with that kind of strict requirement: Kafka will store the messages and you will poll them, but there is absolutely no guarantee that you are going to poll them within your limit.
So that's not the right technology for this. There are other Java technologies, and I'm blanking on the name, from a London company; if it comes back to me I'll mention it, but there are other technologies designed for that kind of use case. And we should mention that in this particular use case we're focusing more on an event-based type of architecture, so devices are sending out events and we are reacting to events, and that's not really an architecture aimed at hard real-time. But it all depends on priorities; we could have prioritized certain messages over others, for example, to make sure we get a response time within a limit.

Should we go back to the slides quickly? Yes, sure. I see Manfred here now, so we must be running short on time. Hi Manfred. Yes, please, next slide. So, just to reiterate what we talked about in the demo: we had different data sources sending events, in this case temperatures. I would have liked to use barbecues here, but it didn't fit, and we didn't want to barbecue a rabbit, so we decided to go with factory temperatures. Different devices are sending in temperature data and events, and it's our responsibility to do ingestion on those, so that this small event coming in, reporting just the device id and the temperature, can be enriched with information on where this temperature device is actually located, and we can then stream-process it further. Anything else? Well, we talked about being able to do machine learning analytics on the streaming data coming in, so we can have a lot of feedback loops as well: for example, learning from cases where we didn't raise an alert but there still was an error. It could be that the temperature is slowly going up and we want to learn that, or we spot a sudden pattern in certain temperatures and have analytics saying, oh, you're probably going to go over a certain temperature range, for example. Yes, we deliberately kept the code simple, but you can do lots of things, like detecting devices that are offline, or keeping the latest value per device and verifying that the variation is not too large, that kind of thing.

Let me bring up the next slide, and we want to leave that slide up. No, actually, this is the next one; do you want to talk about this one? Yes. One of the things is that Quarkus is based on a reactive core, and the event-driven technology is named Reactive Messaging, and one of the key aspects of that is the scalability and elasticity it can bring. These are not our numbers; this is a number from a case study on a sports retailer chain. Using Reactive Messaging in JVM mode, so not in native, with half a CPU and 512 megabytes of memory deployed on OpenShift, they were able to handle on the order of one million messages per minute per CPU per gigabyte of memory. That is true high density. So that's the kind of thing: with our technology you are going to have fun developing, it's easy, with the live reload that we didn't show today, it's easy to deploy to OpenShift or Kubernetes or to containers, and you also get very good performance, and that's really key here. I can do the next one, or it's up to you. Okay, so, the world ahead.
Modern data streaming is an initiative we are engaged in right now, and we want to continue it for a year or two, I guess; we don't know when we'll be done, maybe we'll never be done, but there are a few things we want to investigate further. Expressiveness: we want a better way to express joins and windowing; materialized views, we have support for that, but sometimes it feels complicated or clunky. Protocols: we just merged Pulsar, so if you don't want to use Kafka but want to use Pulsar, we now have a Pulsar connector that lets you do that. MQTT: we do have an MQTT connector, but it does not support MQTT 5 yet, so that's one of the things. We also have a RabbitMQ connector, which has nothing to do with my own rabbits, contributed by the community, and we didn't really have time yet to push it to the next level; that's something we are going to do too. Elasticity and resilience I already explained. Our observability story is nice but can be improved. Integration with KEDA is already there, but the autoscaling operator is where we want to go, because that will really bring our solution to the next level. And finally, integrating with other data science services and storage, data lakes and data warehouses; query support is something we are debating, I'm not totally sure we need it, maybe the data lake or the data warehouse is a better place for that kind of thing. I think that's all we have. If you're interested in the code, again you have the URL at the bottom, or you can just scan the QR code on the left; if you are interested in the slides, it's the one on the right. Thank you very much. Do we have any other questions? I think we have covered all the questions from the chat.

Thanks again very much, gentlemen, it was an excellent presentation and an excellent demo. If you want, feel free to take a photo of the QR codes, but don't worry, you'll get all the slides and everything in a couple of days by email. With that we conclude the second presentation of this session. We'll now take a short break, two minutes, but then make sure to come back for the next presentation, which will again be about Kafka and some more AI/ML topics. With that, thank you very much and see you in two minutes.