Thank you all for coming. I know it's not easy, the last day of the conference, after the party, at 9:30; on any other Sunday I would certainly still be in bed. So, I will talk about Quarkus and Apache Kafka. First, maybe let me introduce myself briefly. My name is Jakub Scholz, I work at Red Hat in the messaging team, and I have worked with messaging for ten years and something now. Lately I'm focusing mainly on Apache Kafka. I'm a contributor to Apache Kafka, and I'm a maintainer of a project called Strimzi, which is a CNCF project focused on running Kafka on Kubernetes. And just to make it clear: I'm not really involved daily in working on Quarkus or anything like that; I'm more on the other side, Apache Kafka. So there will be a lot of the user perspective here, the take of someone who works daily with Kafka: how does it look from that perspective when you start using Quarkus? So let me first give a quick introduction to Apache Kafka. How many of you have heard about Apache Kafka? I hoped to see all hands up, because it's quite a well-known open source project. How many of you are actually using it? Right, that's great, that's quite a lot of people as well. That's amazing.
So, at least a quick introduction. It's an open source project, originally created at LinkedIn; it is now open source and belongs under the Apache Software Foundation, as the name suggests. There are many definitions of what it is. One of them is that it's a streaming platform, or streaming data platform. Another one is that it's a messaging broker, because yes, it can work as a messaging broker. Another one, which I like quite a lot, is that it's a distributed commit log, because that's actually what Kafka really does internally: it gets some data and it commits them, appends them, at the end of some journal log. These are all valid definitions, but what I always like to point out is that Kafka is more than just the broker. It's really an ecosystem of things which wrap around the broker. That includes the consumer, producer, and Streams APIs, which are part of the Apache Kafka project itself. There's something called MirrorMaker, which is used for mirroring data between data centers. There's something called Connect, which we had a talk about yesterday, which is for integration between Kafka and different systems. And then there's a lot of other third-party tooling with support for Kafka: pretty much everything in the streaming world, and a lot of the stuff in the big data, machine learning, and AI worlds has support for Kafka and for ingesting data from Kafka. There are a lot of different Java frameworks which have support for Kafka, and a lot of clients for languages other than Java. So there's a lot that lives outside of the Apache Kafka project itself but is part of the ecosystem. Now, Kafka on its own has two, or let's say maybe three, clients.
There are the producer and consumer APIs, which are more or less the traditional clients for producing or consuming messages. Their main advantage, compared to all the other frameworks and clients, is that these are really part of the Apache Kafka project, so they are always up to date and always have the latest features of the Kafka protocol, which basically changes with every Kafka release. But of course they do only Kafka and nothing else. And then there are a lot of clients, either these clients or some others, which are in some way integrated with all the different Java frameworks like Spring, Vert.x, or Akka; Quarkus would be another example. So before we jump into Quarkus, let me give a short demo of how actual consumers and producers look with the plain Kafka clients. And before I do that, I will start my local ZooKeeper and Kafka servers. In a lot of demos you can see this done with some Docker Compose setup and so on; I will actually just run it locally. It's quite easy: if you download the binaries from kafka.apache.org, you have there everything you need to get up and running, and because it's Java, you actually don't need to worry about whether you are using Linux, a MacBook, or Windows. So now I have Kafka up and running, and let's look at a very simple producer. It's really a simple Java application, just a main method, nothing else. At the beginning you always configure these Properties, which is how you configure the client, and there are some helper classes to make sure you don't have to remember all the different configuration options, which are really just string keys; you can use these helpers instead. Then I say, for example: OK, you should connect to the broker, or cluster, running on localhost:9092. The key and value serializers configure the classes which tell Kafka how the messages I will be sending should be serialized
into the actual Kafka messages, because in Kafka basically everything is a byte buffer, or byte array. So when I send some string messages here, they need to be somehow serialized into byte arrays, and then on the other end, in the consumer, we need to create strings from them again. Then, with these properties, I really just create the KafkaProducer instance. And this is a really super sophisticated producer, of course: there is a while loop which creates these ProducerRecords, which are the actual messages, because in Kafka the messages are called records. Here I specify the topic into which I want the message to be sent, I specify the key which the message should have, and I specify the value, the message payload, which as you can see here is a very sophisticated JSON created by just concatenating strings. That's it, and I can really just run it, and it will send some messages. I can switch back to the console and check that the messages are actually being received on the broker; you can see we are getting messages with a key "key" and some JSON payload. And it is really the same with the consumer. Some of the things are different, some of the configuration options, but otherwise it is a normal messaging client: you call a poll method, which waits for some time to get some messages, it gives you the messages, or records, you then do some processing, and so on. So if you run it, it will basically run as one would expect. So that was a quick introduction to how the Kafka clients work, and during all the demos I will always try to compare how things are done with the Kafka clients and how they are done in Quarkus, because in some cases that will be different. OK, now we are finally getting to Quarkus, which might be why you came here. Actually, how many of you know Quarkus? I guess that should also be most of you, because I think there were quite a few talks about Quarkus at DevConf already.
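Before moving on, the plain-client producer loop just demoed might be sketched roughly like this. This is a sketch based on the description above, not the exact demo code; the topic name and payload shape are assumed, and it needs the kafka-clients library on the classpath and a broker on localhost:9092 to actually run:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        // Connect to the local broker; serialize both key and value as strings
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            while (true) {
                // The "very sophisticated JSON" built by concatenating strings
                String payload = "{ \"time\": \"" + System.currentTimeMillis() + "\" }";
                producer.send(new ProducerRecord<>("my-topic", "key", payload));
                Thread.sleep(1000);
            }
        }
    }
}
```

The consumer side is symmetric: the same Properties pattern with deserializers instead of serializers, a subscribe call, and a poll loop over the returned records.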
So, I'm actually not sure that I copied the first line word for word from the website: Quarkus is a Java stack designed for the cloud-native age. To be honest, don't ask me why one thing is a Java toolkit, another thing is a framework, and another thing is a Java stack, but it's simply some set of libraries and plugins and so on which you can use to develop your Java applications. One of the great things is that it's using the OpenJDK HotSpot compilers and GraalVM and so on, which gives it some interesting features like a small memory footprint and fast startup and response times. What I think is actually fairly useful and cool is the compilation into native executables; thanks to that you can have smaller container images which start faster, and that's all great. Quarkus doesn't really write everything from scratch; it really builds on top of other projects and components. So if you really dig inside, you will find out that somewhere in there it's using the Kafka clients from the Apache Kafka project; it's just wrapping them in different layers and using them a bit differently than you would on your own. And of course, if you want to know more details, you can go to the website, where you will find all the details and the fancy charts of how fast it is, how much less memory it needs, and so on. But the question is, and to be honest, when you look at the Quarkus page, there is always, pretty much as with any framework, a lot of focus on HTTP-based microservices and these things. So why should you use Kafka from Quarkus? Why should you bother? Why should you not stay with the Kafka clients? I think there are several reasons. So first of all, if you are already using Quarkus for some of the HTTP services or something else: Kafka is great, Kafka is cool, Kafka is everywhere.
So you should try it. Otherwise, one of the advantages of Quarkus is that it brings a lot of different things together. What I showed with the simple Kafka client examples was not really realistic, right? It was a producer which kind of created the messages out of nowhere and sent them to Kafka, and then there was a consumer which consumed the messages from the broker and did nothing with them. That's not really realistic. In reality you will, for example, have some REST API where the user interacts, which triggers the sending of some messages, or you will have some processing of the messages which you receive. So you usually need more than just the Kafka clients, and that's where something like Quarkus gives you a big advantage, because someone already put together all the different integrations, all the different libraries, and made sure they work together and are well integrated. Whereas if I want to use the consumer or producer API directly and decide I want to add some REST API, I will need to know which library I want to use for that, I will need to somehow wire it together, make sure the threading models work, and so on. Then the speed, the memory footprint, and the native builds, that could of course be an advantage as well, but I will get back to it in a moment. There's also the advantage that while the Kafka clients are very much imperative-style clients, with Quarkus there's a lot of reactive stuff. So if you like that style of programming more, then Quarkus is definitely your thing. And yes, I wanted to mention something specifically about things like the fast boot times and the fast first-response times. These things never hurt, right?
It's always great when the application starts quickly, but sometimes you need to think a bit about the context of what you are using it for. With things like HTTP services, there are probably a lot of use cases, in things like serverless and so on, where being able to spin up some new pods or images very quickly and have them able to respond to HTTP requests in a few milliseconds is great, right? With a Kafka consumer, for example, it might not be that big an advantage, because in general, if you know how Kafka works, when a consumer connects to an existing consumer group, there will be some rebalance happening. The rebalance will kind of pause all the other consumers which are in the same group; then it needs to decide what the new assignment of the partitions to the different instances should be; then it distributes this new assignment to all of these consumers, and only then can they continue with the new instances and the new assignments. And now if, for whatever reason, the assignments get shuffled, then maybe there is some cache which was invalidated in the consumers, and they need to reload some data, and so on. So just by the nature of how Kafka works, the consumers usually tend to be more stable, and you don't really scale up and down so wildly in just a few seconds, because that would actually create a lot more disruption than value. So that's something that of course doesn't hurt, but it's maybe less useful with Kafka consumers than with some other applications.
So, just something to keep in mind, right? So let's have a quick look at how we can use Quarkus with Kafka. If you want to start, there's this great code.quarkus.io page where you can create your project. You just check the dependencies, or the extensions, which you want to add, then you generate your application, and it gives you a ZIP file with the POM files, the directory structure, and so on. I'm not going to use it right now, but that's pretty much what I used to get started here; it generated this super nice POM file. But right now I have really just a single dependency, on the Quarkus SmallRye Reactive Messaging extension for Kafka, which is what I will be using here to produce the messages. If you have worked with Quarkus before, or saw it somewhere else, you know that the core of things is usually the application.properties file, where you configure all the different stuff. What's important in this one is the Kafka section which I added, which will configure the Kafka producer. And what's important here is that I say: OK, I want to have some outgoing channel, outgoing meaning I want to produce messages, and I want to call this channel "produced". This is really just a name, so you can use your own name, and you can have multiple of these for different connections. Then I say, OK, this channel should be using the SmallRye Kafka connector, and then, interestingly enough, these are the same options as I had with the Kafka client itself, because somewhere deep underneath it's still the Kafka client that is being used. What's a bit different is that I have to specify here that this channel should be producing to the topic called my-topic. So if you remember, in the Kafka client before, I specified the topic when creating the records; here I configure it in the properties file. And otherwise it's really the same: we connect to localhost:9092.
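The configuration just described might look roughly like this in application.properties. Treat this as a sketch: the channel and topic names come from the demo, but the exact demo file may differ, and the options can also be set globally with the kafka. prefix instead of per channel:

```properties
# Outgoing channel named "produced", handled by the SmallRye Kafka connector
mp.messaging.outgoing.produced.connector=smallrye-kafka
mp.messaging.outgoing.produced.topic=my-topic
mp.messaging.outgoing.produced.key.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.produced.value.serializer=org.apache.kafka.common.serialization.StringSerializer

# Broker address, shared by all channels
kafka.bootstrap.servers=localhost:9092
```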
We use the string serializers, and that pretty much configures the whole thing. The actual producer class is quite simple as well. We have really just one method, annotated with this @Outgoing annotation, which says: OK, this method is producing messages for the outgoing channel called "produced". And then we really just generate the messages; each second we send a new message which looks exactly the same as it looked before, so it has a key called "key" and then some JSON with a timestamp. What you can also do here, if the key is not important for your messages, is really just change the whole signature and just return a String, and SmallRye Reactive Messaging will automatically make sure that it sends a Kafka message which has your string in the body and a null key, basically. So if you don't care about the key, you can really produce just a String or Integer or Double or whatever object you want to produce. The key is quite important with Kafka, though, so that's why I have here an example which includes the key. And now, when I go back to the command line, I can really just start the producer. I can use mvn compile quarkus:dev, which will compile it and start it in dev mode, and when I switch back here, we can see that it's now sending new messages, which actually look the same as the messages before, but they are now sent from Quarkus. And I have here a consumer as well, which looks very similar. The POM file is exactly the same; it just has a different project name. What changed here in the application.properties file is that it used to be outgoing "produced" and now it's incoming "consumed", but otherwise the config is again pretty much a copy of the one we had with the consumer API.
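Going back to the producer class for a second, the two variants of the @Outgoing method described above, with a key and without, might be sketched like this. The class and channel names are assumed from the demo, and I'm using the current Mutiny-based API here; older SmallRye Reactive Messaging versions used Reactive Streams or RxJava types and a KafkaMessage wrapper instead:

```java
import java.time.Duration;
import io.smallrye.mutiny.Multi;
import io.smallrye.reactive.messaging.kafka.Record;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

public class TimeProducer {
    // Emit one keyed record per second to the channel "produced"; the connector
    // maps the channel to the Kafka topic configured in application.properties
    @Outgoing("produced")
    public Multi<Record<String, String>> produce() {
        return Multi.createFrom().ticks().every(Duration.ofSeconds(1))
                .map(tick -> Record.of("key",
                        "{ \"time\": \"" + System.currentTimeMillis() + "\" }"));
    }

    // Keyless variant: returning plain Strings also works, the key is then null
    // @Outgoing("produced")
    // public Multi<String> produceValuesOnly() { ... }
}
```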
We have to specify the deserializers, we have to specify things like auto-commit, the group ID, and so on, but I guess you get the idea. And the consumer is again super simple: you just use the @Incoming annotation to say, OK, this method should be processing the incoming messages from the "consumed" channel. This method is always called when a message is received, the message is passed as the parameter, and then we just log the message and acknowledge it. Again, it's a simple application which doesn't do any sophisticated processing, but what's useful is the Kafka message class. This instance of the Kafka message is really Quarkus', or SmallRye's, internal Kafka message; it's not something coming from the original Kafka client. It has all the information you might need, like the topic, the partition, and I think somewhere there was also the offset to which the message was written, and so on, so we can log all of these. And again, running it is super simple, if I don't make any typos of course, and you can see that we are receiving the messages now. So that was the first very simple example, but most probably we want something more sophisticated. So let's see, for example, how we can add support for SSL encryption, and just to save myself from some embarrassing typos when writing the code, I will just help myself by checking out the right part of the code. And again, a quick look at how this would be done with the Kafka clients. All that really changed here is that we added four more options to the properties. Because we are using TLS just for encryption, we need to specify the truststore and the truststore password, we need to say that the security protocol should now be SSL, and let's say that we want to do the TLS hostname verification. And in my broker, which I'm running locally, I use the port 9093 for TLS, so I changed the port as well, and that's it.
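As a sketch, the four extra options just mentioned would be added to the same Properties object as in the earlier plain-client example; the paths and passwords here are placeholders, not the demo's actual values:

```java
// TLS-related additions to the plain-client Properties (placeholder values)
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore.p12");
props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
// "HTTPS" enables hostname verification (it is also the default)
props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "HTTPS");
// The broker's TLS listener runs on a different port
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
```

The constants come from org.apache.kafka.clients.CommonClientConfigs and org.apache.kafka.common.config.SslConfigs in the kafka-clients library.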
You don't need to do any other changes, and it's actually exactly the same with the Quarkus version. The only thing which really changed is the application.properties, where we added the new options. So we have the same options, truststore and truststore password. You should be careful: I think in the latest version of Quarkus there's a quirk in the way the parsing of the configuration file works; if your password for the truststore is only numbers, it will be interpreted as an integer and the whole thing will crash quite badly. So you need to have some letters in there as well. But that's really all the change I need, and I also changed the port, and that gets me up and running with TLS. So when I start it, I can see again that it's now producing the messages, and it's now using encryption. And if you wanted to do something like TLS client authentication, that's as simple as the encryption itself; you would just need to add a keystore as well. In the current version you can of course use SASL with Kafka too, but there's one exception: it doesn't currently compile into the native image, so if you use SASL, you need to use the JARs and the JVM instead of a native image. Talking about the native image: when you use just Kafka, there's this thing that when it compiles the native executable, it will by default have TLS disabled, unless you specify this quarkus.ssl.native option. So let's give it a try and compile it into the native executable. This is of course compiling an executable for macOS, because I'm running on a Mac. When you want to build it, for example, for Linux while using a Mac, you can of course use the builder image, as with any other Quarkus application, and get an executable which would be really native and which would, for example, run easily in Kubernetes or OpenShift. Now, Quarkus is fast in many different ways, but the native compilation isn't exactly fast, so it takes a few minutes to compile even a simple application like this. And what we get out of it in this case will be
an application which, well, let's see. So that's now finished; let's try to run it, and what we get is this nice exception that the TLS/SSL context is not available. Because Quarkus tries to make everything as small and as fast as possible, it's not included by default. I think that if you are using HTTPS somewhere with the Quarkus clients, it gets enabled automatically, but it's not enabled automatically for Kafka, so you need to add this option. Sorry, let's switch to the presentation mode. So you need to add this quarkus.ssl.native=true option, and now let's try to compile it again. And now, because we enabled it, it should package all the TLS stuff into the native executable and everything should run smoothly. So it's really just an inconvenience, especially when you come to it for the first time and run into it; once you know about it, you quite easily get used to enabling it manually. So now we are almost finished... and we are done. You can see that with the TLS it actually took a bit longer. And we can run it, and you can see that now the TLS works: it connected fine. I actually stopped the receiver here and started it again, so we should see that we are getting the messages. So now, what else could we do to the client which we have? If you remember, we had there these super ugly things where, when I was creating the JSON, I was really just writing it as a string.
So let's see how we can do it with some proper serialization and deserialization support. What I added here is this super sophisticated class MyTime, which has really just one field, called time, which contains a timestamp. Yeah, it's really simple. And for the original Kafka client I also created these serializer and deserializer classes. With Kafka, in most cases you would use Jackson for serialization and deserialization into something like JSON, so it's really just using the ObjectMapper to deserialize the values from bytes into the MyTime class, and the serializer goes the other way around. Then, when producing, you have basically two options. You can either set the class here as the value serializer, or, because this configuration might be shared by multiple clients sending different messages with different types, you can also, when creating the producer, just change the signature to be <String, MyTime> instead of <String, String> and then specify the serializer there. And exactly the same works with the deserializer. Then you really just create the object, and that's it. So that's not complicated, but it's actually a bit easier with Quarkus. So let's have a look at the producer. I have here the same classes, the same MyTime class, but for the producer I actually do not need a serializer, because Quarkus has this JSON-B library, or extension, which is actually able to serialize any class. So I really just need to specify in the properties file that the value serializer should be the JSON-B serializer. But you should be careful if you are using some custom serializers or deserializers for the REST APIs in Quarkus: this is actually a different class with the same name. So it's a slightly different class for the Kafka client than it is for the REST APIs.
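A minimal sketch of such Jackson-based classes for the plain client might look like this. The class names are assumed, error handling is reduced to re-throwing, and it needs the kafka-clients and jackson-databind libraries:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

// The "super sophisticated" payload class: one field with the timestamp
class MyTime {
    public String time;
}

class MyTimeSerializer implements Serializer<MyTime> {
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public byte[] serialize(String topic, MyTime data) {
        try {
            return mapper.writeValueAsBytes(data);   // POJO -> JSON bytes
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

class MyTimeDeserializer implements Deserializer<MyTime> {
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public MyTime deserialize(String topic, byte[] data) {
        try {
            return mapper.readValue(data, MyTime.class);   // JSON bytes -> POJO
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

These classes can then be set as the key/value serializer or deserializer in the client Properties, or passed directly to the KafkaProducer or KafkaConsumer constructor.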
So that's just a warning. And the only other thing we change is in the producer class, where we simply use the MyTime class and a MyTime object instead of the String. And when we go and run it, then, obviously, I tried it before, it should work. Similarly for the consumer: here I actually needed to add a deserializer, which extends the JSON-B deserializer and tells it that it should be using the MyTime class, and in the application.properties I again had to specify that this is the deserializer which is used. So for the consumer I actually had to write some additional code, but otherwise the consumer code itself again just takes the messages as <String, MyTime> instead of <String, String>, and that's it. And we can run this one as well, as long as I don't reuse the wrong commands. So here you can see that it's receiving the messages; it's using toString to log them, so that looks a bit different now, but it's coming from the same message. So this is what you would normally prefer to work with, instead of putting the JSON together as a string. So this is again very simple. And what I wanted to show here as well: as I mentioned at the beginning, there is this Streams API in Kafka, which actually does stream processing, so it's kind of a combination of consumer and producer. For that we add the Kafka Streams extension to the dependencies, and then the configuration is a bit different, because most of the configuration is under this quarkus.kafka-streams section. There are like four options which can be changed at runtime, and then some additional options can be specified as kafka-streams.<key>, with the key of the Kafka configuration. Then, in the actual stream processing class, what we do is we basically tell the Kafka Streams library to start reading messages from the topic my-topic, which is where we were sending the times before, and we say, OK, we want to consume it as a stream with String for the key
and MyTime for the payload. So while the consumers and producers use just a serializer and a deserializer, the Kafka Streams API always uses something called a Serde, as in serializer-deserializer, which is a pair of the serializer and the deserializer, so I have to specify them here. Then, in this case, I do a really simple transformation, where I take the MyTime object and create a MyDate object from it, for example by removing the time part, and then I just send it into a different topic. So this looks fairly simple, and it actually is very simple, but Kafka Streams can do quite a lot of powerful things. It can also do some stateful processing like aggregations, it can do joins of different topics, and when it needs to, it stores the data in some topics it creates on the Kafka broker as intermediary topics. So you can do quite a lot of stuff with it. Oh, that is now running, and hopefully soon we should start receiving something. Oh, we are not receiving anything, because we are not sending anything, right? So let's start sending something again. And now we are getting these date messages. Now, this is great, but, how many of you saw in some other Quarkus talks a lot about live reload of the applications and so on? That's something which is often shown in demos, and it works great with some simple things. But when you use it with something like Kafka, it's not always so easy, because reloading something like the Kafka Streams client takes some time. It's not just the compilation and the startup time: it needs to connect over TCP to the Kafka server.
It needs to initialize itself, and so on. So let's see if we can change something, for example send just the day part of the date instead of the whole year-month-day, and we should see here that it's actually now restarting and reloading the client, and we are now getting the messages with just the day. So it wasn't that slow, but when you do this with HTTP stuff, it's usually a bit faster, I would say. So that's the Kafka Streams part. And I wanted to show one more thing. Up till now the applications were not that realistic: they were just randomly creating messages and sending them, in the producer case. So now I actually added a REST API and integrated it with the producer. Into the POM file I added the dependencies for Quarkus RESTEasy and Quarkus JSON-B and so on, and then I additionally have a resource class. What I do here is I have a POST endpoint which you can use to set the time, and it's using this Emitter pattern. With this Emitter field we basically inject an emitter which creates a new channel, which is called time-from-post, and this is really the whole part which does the REST stuff. Then, in the actual producer class, all I need to do is say: OK, this method should be called when some message is sent to the time-from-post channel, and it should produce the outgoing Kafka messages to the produced channel. So this method will now get triggered every time an HTTP POST is done on the endpoint.
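The Emitter pattern just described might be sketched like this. The class, path, and channel names are assumed from the demo, and the annotation packages differ between SmallRye versions (older releases had @Channel in a SmallRye-specific package); treat this as a sketch of the wiring, not the exact demo code:

```java
import javax.inject.Inject;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

@Path("/time")
public class TimeResource {
    // Inject an emitter for the internal "time-from-post" channel
    @Inject
    @Channel("time-from-post")
    Emitter<MyTime> emitter;

    @POST
    public void post(MyTime time) {
        emitter.send(time);   // push the posted JSON body onto the channel
    }
}

// In the producer class: bridge the internal channel to the Kafka channel
class TimeForwarder {
    @Incoming("time-from-post")
    @Outgoing("produced")
    public MyTime forward(MyTime time) {
        return time;          // the connector sends the returned value to Kafka
    }
}
```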
It will send the Kafka message. So let's try how that works. I stopped the consumer and the producer; let's start the consumer here for the original topic, let's use some terminal, if I didn't use it yet, and let's send this. So I will now send to the HTTP endpoint a JSON with some timestamp. We sent it, and here we can see that the Quarkus producer received it, passed it to the Kafka producer, which sent the Kafka message, and the Kafka message was printed here. And now we can do exactly the same with the consumer, when I find the right window. So again, in the POM file I did exactly the same changes: I added the Quarkus RESTEasy extension for the REST API, and I have here again a similar HTTP endpoint, but this time it's working on GET, and it will basically be streaming all the messages which it receives to the HTTP clients as JSON. So again, we have here this times property which is using a Publisher, and it's basically reading from the internal channel called new-times. These internal channels which I'm using here to communicate, these are not Kafka topics; they exist really just within the Quarkus application. And whenever something gets published to this publisher, it will get sent to the client. I again had to change the consumer class a bit as well. Now it's still consuming the messages from the Kafka topic, but when it receives a message, it will basically take the MyTime payload and send it into the new-times channel. So let's get this started. I have here a simple HTML page which shows me what the last known time is. We didn't send any messages yet, so it says nothing, but let's go to the terminal, let's send some time, and we should see that it's now listed in the web page. And let's move the date and send it again, just to see that it's really automatically updating. Let's move one year further, to DevConf 2021, send another one, and it was updated.
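The streaming endpoint and the relay consumer from this demo can be sketched as follows. Again the names are assumed, the annotation packages vary between versions, and @SseElementType is a RESTEasy-specific annotation; this is a sketch of the idea, not the demo's exact code:

```java
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;
import org.jboss.resteasy.annotations.SseElementType;
import org.reactivestreams.Publisher;
import io.smallrye.reactive.messaging.annotations.Broadcast;

@Path("/times")
public class TimeStreamResource {
    // Read from the internal "new-times" channel, which is not a Kafka topic
    @Inject
    @Channel("new-times")
    Publisher<MyTime> times;

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @SseElementType(MediaType.APPLICATION_JSON)
    public Publisher<MyTime> stream() {
        return times;         // each received message is pushed to HTTP clients
    }
}

// The Kafka consumer re-publishes the payloads onto the internal channel
class TimeRelay {
    @Incoming("consumed")
    @Outgoing("new-times")
    @Broadcast               // allow multiple HTTP clients to subscribe
    public MyTime relay(MyTime time) {
        return time;
    }
}
```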
So with Quarkus it's super easy to do this integration with REST. If I tried to do it with the regular Kafka clients, I would need to add some additional libraries and wire everything together; here all I need to do is really just change some annotations, and the whole thing is done for me. Which, yeah, is great, because I normally hate doing REST APIs, and this makes it really easy. So I think that was most of the stuff for this part of the demo. I wanted to show one more thing, which is the dynamic configuration. As you maybe noticed, all of the configuration was done in the application.properties file, which is really always somewhere in src/main/resources, and when I build the application, it gets basically baked into the JAR, or baked into the native image, and then you run it in some environments like Kubernetes and so on. That kind of sucks, right? Or it sucks even without Kubernetes, because you don't really want to build your application with a hard-coded configuration and then ship it somewhere to production and have everywhere the same address for the Kafka server and so on. So you need to be able to change these things dynamically, right? And Quarkus of course knows about this. It doesn't work for all fields, because if it worked for all fields, that would kind of slow things down and so on, but for the most important things you can use either a configuration file provided at runtime to override them, or you can use environment variables to override these options. Actually, for the Kafka client I think it works for almost everything. The Kafka Streams extension is a bit half-finished in this regard: you can override things like the topics, the application ID, and the bootstrap server, but you cannot override things like SSL, for example, which is a problem, and we need to make sure that's added there as well. But in general, for the Kafka client itself, it's quite easy to run it even in environments like Kubernetes.
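The environment-variable overrides follow the standard MicroProfile Config mapping: non-alphanumeric characters in the property name become underscores and the name is upper-cased. As a purely illustrative helper (this method is not part of any library), the mapping can be written out like this:

```java
public class EnvNameMapper {
    // Convert a property name such as "kafka.bootstrap.servers" into the
    // environment-variable form "KAFKA_BOOTSTRAP_SERVERS" used to override it
    public static String toEnvName(String propertyName) {
        StringBuilder sb = new StringBuilder(propertyName.length());
        for (char c : propertyName.toCharArray()) {
            sb.append(Character.isLetterOrDigit(c) ? Character.toUpperCase(c) : '_');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toEnvName("kafka.bootstrap.servers"));
        // prints KAFKA_BOOTSTRAP_SERVERS
        System.out.println(toEnvName("kafka.ssl.truststore.location"));
        // prints KAFKA_SSL_TRUSTSTORE_LOCATION
    }
}
```

So setting the environment variable KAFKA_BOOTSTRAP_SERVERS overrides the kafka.bootstrap.servers property from the baked-in application.properties.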
So, because I work on the Strimzi project, I used it to spin up a Kafka cluster inside OpenShift, which is one of the environments where you would typically configure applications running in containers using environment variables, for example. That's why I have here this deployment, which will run a simple consumer and producer, and it will pass the credentials for the TLS authentication through environment variables. Strimzi is an operator, so it has custom resources for all the different Kafka stuff. So we have here, no, I scrolled the wrong terminal, we have here this KafkaUser resource, which will create the Kafka user, including some ACL rules and so on, and we have one for the producer and one for the consumer. In the same way I create a topic which I will use to send and receive the messages. Then, what's more interesting and what I plan to show, is the actual deployment. Here you can see that I use some image, quarkus-kafka-producer. It's very similar to the code which I have shown here; it's not the same, because I didn't really want to build the image here and push it somewhere to Docker Hub and so on, and ask for trouble.
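A fragment of such a Deployment might look roughly like this. The image, Secret, and volume names here are made up for illustration, and the environment-variable names follow the mapping of the application.properties keys described in this talk:

```yaml
# Deployment fragment: TLS settings injected as environment variables
spec:
  containers:
    - name: producer
      image: quarkus-kafka-producer    # illustrative image name
      env:
        - name: KAFKA_SECURITY_PROTOCOL
          value: SSL
        - name: KAFKA_SSL_TRUSTSTORE_TYPE
          value: PKCS12
        - name: KAFKA_SSL_TRUSTSTORE_LOCATION
          value: /etc/truststore/ca.p12
        - name: KAFKA_SSL_TRUSTSTORE_PASSWORD
          valueFrom:
            secretKeyRef:              # password mapped from a Secret
              name: my-cluster-ca-cert
              key: ca.password
      volumeMounts:
        - name: truststore             # the CA certificate mounted as a volume
          mountPath: /etc/truststore
```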
But what's important are these environment variables here, where I say: the security protocol should be SSL, the type of the trust store is PKCS12, the trust store location is /etc/truststore/ca.p12, and I specify the password. The certificate here is actually mounted from a Secret as a volume. If you look at these values, they are basically the same as we had in the application.properties file; they are just uppercase, with dots replaced by underscores and things like that. They follow the same naming scheme, and it's exactly the same when I scroll down to the consumer. So all I need to do now is oc apply (it should be oc, not op), and when I switch to my OpenShift console, it's now creating the pods, so we can see the producer and consumer running. These images are actually native builds and are quite small; trust me, if these were the regular JDK images, it would take longer to pull them and start them. The log is actually quite small because I zoomed in, but you can see the same principle: we are receiving the messages in the consumer while the producer is producing, again using JSON. And it's using the dynamic configuration here, so you can see the environment variables specified here, the password mapped from the Secret, and so on. So that's the way you can dynamically configure the client, and of course it doesn't always have to be OpenShift; it can be on VMs or anywhere else as well. And that's pretty much it for the talk. Here are some resources where you can find more information. There's a lot of material in the Quarkus documentation.
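The property-name-to-environment-variable mapping described here follows the MicroProfile Config convention: replace every character that is not a letter or digit with an underscore, and upper-case the result. A small runnable sketch of that rule (the helper `toEnvName` is mine for illustration, not part of any API):

```java
public class EnvName {
    // MicroProfile Config convention used by Quarkus: a property name maps to
    // an environment variable by replacing each non-alphanumeric character
    // with '_' and upper-casing the whole string.
    static String toEnvName(String property) {
        return property.replaceAll("[^A-Za-z0-9]", "_").toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(toEnvName("kafka.bootstrap.servers"));
        // KAFKA_BOOTSTRAP_SERVERS
        System.out.println(toEnvName("kafka.ssl.truststore.password"));
        // KAFKA_SSL_TRUSTSTORE_PASSWORD
    }
}
```

So setting, say, `KAFKA_SSL_TRUSTSTORE_PASSWORD` in the container overrides `kafka.ssl.truststore.password` from application.properties, which is exactly what the deployment in the demo does with the value taken from the Secret.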
It's fairly good, but some of these things are quite simple and are split across multiple different guides, so this talk showed something slightly more complicated and sophisticated. So that's it for my talk, and now if you have any questions, feel free to ask.

So the question was whether there's support for Apicurio Registry in Quarkus. I think if you run it in JVM mode it should work without any problems, but there's no special support, so it would work just as in any other Kafka client. I don't think it's added as an extension, so it might not work in the native image build. But actually Apicurio Registry itself, as far as I know, is written in Quarkus as well, and hopefully we will add that as an extension.

Any other questions?

So the question was whether I have the code on GitHub. I have it on GitHub, but it's not public yet. I will add the link to the slides, and then I will upload the slides to the DevConf page, so if you give me until this evening or so, you should all be able to find it there.

I'm not sure... so you mean using Quarkus in Kafka Connect for the transformations? I don't think it's so easy, because the Connect transformations are done within Kafka Connect itself, right? They are not run as a separate executable.
So I think to get some benefits out of this, you would need to build the whole Kafka Connect and the Kafka broker with Quarkus, which I think could be cool for a lot of things, like having a native image for the Kafka broker and Kafka Connect and so on. But I think that would be a lot of work. It might be a bit easier for Connect, but the broker, for example, which is still written mostly in Scala, might bring all kinds of complications. So I don't think the transformations themselves make much sense; the connectors might maybe have some benefits in terms of speed and memory footprint.

Any other questions?

To be honest, that would be a question for someone who knows Quarkus a bit more deeply; I'm more from the Kafka side. I think you can do some stuff with Kotlin, but I don't think you can do stuff with Scala. And to be honest and open, I do not want a Scala extension for anything, but that's just my own opinion.

OK, anything else? If not, then thanks a lot for coming, and I hope it was useful for you.