So hello — I hope you enjoyed the previous talk and also the sponsor videos. Next we have another very interesting talk, by Dr. Christoph Zimmermann. So let's welcome him on stage — he'll be with us in a minute. Sorry — ah yeah, some technical difficulties, I guess, that's fine. So, what do you like most about your Python? Actually, it's the second year that we're doing this online, and this year we're trying to use a lot of different tools. We have Matrix, and we also have Wonder.me for the hallway track — make sure you go check that out, because yesterday in the lightning talks we played a little bit with Wonder.me, with people chasing each other; it was super fun. So do go there and meet other conference-goers. And of course, if you have questions for any of the speakers, you can post them in the talk's chat in Matrix. So yeah — welcome back.
Hello, yeah, I'm with you. I just have to get this thing up and running — and here we go. I hope everybody can see me and my screen.
Yes, I think we can see your screen backstage, and when you're ready our technician will put it on stage, so don't worry about it.
Excellent — I was live already, sorry about this.
That's okay. So, where are you calling from?
Frankfurt.
Frankfurt — oh, wow. I've always wanted to go, but I missed the chance, because, you know, all travel was cancelled. But I'm sure I'll go when I have the chance. So yes — I can see that you're sharing your screen; if you can go to your slides, then we can start.
Absolutely. Here we go. Here we go.
Take us away.
Thank you. First of all, thank you very much for taking the time and attending my talk. Today I'm going to talk about something called Redis and Redis Gears, which is essentially a NoSQL database — an extendable, multi-model NoSQL database — and hence the title of the presentation: Adventures in Real-Time Python NoSQL*. But first of all, who am I? I'm a techie through and through. I did my PhD on reflective operating system architectures, and that was actually the point in time when I discovered an operating system called Linux — I started with Linux when the kernel was still at 0.95 in the overall history of Linux, and as I said, Linux has been with me for almost 30 years now. I'm also the tech support at a local Linux user group, and a little bit more than that — the FraLUG is one of the bigger Linux user groups in Germany, located in Frankfurt. And I'm an Arch package maintainer, if I have the time. Other hobbies include anything around the software development lifecycle, IT security and other forms of black arts, which I really enjoy both on the forensic as well as the offensive side. And if there's still time left, I'm a community liaison and solution architect for a company called Redis Labs, which is essentially the home of Redis — but more about this in a minute. And if there's still some time left, I also do a podcast — apologies about the typo, the URL is really linuxinlaws, without the hyphen, dot eu. That's a podcast I've been running with a guy called Martin Visser for the last one and a half years. It's a podcast about open source topics — anything goes: software, the ecosystem, and of course the whole shenanigans around all of this. I would like to dedicate this talk to my son Luka, and of course to the CTO team at Redis Labs, because they are in charge of the innovation that also drives the majority of the modules that surround Redis. But first of all, let's see what I'm going to present in the next 45 minutes-ish. I'm going to give a short overview of what Redis really is. I'm going to go through the multi-model database aspects, because Redis is not just a straightforward NoSQL database, but rather extensible — you can add different application-specific aspects on top of Redis. A special module called Redis Gears allows, among other things, the shipping of Python code to the server side — but more on this later. And if there's still time left, I'm going to show this using a demonstration in the area of real-time prediction and machine learning, and then it's straight to the wrap-up and the questions and answers. So let's start with Redis.
What is Redis? About 10 years ago — max 11, actually — a guy called Salvatore Sanfilippo, of Italian origin, had the requirement, for a large web reporting project he was working on at the time, to come up with an ultra-performant database that was able to provide real-time semantics — not exhibiting the latency, the delays, that typical database systems bring with them. Because one feature they pretty much shared at the time, and to some extent still do today, is that they store data on persistent storage like disks. Although access times have improved since then, the difference is still visible between storing data on disk and storing data in main memory — and that's precisely the aim of Redis. Rather than storing data on disk, the main processing of data is actually done in memory, and hence the term in-memory open source database. The idea is to focus the processing of data in memory, meaning that yes, you can have persistence as an option in your Redis instance, but at the end of the day the data is stored in main memory, which brings ultra-fast access times in addition to high performance — as in high throughput and low latency — when you access your data. Salvatore looked at other technologies — how to speed up Postgres, how to speed up MySQL — and also looked at something called memcached, which was a key-value store at the time. But given the fact that the functionality offered, especially by memcached, wasn't sufficient for his purposes, he basically decided to sit down and write his own implementation of an in-memory NoSQL database. So, some numbers. Redis is probably the database with the most client implementations, meaning you get about 160 clients in 48 languages — almost 50 now; the slide is a little bit older. The thing is basically that with any programming language — and anything goes here, ranging from the more popular programming languages like Python, Java, C#, C++ and all the rest of them, right up to more esoteric ones like, I'm almost tempted to say, Rust, Haskell or Lisp — chances are that there is at least one, if not more, client-side implementations, as in library implementations, for that language. Python alone has about four of them; same goes for Java. All of these client implementations offer different functionality in terms of features and functions, and also in API comprehensiveness and that sort of thing, so the idea is basically that with most of the languages you can pick and choose the client that fits your needs best. Because Redis is an open source project, the server code base is licensed under a three-clause BSD license.
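All of those client libraries, whatever their language, ultimately speak the same wire protocol, RESP, to the server. As a taste of how thin that layer really is, here is a pure-Python sketch — not taken from any particular client library — of how a RESP2 client serializes a command as an array of bulk strings:

```python
def encode_resp(*parts: str) -> bytes:
    """Serialize a Redis command as a RESP2 array of bulk strings."""
    out = [f"*{len(parts)}\r\n".encode()]      # array header: element count
    for part in parts:
        data = part.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))  # bulk string: length, payload
    return b"".join(out)

# What a client sends over the socket for SET foo bar:
wire = encode_resp("SET", "foo", "bar")
# b'*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n'
```

Everything a binding does on top of this — connection pooling, Pythonic method names, reply parsing — is convenience around that framing.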
You have an active community: we have more than 11,000 commits these days, nearly totalling 12,000, with almost 500 contributors to the code base. One of them is actually talking to you right now, because after I joined Redis Labs a couple of years back, I started to extend the standard Python client with streams functionality. Streams are essentially an in-memory message bus, if you will — in functionality somewhat comparable to other approaches like Kafka, for example. The project has about 50,000 GitHub stars; that gives you a little bit of an impression of how popular the code base has grown over the last few years. In addition to the client-side implementations, you have more than a hundred higher-level libraries and tools using these client implementations — again, another indication of how popular the code base has grown with the community. Because it's not just Salvatore Sanfilippo contributing to the project; as a matter of fact, he transferred the stewardship to a steering committee as of last year, so now there's a couple of people running the project instead of him. But the community is quite large, and this is actually one of the benefits of Redis, because you get contributors from all walks of life, and hence the innovation that comes from this diversity, if you will, is quite comprehensive — the more people you add to an open source project, coming from different backgrounds, having different requirements on the code base and all the rest of it, the more food for thought these people can provide. It goes without saying that this is also reflected in the 60k Stack Overflow questions. Speaking of Stack Overflow: Redis has been ranked the most loved database on that particular platform — the most loved NoSQL database on that platform, I'm tempted to add — for, I think, four years in a row. Again, that gives you some impression of what we're looking at. Funny enough, after I joined Redis Labs — and this is what I still do once a year or something like that — I used the GitHub API to extract project requirements, and when I take a look at the output of this data, I see Redis being used in antivirus scanners, mail systems — just to name a few use cases. Mostly in the area of caching, because that's exactly where Redis comes from, and because that is, most of the time, a hard real-time requirement. Taking a further look at the Redis ecosystem, just to give you some overview: the Redis data structures are depicted on the left-hand side of the slide, for those of you who haven't been exposed to Redis — this is just a short overview. The data structures range from simple strings — the notion of a key-value store, which is where Redis originally came from — but over the years they have developed into quite a comprehensive and powerful set of supported data structures. For example, Redis also supports bitmaps and bitfields. It supports hashes — a hash is a set of fields stored under a given key name. It supports lists, comparable to the lists you have in Python or Java, to name just two programming languages. You also have support for sets and sorted sets, the difference being: a set, as we know it from Math 101, is just a collection of items without duplicates, but a sorted set is a little bit different, because a sorted set actually contains a score for each and every item, which gives a natural ordering of the items in the set. That means the implementation of recommendation engines or leaderboards — which typically carry some sort of sequence — is easy with that particular data type. Geospatial reflects the x/y — as in longitude and latitude — data types with regard to storing geographic indexes, let's put it this way. So, typical use case: you are in Cologne, in the city centre right next to the Dom, and want to know how many drug stores there are within a five-mile radius.
You basically take a look at the map — you have the locations of the drug stores in the surroundings — and then you simply ask Redis: now give me all the points within a five-mile radius, and Redis gives you back the exact locations. HyperLogLog is an example of a probabilistic data structure. For those of you who are not familiar with the concept: the idea is to store not the exact payload of a value, but rather a representation of the value in memory, instead of the value itself. That may sound a little strange at first sight, but the idea — especially if you have large data sets — is to only store, and this is how all these probabilistic data structures work, the hashes in main memory instead of the items themselves. That means you get a probable result of an operation on a probabilistic data structure, whereas if you stored the exact items in memory, you would of course have a probability equal to one. The lower probability results from potential hash collisions between the hashes for different items: in some rare cases, different items produce the same hash value, and this is known as a hash collision. Typical examples of probabilistic structures are HyperLogLog and something called Bloom filters, or cuckoo filters. One of them is implemented as a native data structure, namely HyperLogLog, which essentially gives you the number of distinct elements in a given set without having to store the elements themselves in main memory. This comes in handy, for example, if you're looking at content distribution networks: if they just want to do a quick check whether something is in the local cache or not, they can basically take a look at the hash representation. Bloom filters give you a similar probabilistic data structure. Streams, as I already mentioned, essentially give you a performant in-memory queue — a queuing system that stores all its values in main memory, rather than on disk like, say, Kafka would do. In addition to these native Redis data structures, you have so-called Redis modules. These Redis modules are extensions of the native server side. Sometimes the data structures depicted on the left-hand side of the slide do not satisfy application requirements — meaning, for example, that you want to run a graph algorithm. If you just used the native data structures, you would have to simulate a graph with them. As you can imagine, this can become cumbersome, because you have to do it on the client side — that means additional code and, of course, more round trips to the server side, because essentially you have to emulate the graph using the natively available data structures. So these Redis modules implement more application-specific data types. Just to name a few of them — I'm going to showcase two of the modules on later slides, so let's go through them quickly. The idea behind this is, for example: if you want to run a graph algorithm, rather than emulating it on the client side, you simply load the pre-compiled RedisGraph module into the database, and then you have an openCypher-compliant graph database at your disposal — similar to Neo4j and other openCypher-compatible databases.
You basically take these data structures and model your graph like you would with an ordinary graph database — Neo4j, TigerGraph, ArangoDB and all the rest of them. The difference between these native — for want of a better expression — graph databases and Redis with the module loaded is quite straightforward: with the modules you actually get the benefits of native Redis right out of the box. So, in contrast to other graph databases, the graphs are still processed in main memory, again yielding low latency and ultra-high performance in terms of throughput. Let's go through these modules in a little bit more detail. RedisJSON is essentially a document database extension — as a module, in functionality comparable to, say, MongoDB or Couchbase. Search is a full-text search engine, somewhat comparable to other popular open source systems like Elasticsearch. Time series is in there — I will go into a little more detail later on — and RedisAI, which I've already touched upon. Now, a commercial break: I'm working for a company called Redis Labs, and I would like to keep this commercial break to an absolute minimum. Redis Labs does offer an enterprise version of the software — Redis Labs is an open core company — that offers a set of additional features, depicted in the lower third of the slide, like security features and extended persistence, like a tiered memory approach. But I won't go into the details, because as I said, this is not a presentation about Redis Enterprise but rather about Redis itself; if you're interested in the features, just check out redislabs.com. With that, we are already entering the multi-model database aspects. So how are the modules architected, in principle?
You have the server side, as I already explained, where you would have the different modules loaded into your database — like RedisJSON, Redis Time Series and Redis Gears. Then, on the client side, you have a tiered, or layered, approach to application functionality — the layers an application can use in order to avail of the functionality contained on the server side. At the very bottom, you have a very minimal layer called hiredis. hiredis is essentially only a wrapper around certain operating-system-provided functionality, like sockets, because you are communicating with the server side either via TCP sockets, or via Unix domain sockets should you be on the same machine as the server — which of course has certain benefits in terms of performance, because essentially you're circumventing the full TCP stack and are basically just using memory buffers between the client- and the server-side implementation. hiredis also implements a simple memory allocator for memory management on the client side. Language-specific bindings essentially tie hiredis to a particular programming language. Every programming language is different: for example, you have programming languages like Python or Java, where garbage collection is part of the language definition; in contrast to this, you have programming languages like C or C++ — or Rust, which is something in between when it comes down to memory management — that do not manage memory for you.
This is the reason why — and of course the semantics between different programming languages are also somewhat different — the language-specific binding layer abstracts away these differences, so that to the server side it basically all looks the same. Among other things, what the language-specific bindings do is take the language-specific API calls and wrap them into a wire protocol — called RESP in this case — that contains the payload and the metadata, in terms of the exact command that should be invoked on the server side. On top of the language-specific bindings, you have module-specific bindings that abstract away the particular implementation of the module on the client side. Each and every module has a set of standard client-side implementations; for example, most modules would implement at least Java and Python as standard libraries, but the implementations are being extended all the time. So chances are that if you're looking for, say, a Haskell or a Rust API for your favourite module, there's a corresponding module-specific binding already out there in the ecosystem. Based on its particular requirements, an application is free to choose the specific level it uses to access the server side. Of course, it can rely on just the language-specific bindings, thus emulating module functionality itself — as we will see shortly, each and every module is represented by a set of specific commands, typically beginning with a prefix, that are used to invoke its functionality. But as I said, if there is a client-side implementation available for the module, an application can also use that. It really depends on what the application needs in terms of the abstraction layer, or abstraction level, at which it wants to access Redis. On to the next slide: two examples of modules — time series, displayed on this particular slide, and RedisAI, which I'm going to get to in a minute. Redis Time Series is in functionality comparable to something called InfluxDB, or other time series databases as they exist in the open source and closed source domains. Oh, sorry — I should probably explain what a time series is. A time series is essentially a stream of data that has a timestamp attached to it. Typical use cases, as shown on the slide, would for example include anything Internet of Things, or monitoring or filtering of data. Imagine your typical power plant. A power plant typically has a reactor, if it's a nuclear power plant, or some sort of gadget that takes fuel of some sort and transforms it into power.
So typically a power plant has a turbine, but a power plant also typically has a transformer, and all the rest of these gadgets. All of these components would have sensors attached to them that measure their particular state of maintenance and their operating parameters — like temperature and pressure, in the case of a turbine, and all the rest of it. And because you want to know what's going on at a particular point in time, a time series database would, in addition to the data, also capture the point in time when a sensor measurement was taken — meaning that you have not only the data itself, but also the time when this data was recorded. That is important because it allows you, for example, to spot trends: if the temperature of a turbine is increasing steadily over a certain amount of time, it's almost certain that there's something wrong with the turbine — because if that wasn't the case, the cooling system would be doing its job; if the temperature keeps increasing, there's probably something wrong with the cooling system. It's that sort of thing that allows you to spot or identify trends, and it becomes vital if you're looking at monitoring mission-critical systems. Now, some capabilities of this particular module — and this is not really different from any other time series database.
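Spotting a trend like that usually means summarizing raw timestamped samples over fixed time windows. As a concept-only sketch in plain Python — this models the idea, not the Redis Time Series API — here is what averaging samples into fixed buckets looks like:

```python
from collections import defaultdict

def aggregate_avg(samples, bucket_ms):
    """Average (timestamp, value) samples into fixed time buckets,
    the way an aggregated time-series range query conceptually does."""
    buckets = defaultdict(list)
    for ts, value in samples:
        # align each timestamp to the start of its bucket
        buckets[ts - ts % bucket_ms].append(value)
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

# Turbine temperature readings, one every ~30 s, averaged per minute:
readings = [(0, 70.0), (30_000, 71.0), (60_000, 74.0), (90_000, 75.0)]
aggregate_avg(readings, 60_000)
# {0: 70.5, 60000: 74.5}
```

A steadily climbing sequence of bucket averages is exactly the kind of trend signal described above.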
It allows downsampling, or compaction, of data. It allows you to further index time series data, and of course you can query the whole thing. But if you are just interested in a summary of the data over time, you would typically use something called aggregation — like averaging, identifying the standard deviation of particular samples, and all the rest of it. That is important if you want to take a look at a snapshot of the development of time series data over time. It also supports compression — for example delta and double-delta encoding — and of course it's already integrated with Grafana and Prometheus, typically standard components these days, which not only give you a nice graphical representation of time series data, but also provide functionality like drilling down on the data and all the rest of it. And as I already touched upon, use cases include monitoring, filtering, anything relating to IoT, because that is probably the most common use case for time series data these days. The next module is something called RedisAI — RedisAI of course standing for Redis and artificial intelligence. It's the answer, let's put it this way, to current challenges in the deep learning and machine learning area today. If you take a look at typical implementations of backpropagation networks — the likes of TensorFlow, the likes of PyTorch and all the rest of them — they have one particular challenge. Backpropagation networks typically consist of layers of simulated neurons; that's the way these things work in terms of the actual implementation of a deep learning algorithm. And these approaches to backpropagation networks typically have to rely on persistent data — as in data stored on disk — even for the intra-layer computations. The way a backpropagation network works is essentially: you have a set of input neurons, then a set of so-called hidden neurons, and then a set of output neurons. Between these layers, depending on the particular implementation, the data is sometimes persisted to disk. Also, if you are gathering data for the input layer, more often than not you have to go to disk in order to get that data; the same goes on the output side. And of course it all takes time. So the idea, as depicted on the slide, is to take the data to the model. What RedisAI really is, is a combination of a neural network engine and Redis, compacted into a single address space. The idea is: you load a module, and that module has an embedded TensorFlow instance, an embedded PyTorch instance, or an embedded ONNX instance — these are the supported back ends for RedisAI. And in contrast to other approaches, the data is already stored in the Redis instance, so the module doesn't have to go to disk to get the data — it just queries the contents of the local Redis instance sitting pretty close to it.
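To make "taking the data to the model" concrete in miniature: a model is just something that maps input data to output data, and keeping both in the same process memory removes the disk trip entirely. Here is a deliberately toy stand-in in plain Python — this is not the RedisAI API, just the shape of the idea:

```python
# A "tensor" is just a list of floats here; a "model" maps tensors to values.
def make_linear_model(weights, bias):
    """Build a toy one-neuron 'model': y = w . x + b (fixed, pre-trained weights)."""
    def run(x):
        return sum(w * xi for w, xi in zip(weights, x)) + bias
    return run

# Data and model sit side by side in the same memory space -- no disk access
# between "fetching the input" and "running inference", which is the point
# RedisAI makes at database scale.
model = make_linear_model([0.5, 0.25], bias=1.0)
model([2.0, 4.0])   # 0.5*2 + 0.25*4 + 1 = 3.0
```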
That's the overall idea — and hence the two new Redis data types: the tensor and the model. A tensor — hence TensorFlow — is a representation of data, and the model is your backpropagation network, i.e. a TensorFlow, PyTorch or ONNX model. The use cases are straightforward: anything where real time is required in the area of deep learning is the sweet spot. So essentially, what RedisAI is, is a real-time stack for deep learning and machine learning applications. This is basically the overall approach: to combine, as I already alluded to, a backpropagation network engine — or implementation, rather — and Redis, all into one server-side implementation. But now on to Redis Gears. What is Redis Gears? Essentially, it's a serverless engine, meaning you have a component running inside Redis that allows you to do certain operations — and we're going to go into the details in a moment. Essentially, it's an ecosystem that allows you to map data, to filter data, to aggregate data, but also — and this is the important bit — to ship that data off to a business logic implementation. What that is, I'm going to go into on the next slide, but suffice it to say for the time being: Gears allows you to take business logic from the client side, ship it to the server side, and have it executed in the server context. This is in contrast to other approaches, where you would typically have multiple round trips between the client and the server side — for example, a business logic implementation on the application side would first query data, then process that data on the client side, and ultimately ship the processed data back to the server side in order to have it stored. In contrast to this, Redis Gears ships — or rather lets the application ship — a certain amount of business logic to the server side. No more round trips, because the business logic is then executed as part of the server context. As a result, you have a dynamic framework for data flow implementations, because that's exactly what it is. As we will see in a moment, the idea is: you define your data flow, and then Redis takes care of implementing it on the server side. Of course, Redis Gears also provides an abstraction layer for data distribution, clustering and deployment, because you can use Redis Gears in a cluster of nodes, meaning that you can distribute business logic across different cluster nodes. For those of you who know MapReduce, coming from a company called Google: very, very similar. The idea — and that's exactly how Redis Gears works internally — is that you ship the application code to the server side; the data is already in the cluster; Redis Gears then takes care of distributing that business logic across the cluster nodes, lets it execute on each particular node, and then gathers the results back onto a single server instance, ready for shipping back — first of all, of course, storing it into the Redis instance, and then ultimately shipping the result back to the client-side implementation, as in the application itself. So, in a nutshell, it's actually a pretty sophisticated approach to a data flow implementation on the server side.
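That distribute, execute locally, gather-and-merge cycle can be sketched in a few lines of plain Python. The shard routing and the merge step below are hypothetical simplifications for illustration, not the actual Gears scheduler:

```python
from collections import Counter

def run_distributed(data: dict, mapper, shards: int = 3):
    """Partition key/value data across 'shards', run the mapper locally on
    each shard, then gather and merge the partial results on a single
    coordinator -- the same cycle Redis Gears runs across a cluster."""
    partitions = [dict() for _ in range(shards)]
    for key, value in data.items():
        partitions[hash(key) % shards][key] = value      # toy key routing
    partials = [Counter(mapper(p)) for p in partitions]  # "local" execution
    result = Counter()
    for partial in partials:                             # gather on coordinator
        result += partial
    return dict(result)

# Count words across all shards:
words = lambda p: [w for v in p.values() for w in v.split()]
run_distributed({"k1": "a b", "k2": "b c"}, words)
# counts: a -> 1, b -> 2, c -> 1 (dict key order may vary)
```

Note that the merged result is the same regardless of how keys land on shards — which is why the application only ever sees a single result set.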
Let's see how this is implemented — the principal architecture. I already touched on cluster management; there is of course also an execution management that comes into play. The cluster management takes care of the more infrastructure-oriented aspects, like how many nodes there are in the cluster and how these nodes can be reached, and the execution management actually takes the business logic, schedules the whole thing across the cluster, and then manages the execution of this workload. These components work — I'm almost tempted to say in tandem — in close cooperation, because that's exactly what makes up the Redis Gears core. At the moment, this is abstracted away by a C API, and that's exactly where Python comes into play. The Redis Gears implementation has an embedded Python instance, meaning that if you take the code from GitHub and compile it, a CPython instance is actually compiled into the module as part of this module compilation. CPython uses the C-based API — in terms of language bindings and so forth — to talk to the Redis Gears core. Other languages providing a similar binding mechanism are already planned, so watch this space. The next language that will probably be released is Java, because, similar to Python, once you support a C API, you can essentially talk to the Redis Gears core. So how does it work? Let's take a look at a typical code snippet. The top half of this slide is the code that you would ship to the server side, and the shipping itself is exactly what's displayed in the middle of the slide.
So the gear is the Python snippet at the very top. You essentially ship this to the server side: redis-cli is the command line interface of Redis, the command is called RG.PYEXECUTE — RG for Redis Gears — and -x means take standard input and append it to whatever I specify next. So essentially, you tell the server: here's the command, Redis Gears execute, and here's the gear implementation as depicted in the top half of the slide — now execute this on the server side. And that's exactly what you see at the very bottom of the slide, in the lower half. SET foo test — these are native Redis commands: essentially, set a key's value to something called test, a typical string operation. And the key foo1 is set to "this is a test". What the gear depicted on the slide actually does: first of all, it extracts the values from the keys — this is exactly the GB().map invocation; the GearsBuilder just creates a gear on the server side. So map essentially takes the contents of the database — in this case, that would be foo and foo1 — and extracts the values. Then flatmap takes these records and splits them based on spaces, meaning that for the string "this is a test" you get the sub-records this, is, a, test — so essentially, flatmap takes one record and produces four sub-records based on the split. And countby is then an aggregation function that simply counts the number of occurrences of a key — or of a value, rather — in the database.
So if you then subsequently say gb.run, what this does is: it takes all of the keys, as in foo and foo1, splits these strings up based on spaces, and then counts the word occurrences. And that's exactly what you see in the result set displayed at the bottom of this slide. The "test" key is counted twice, because foo consists of "test" and foo1 has a "test" in it, and all of the other substrings have just one occurrence, because there's only one "this", only one "is", only one "a". So this is a kind of poor man's word-counting algorithm, implemented by the gear at the very top, and it gives you some impression of how easy it actually is to implement a gear in Python. As I alluded to, the way you do it is: you basically write the string, the string is then shipped as a gear to the server side, it's executed there, and then the result set is shipped back. Simple as that. So now for the demo — I still have some time left.
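To make the pipeline concrete: GearsBuilder only exists inside the server-side Python runtime, so the following is a plain-Python simulation of what the word-counting gear computes — the map, flatmap and countby stages expressed over an ordinary dict standing in for the database. The dict contents mirror the SET commands from the slide; in a real gear this would be something like GB().map(...).flatmap(...).countby().run() shipped via redis-cli.

```python
# Plain-Python sketch of the word-counting gear described above.
# In a real gear this logic runs server-side via GearsBuilder (GB);
# here the "database" is just a dict so the pipeline runs anywhere.

from collections import Counter

# Mirrors: SET foo "test" / SET foo1 "this is a test"
db = {"foo": "test", "foo1": "this is a test"}

# gb.map stage: extract the value of every key
values = list(db.values())

# gb.flatmap stage: split each value on spaces into sub-records
words = [w for v in values for w in v.split(" ")]

# gb.countby stage: aggregate by counting occurrences of each word
counts = Counter(words)

print(dict(counts))  # {'test': 2, 'this': 1, 'is': 1, 'a': 1}
```

As in the result set on the slide, "test" appears twice (once per key) and every other word once.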
Okay, so quickly, let's go through this. Essentially it's a time-series data prediction approach, or demonstration rather. The idea is to read, via a time-series database, typical weather metrics like humidity, temperature and wind speed, and then, using a gear, feed these into a pre-trained model — because at the end of the day RedisAI is mostly focused on inference of data, so it requires an already trained model, in this case a backpropagation network. Then, again as part of this gear, it takes the predicted time-series data, because this is what the model is all about: it takes a couple of time-series values as contained in the time-series database, and based on its training it tries to predict future values. The overall inspiration for this model actually comes from the TensorFlow core documentation, where this very thing is used as a demo for TensorFlow. The idea is basically: you take a couple of time-series data items, and based on this sample you forecast the weather on a per-period basis. And that's exactly what this thing does: it takes a history of, say, the last couple of hours of readings, and based on this history it predicts, in a single step, the next values for temperature, humidity and wind speed. Finally, this is wrapped into an HTML web page and displayed on any web browser connected to the server. Let me show this — and this is what it should look like.
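The "history window in, single-step forecast out" shape of the model can be sketched without the actual network. In the demo a pre-trained TensorFlow model served by RedisAI does the prediction; in this illustration-only sketch a naive moving-average baseline stands in for it, and the window size and sample values are assumptions, just to show what "predict the next (temperature, humidity, wind speed) reading from the last few readings" means structurally.

```python
# Illustrative sketch only: the demo uses a pre-trained TensorFlow model
# served by RedisAI; here a naive moving-average baseline stands in for
# the network, to show the "history window -> single-step forecast" shape.

HISTORY = 3  # number of past readings the model sees (assumed window size)

def predict_next(history):
    """Predict the next (temperature, humidity, wind_speed) reading
    as the element-wise mean of the last HISTORY readings."""
    window = history[-HISTORY:]
    n = len(window)
    return tuple(sum(reading[i] for reading in window) / n for i in range(3))

# Simulated time-series samples: (temperature degC, humidity %, wind m/s)
samples = [(10.0, 80.0, 3.0), (11.0, 78.0, 3.5), (12.0, 76.0, 4.0)]
print(predict_next(samples))  # (11.0, 78.0, 3.5)
```

The real model replaces the mean with a learned function, but the interface is the same: a fixed-length multivariate history in, one predicted reading out.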
This is actually the weather prediction demo. It should auto-refresh — yes, it still does. So essentially this is the time step over a series of times, and the history is displayed as a blue graph. For the true future I'm using a simulated, or rather pre-recorded, set of values — unfortunately, in the shortness of time, I didn't find a real-life sensor that could do this for everybody, so this is a simulation of a time-series database. The idea is essentially that the history displays the run-up to the point in time so far, the model prediction is indicated as a green spot, and the true future is indicated by a red X, meaning you get a nice indication of the deviation between the model's prediction and the true future, based on the recorded history of the time series. Going back to my notes: further reading. You find the documentation for Redis on redis.io, you find the Gears documentation on redisgears.io, and RedisAI, funny enough, is documented on redisai.io. And when I say documented: there's a quick start there that typically tells you how to instantiate the module in a Docker container, packed together with a Redis instance, but it also typically contains the instructions for how to compile the module. As I said, I didn't invent the time-series forecasting demo; the basis for it you can actually find as part of the TensorFlow tutorials, funny enough in a section called "time series". And of course, another very short commercial break: there's Redis University, powered by Redis Labs, which allows you to learn more about Redis free of charge — simply go to the site and register using your email address.
Then you essentially get a registration confirmation, select a couple of courses, enroll in them, pass the final test, and you get a certificate. If you then post the certificate on LinkedIn, we send you a t-shirt. End of commercial break. Any questions, I'm more than happy to answer — let's go back to the chat. — So, yeah, right on time, so unfortunately we don't have time for questions then. Oh yeah, the slot ends at 2:45 and then it's lunch break, but if people are interested they can of course ask questions in the chat, and the bigger room is free to use, so I'm sure people would be happy to have some interaction there. — Yes, I'm happy to; I'm available for another 15 minutes at the break in the room associated with the session, looking forward to seeing you there. And thank you for your participation. — Yeah, thank you. Bye.