Hello, good morning everyone, and welcome. This is the first talk in this track, and today, Friday, is the last day of the conference talks. Just a reminder that we have sprints on Saturday and Sunday, so if you want to share and collaborate with others, that's the best time, and if you want to host a sprint, there is still time to do that. Also remember that today we are starting late: this Friday we are trying to be friendly to people in the Americas, so we are going to finish around 8 p.m. European time. So now, for the first talk in this track today, I'm going to say hello to Gael and Francesco. Hello guys, how are you doing? I hope your name really is Gael. Yeah? That's good, cool, perfect. Where are you streaming from? I'm in Lausanne, Switzerland. I'm in Zurich. Cool, nice. Thank you both for presenting here; speakers are super important, otherwise we don't have a conference. So Gael and Francesco are going to present together, and they are going to talk about unifying time series forecasting models, from ARIMA to deep learning. I knew ARIMA, but I'm sure you're going to expand on that now. So let's put up your screen. Yeah, perfect, cool. Good luck and see you later. Yeah, thanks a lot. So let's jump right into it.
Thanks for attending this session. We'll talk about time series forecasting, and we'll show some examples using a library called Darts, which makes it easy to use statistical methods, but also more advanced deep learning methods, for forecasting problems. As you heard, today I'm with Francesco, who is a data scientist at Unit8 and one of the main contributors to Darts. On my side, I'm also a data scientist at Unit8, and I work with time series in different industries like telecom, manufacturing, and energy. Before we jump into forecasting topics too quickly, let's first look at why we are interested in forecasting and why it is an important problem, and also give a bit of context on why we developed Darts and why we think it's helpful. The first thing, as you are probably aware, is that time series are really everywhere. A time series is basically any data that can be displayed over some fixed time interval, and we see them a lot: temperature readings, for example, or website traffic and network events. Someone monitoring brain activity
would have EEGs as time series, and in companies you can have sales figures that are also time series. These time series tell us about the past, maybe even up until now, but unfortunately they don't necessarily tell us much about the future, and that's where forecasting can be a useful tool. Forecasting tries, based on the past data we have, to anticipate what will happen and predict the more likely scenarios, so that we can take better decisions. For example, in the case of CO2 levels, if we can anticipate that they are rising, we can try to fix that and find a way to avert global warming; that would be one example. In the context of companies, it often appears as predicting demand or prices for given products; that's a fairly typical scenario. Now, why did we build Darts? Internally we worked on quite a few time series forecasting problems, and we noticed that although there are many forecasting libraries in Python, they are often specialized in one topic. You have, for example, statsmodels for statistical methods, maybe the Prophet library for probabilistic forecasting, and others specialized in deep learning, and when you're doing a forecasting analysis it's hard to combine these methods; it takes quite a bit of time. That's why we built Darts: to try to unify the building of these models and have one interface that makes it easy to use all these different approaches. We also noticed that we were writing a lot of methods again and again, like evaluating the performance of models, so we built a lot of useful methods into Darts to make that easy and avoid rewriting the same things over and over. Darts was created internally in 2018.
One year ago, we released the first public version of the open-source project, and since then we've added a lot of features. We are now in version 0.9, which introduced probabilistic forecasting capabilities, and we plan to add even more features in the coming months and years. So, let's jump into forecasting. We will walk through one example of the basics of forecasting, and of how you can use Darts to easily apply them to your forecasting problems. The example we're going to use is monthly cow milk production: the average production of milk per cow, with one value per month, from the 1960s until the end of the 1970s. We can see a time series that is increasing, with some yearly patterns that repeat. What we're going to do is use the data in black as a training data set, to then create a forecast that will hopefully approach, as closely as possible, the true data in blue. In Darts we have one central object, the TimeSeries object; that's the main abstraction we have for working with time series. That object allows you to easily import data from the different tabular formats you have in Python, and also to do some processing on your time series. If you want to import data into Darts, you call this object: from darts import TimeSeries. Then, if you have a pandas DataFrame, or maybe some other arrays, you can use the TimeSeries object to import your data. In our case that would be TimeSeries.from_dataframe, to which you need to give three things. First, the data itself, df. Then, since a time series is some values over time,
you need to give the time column, which in our case is the months, and then the value columns, which in this case is the pounds of milk per cow. From there you get your TimeSeries object. Then, what you always want to do in a forecasting problem is split your data into two sets: the training set, where you try many models and many parameters, and the validation set, which you use only to benchmark the performance of the model. You can do this easily by taking the series object you have from before and calling the method split_before, to which you can give either a date, in our case 1973, or a fraction, if for example you want 70 percent of the data to be the training set. Once you've done that, we can jump into the more interesting part and try to apply some forecasting models to our problem. We're going to train some models on the training set and then predict the rest of the interval using each model. In Darts, many models are available, coming from different packages like statsmodels or Facebook Prophet, but also some custom models that we developed. You can import them with, in our case, from darts.models import ExponentialSmoothing. Exponential smoothing is a model that breaks the time series down into three aspects: the seasonality, the trend, and the level of the values. It's a statistical method that looks at past values, puts weights on them to infer these three components, and recombines them at the end. It applies fairly well to our problem: since we have a series with both a trend and a seasonality, this model is able to capture these different aspects and restitute them. The way we use this model in Darts is that we instantiate the model and put it in
a variable, and then we use the scikit-learn-style fit and predict approach: you fit your model on the training data, and then predict a given interval; in this case we want to predict the whole span of the blue data. You get a forecast, displayed here, that you can plot and use in your forecasting application. During that step, we typically also want to try other models. Similarly, you can import another model, Theta. This one captures the seasonality, removes it from the time series, and then fits two curves on the rest of the data, the residuals. It has been fairly popular in recent years since it performed well in a forecasting competition some years ago. Similarly to exponential smoothing, we apply the same steps: import the model, fit it on the training data, and predict the length we want, and we get a forecast. This model also has some parameters, and if you know a bit more about your problem or about how you want to forecast it, you can specify them when instantiating the model. There are three parameters here. One is theta, which tells the model how much it can follow the trend of the data: a lower value follows the trend less, a higher value follows it more. Then, if you know the seasonality of your time series, instead of having the model infer it automatically, you can give it, so you are sure it captures the right seasonality. The last parameter is the season mode: you have trends that can be either linear, increasing by a fixed amount every year, or, in some cases, if you
have a percentage growth every year, it's going to be more of an exponential, or multiplicative, season mode, and you can specify here which one you want to use. Then you apply fit and predict like we did before. So now we have two models, and we want to know which one is better and which one we're going to use in the end for our forecasting application. Visually it's a bit hard to tell exactly: we see that exponential smoothing maybe has a larger error in these lows but the peaks are slightly better, whereas Theta is the opposite. Since it's hard to say visually, to answer that question we're going to use some error metrics, and Darts provides the standard error metrics you would typically use. One of them is the mean absolute percentage error (MAPE), which computes the difference between your forecast and the true values as a percentage; roughly, it gives you the percentage error of your forecast. You can easily apply it to the data we have by importing the metric, from darts.metrics import mape, and giving it the validation set, which holds the true values, and the forecast we generated; that produces a score. There are many other metrics, but one worth mentioning is the mean absolute scaled error (MASE). That one compares a naive forecast, which simply takes the previous value, against the forecast you generated, and it's also a fairly commonly used one. Since it uses the previous values, you also need to give it the training set as a parameter, but you can again easily generate the score, as seen here. If we do this for our two methods, we can see that the mean absolute percentage error for exponential smoothing is 3.4 percent, whereas for Theta it is 2.42 percent.
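To make the two metrics concrete, here is a small self-contained sketch: plain-Python reimplementations of the formulas just described, not the darts functions, with invented numbers for the demo.

```python
# Illustrative reimplementations of MAPE and MASE (not the darts metrics).

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def mase(actual, forecast, train):
    """Mean absolute scaled error: the forecast's MAE divided by the MAE
    of a one-step naive forecast ("repeat previous value") on the training set."""
    naive_mae = sum(abs(b - a) for a, b in zip(train, train[1:])) / (len(train) - 1)
    forecast_mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    return forecast_mae / naive_mae

print(mape([100, 200], [110, 190]))        # 7.5
print(mase([6, 7], [5, 9], [1, 2, 3, 5]))  # 1.125
```

A MASE below 1 means the forecast beats the naive "repeat the last value" baseline, which is why the training set has to be passed in as well.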
So we can see that Theta, according to this metric, is performing better, and that's probably the one we should use. Now, we've simulated a rather easy forecasting problem, but in practice you typically don't have one single interval where you predict the data just once. What happens in practice is that you have a forecast running continuously: it trains on the data up until today and forecasts the next few months; then, three months into the future, you have more data, which you use to train the model again and predict the next three months, and so on. That approach is not captured by what we did so far, which is a simplified version of the real problem. Darts provides tools to simulate this more realistic forecasting scenario: they are called historical forecasts and backtests. They simulate exactly the method just described, where we retrain the model with new data as time progresses and generate new forecasts. Maybe a plot will help us visualize this approach. Assume we are at today's date and we have all the historical data in white. What we're going to do is simulate being in the past, say at this start date here, having access only to the data from before it, and generate a forecast with a given horizon.
For example, the next three or six months. From that forecast we can compute a score for how well it performed. Then we move this time point forward and generate a new forecast as we go, simulating, on historical data, this process of retraining the model and producing new forecasts, which is a more realistic scenario. To do that, you take the model variable you defined and call the historical_forecasts method, which generates all the forecasts for the intervals you define. What you need to give it is a series, the series from before with all the data. Here we're going to start at half: if you train a model on very little data, it's not going to produce a good forecast, which is not a realistic scenario, so we use at least 50 percent of the data before the first training. Then we're going to predict the next 12 months; here you can see in blue, for example, the interval we're going to predict. And every six months we retrain the model and do another prediction.
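The retrain-and-forecast loop just described can be sketched from scratch as an expanding-window backtest. This is an illustration of the idea, not the darts historical_forecasts API, and the "model" here is a deliberately trivial stand-in that repeats the last training value.

```python
# Toy expanding-window backtest: move the simulated "today" forward,
# refit on everything before it, forecast a fixed horizon, score, repeat.

def naive_fit_predict(train, horizon):
    # stand-in "model": repeat the last observed training value
    return [train[-1]] * horizon

def expanding_backtest(series, fit_predict, start, horizon, stride):
    errors = []
    cutoff = start
    while cutoff + horizon <= len(series):
        train = series[:cutoff]                 # only data before the cutoff
        actual = series[cutoff:cutoff + horizon]
        forecast = fit_predict(train, horizon)  # "retrain" and predict
        mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / horizon
        errors.append(mae)
        cutoff += stride                        # advance the simulated today
    return errors

print(expanding_backtest(list(range(10)), naive_fit_predict,
                         start=5, horizon=2, stride=2))
# [1.5, 1.5]
```

The list of per-forecast errors is exactly what gets plotted as a histogram, or averaged into a single backtest score, in the next step.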
That's where you get all these colors: they are all the historical forecasts you would have generated. Here we have the forecast data themselves, but we can instead use the backtest method, which generates the same forecasts but then uses an error metric to compute how each of these forecasts performs. It takes the same parameters as before, but in addition we can give it a metric, in this case the one we used before, the mean absolute percentage error. For each forecast we then get one error value, and we can, for example, plot them as a histogram. That histogram is fairly useful to see whether all the errors are centered around a similar value, or whether the forecast sometimes produces very wrong estimates, in which case you would see a few spikes far out in the error distribution. In some cases that's something you want to avoid: generating a few very large errors can be costly if you are taking decisions based on that data. Finally, with the same backtest method you can also give an aggregation: instead of having a different value per forecast, you can aggregate them and take only the mean error. That gives you one score that evaluates your model on all the data while simulating this process; in this case it would be the blue line here, the mean error over all our forecasts. So that's how you can evaluate your models. Now we can go one step further and use that score on historical data to automatically find the right parameters for the model, because typically you're not really sure what the optimal parameters are.
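The parameter search about to be described boils down to scoring every combination in a grid and keeping the best. Here is a self-contained sketch of that idea (not the darts gridsearch API); the seasonal-naive "model" and the tiny parameter grid are invented for the demo.

```python
# From-scratch grid search: try every parameter combination,
# score each on a validation set, keep the best-scoring one.
from itertools import product

def seasonal_naive(train, horizon, K):
    # repeat the value observed K steps earlier in the cycle
    return [train[len(train) - K + (i % K)] for i in range(horizon)]

def mae(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

series = [1, 2, 3] * 4            # toy series with period 3
train, val = series[:9], series[9:]

grid = {"K": [2, 3, 4]}           # candidate seasonal periods
best_params, best_score = None, float("inf")
for combo in product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    score = mae(val, seasonal_naive(train, len(val), **params))
    if score < best_score:
        best_params, best_score = params, score

print(best_params, best_score)    # {'K': 3} 0.0
```

For a more honest score, each combination would be rated with the backtest procedure from before rather than a single validation split; the structure of the search stays the same.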
You can try many of them and see how well the forecast performs with each; that is called a hyperparameter search, and it's something you can use to maximize your accuracy. Darts again provides an easy method to do that, and we can do it on the Theta model we had before. I talked about the theta parameter; here we give a few values we want to try, from 0.5 to 3, and we're also going to try the different season modes, the multiplicative one and the additive one. Then you call the gridsearch method on the model, which goes through all the combinations of these parameters, so maybe you shouldn't give it too many of them. You give it the same parameters as before, the training series, the horizon, and the start, as well as the error metric, and, applying fit and predict, you get the best forecast out of all the parameter combinations. If we apply this to our model: at the beginning we had the default Theta model with theta equal to 2 and multiplicative season mode, which yielded roughly a 2.4 percent error; exploring all the combinations, it seems that theta equal to 3 with additive mode performs slightly better than our previous model, and maybe that's the one you would want to use in your forecasting application. So, I've walked you through the basics of forecasting, and now Francesco is going to talk about some more advanced things you can do using Darts. I will let him take over the presentation. All right, thanks Gael. Okay, now it's up, cool.
So, thank you. What Gael has presented so far are really the core functionalities of Darts, and in many cases you can expect to get fairly good results by applying these sorts of classical methods, such as exponential smoothing and Theta, in conjunction with proper model evaluation and selection procedures. What I'm going to present in the next two sections are some more advanced features of Darts that can help you get the most out of your data; these are also fairly recent additions to the library. The first one I would like to talk about is the possibility to train one model on multiple different time series. Let me introduce the concept, and its usefulness, with an example. This graph shows two time series which do not overlap in the time dimension: on the left, in black, you see the monthly number of airplane passengers, and on the right, in blue, you see the amount of milk produced per cow in pounds, the time series you saw before. Now we want to create a prediction for the air passengers series; however, we want to explore the possibility of also using the milk series to get better model performance by means of meta-learning. The idea is that although there is no direct causal relationship between the two time series, training on milk production might help the model learn some general concepts, like yearly seasonality combined with an upward trend, which it can apply to the air traffic prediction as well. So again, air traffic is the one we want to predict, and we're just going to use the milk series as additional training data. Let's try to do this in Darts.
The model we're going to use is N-BEATS, a powerful model which produced state-of-the-art results on the M4 competition data set, a big time series data set; at least it was state of the art when it was first made public. N-BEATS is part of the collection of deep learning forecasting models in Darts, which we implement in PyTorch. First, as a baseline, let's simply train the model on only the air traffic data. Doing this, we achieve a reasonable performance, with a mean absolute percentage error of roughly nine percent. Now let's slightly adapt the code from before and train the model not only on the air traffic data, but also on the milk production time series. We have to change two things for this. On the second line, we need to include the milk time series in our training set, so we go from passing one single time series to passing a list of two time series. Then, on the third line, we need to specify the input time series to the model before creating the prediction, because otherwise the model cannot know which time series to forecast. As you can see, the result is a significantly more accurate prediction. How could this be? Again, this is an example of meta-learning: fitting a model on a different but similar time series helps it learn general properties of the time series it is predicting. Of course, this is a very simple example, but there are many cases where meta-learning can be useful. For example, imagine sales of different products: you could train on many different time series when wanting to predict just one. Or energy consumption of different households, etc.
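The multi-series idea can be illustrated without any deep learning at all. The toy sketch below pools training windows from two invented series that share structure (a trend plus the same seasonality) and fits one linear autoregressive predictor on the pooled windows; it stands in for what N-BEATS does at scale and is not the darts API.

```python
# Toy "global model": one linear autoregressor trained on windows
# pooled from two different series that share common structure.
import numpy as np

def make_windows(series_list, input_len):
    X, y = [], []
    for s in series_list:
        for i in range(len(s) - input_len):
            X.append(s[i:i + input_len])      # lagged inputs
            y.append(s[i + input_len])        # next value to predict
    return np.array(X), np.array(y)

t = np.arange(48)
air = 100 + 2.0 * t + 10 * np.sin(2 * np.pi * t / 12)   # series to predict
milk = 60 + 1.5 * t + 8 * np.sin(2 * np.pi * t / 12)    # extra training data

X, y = make_windows([air, milk], input_len=12)
A = np.c_[X, np.ones(len(X))]                 # add a bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)     # fit one shared model

last_window = np.r_[air[-12:], 1.0]
pred = float(last_window @ w)                 # one-step forecast for air
print(pred)                                   # close to the true value, 196.0
```

Because both series follow the same trend-plus-seasonality pattern, the pooled windows teach the single model that shared structure, which is the essence of the meta-learning argument above.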
Again, those would be different time series, but they could share important common properties. So the Darts library provides all the tools needed to perform meta-learning with highly powerful deep learning models, all the while providing a very clear and simple abstraction to the user. Now let's move on to the next feature of Darts, also a fairly recent addition: probabilistic forecasting. Before we go into what exactly probabilistic forecasts are and how we create them, let me first try to answer the question of why we need them in the first place. In other words, what's wrong with the forecasts we have seen so far? To see why normal, so-called point predictions might be insufficient, let's consider the following time series as an example. We have a time series with a clear seasonal component that should not be too hard to predict. On the other hand, our time series also contains noise, and this noise component is not the same across the whole series: in fact, its strength is oscillating as well. The lower time series in the plot indicates the fluctuation of the noise intensity over time. This is an artificial time series, but I think it will help us understand the usefulness of probabilistic forecasting. Let's say we want to create a forecast for the validation set in blue. How would we approach this?
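Before looking at models: a series like the one just described can be generated in a few lines. This is one possible construction; the periods and amplitudes are invented, the point is only the seasonal signal plus noise whose intensity itself oscillates slowly over time.

```python
# An artificial series: clear seasonality + heteroscedastic noise
# whose strength oscillates on a much longer period.
import numpy as np

rng = np.random.default_rng(42)
n = 400
t = np.arange(n)

signal = np.sin(2 * np.pi * t / 20)                  # seasonal component
noise_strength = 0.15 + 0.45 * (1 + np.sin(2 * np.pi * t / 100)) / 2
series = signal + noise_strength * rng.normal(size=n)
```

The `noise_strength` array is exactly the "lower time series" from the plot: it tells us which stretches of the data are inherently harder to predict.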
One of the simplest ways to forecast would be to use one of the naive baseline models of Darts, in this case the naive seasonal model. This model accepts a seasonal period, and when forecasting it simply repeats the last observed value that was at the same point in the cycle. As you can see, we can create a prediction in three short lines of code, given that we know the seasonality period of the time series. But as you can also see, this is not a very good result: the problem is that the model simply repeats the noise it saw before. So let's try a more sophisticated model that is able to separate signal from noise. For this we use ARIMA, the autoregressive integrated moving average, another classical statistical method for time series forecasting, and as you can see it fairly successfully predicts the signal and disregards the noise. So what's wrong with this forecast? Why do we need more than this? Well, the problem is that not all the forecast points you see in blue are equal. For example, consider this one: you can consider it a safe forecast, because the noise component is weak at this point in time. However, if we look at this forecast right here, it is much more likely to deviate from the ground truth, since the noise component is very strong here. Of course, this is obvious to us because we see both the forecast in blue and the ground truth in black, but in a real-world scenario you would only see the blue line, and you would have no idea that one forecast is a much safer bet than the other. So let's try one of the probabilistic forecasting models that Darts offers. In this case we use the TCN model, another one of our deep learning models, and we use it in its probabilistic form. You can tell it's probabilistic because we pass a likelihood model to it when instantiating it. I don't want to go too deep into this right now;
we unfortunately don't have the time, but just know that we're basically instantiating a probabilistic version of a deep learning model here. Again, with only a few lines of code, we can create a prediction, and you can see that the result does not just consist of a point prediction at every point in time, but also of a confidence interval which quantifies the uncertainty of every forecast. Here we can see that the model correctly identifies the high-noise regions as uncertain, while presenting other predictions with very tight confidence intervals. So now that we have seen, by means of an example, why probabilistic forecasts are important, let's take a step back and talk briefly, on a high level, about how Darts produces probabilistic forecasts. How do they differ from their deterministic counterparts? To recap: on top you see a deterministic model. It takes as input a time series, possibly multivariate, possibly including covariates too, which just means it can have many dimensions instead of only the univariate case you saw before, and it outputs a time series of point predictions. A probabilistic model, on the other hand, does not directly produce a time series of predictions as a result; instead, it outputs the parameters of a given probability distribution, and using these parameters we can obtain an arbitrary number of so-called sample predictions. So let's zoom in on the probabilistic forecasting case. We can create a large number of samples out of these parameters, each of which is as long as the prediction of a deterministic model would be; combined, these samples constitute a probabilistic time series in Darts, and the user does not explicitly have to deal with these individual samples.
All of that is abstracted away by the Darts TimeSeries class; to get a probabilistic forecast, all the user has to do is say how many samples they want to predict. After a probabilistic forecast is produced, the user can decide how best to evaluate it, and one of the most straightforward ways is to plot and evaluate different quantiles, which can be used as confidence intervals. As you can see, it is very easy to plot confidence intervals in Darts, just using the TimeSeries plot method; to have a bit more control, you can define the upper and lower quantiles to decide how wide your confidence interval should be. Now, the example we looked at before was artificial, so let's briefly look at a real-world case where we could produce a probabilistic forecast. Here we have a time series of the daily average energy production of a hydropower plant. This series is quite structured, with a monthly seasonality, and looking more closely we see monthly spikes with quite predictable shapes, especially if you have observed the previous ones; between the spikes, on the other hand, the values are a bit less predictable. So let's try to train a simple Darts RNN model, another deep learning model that also supports probabilistic forecasts. We again instantiate it in its probabilistic form, by passing it a Gaussian likelihood model, and when fitting the model we also provide covariates, but let's not talk about this too much here, because we don't really have the time.
We're also not showing how these covariates are created, but Darts provides very easy-to-use utility functions to create covariate time series; I encourage you to check out our documentation to find out more about that. When creating a prediction, we specify, as you can see on the last line, the num_samples parameter in the call to the predict function, which gives us a probabilistic forecast. With this code we get the following 100-day forecast. As you can see, the model roughly captures the general pattern of the data; moreover, it expresses increased uncertainty for the sections that are harder to predict. If we compare this to the actual data, we can see that the parts where the model made the biggest mistakes are also the parts where it expressed the biggest uncertainty. So with a probabilistic prediction like this, we can gauge how much we can trust the forecast and, as a result, how much action we should take based on the predictions. I realize there are many details I haven't told you about, such as the likelihood models; please feel free to check out our documentation. This energy production case is just one example; you could imagine countless other cases where probabilistic forecasts might be useful. For example, say you run a website and you want to make sure your infrastructure supports the traffic to your site. There might be days of the week, or times of the day, where traffic is less predictable than at other times, and a probabilistic forecasting model might be able to predict those uncertainties. Given a probabilistic forecast, you can make sure you allocate enough resources so your site does not get overwhelmed during times of high uncertainty. Okay, so before we wrap up this presentation, I would just quickly like to give you a brief summary
of the whole Darts toolbox. Let me just check the time... okay, ten minutes left. So, we presented a few of the most important features in the previous slides, but we definitely did not cover everything Darts has to offer. The heart and soul of Darts is for sure its forecasting capabilities. Some of them you encountered during this presentation: statistical models, such as exponential smoothing and ARIMA for instance, and deep learning models such as N-BEATS and the TCN. We also outlined the main ideas behind probabilistic forecasting and behind training on multiple time series for meta-learning. But there are quite a few forecasting tools we didn't mention. For instance, you can use ensemble models to easily combine any number of forecasting models into one, allowing you, on a very high level, to make up for the weaknesses of one model with another. We also support many different kinds of time series: in this presentation we mostly looked at univariate time series without covariates, but we also support multivariate time series and covariate time series. Again, I repeat myself, but please check out our documentation and example notebooks too; we have a lot of them, and there should be a lot of information there. In addition to forecasting, Darts also provides tools for everything that's important before, in between, and after forecasts. When it comes to the first stages of solving a forecasting problem, Darts offers a variety of tools for analyzing the raw time series data: data exploration tools, functions for computing important statistics such as autocorrelation, and various other tools for plotting and visualizing relevant information. After this initial analysis there is usually a need for preprocessing before feeding the data to the models, and here we also
provide many functionalities, such as interpolation of missing values, scaling and normalization utilities, and also seasonality and trend removal options.

Of course, after you prepare your data and create a forecast, the story doesn't end there. You want to evaluate your forecasts in a rigorous manner to select the best type of model for the problem you're trying to solve, and for that you use things like historical forecasts, backtesting, and grid search, all of which Gael introduced in the first part of the presentation. But there are also other functionalities for evaluating forecasts that we didn't have time to discuss, such as residual analysis, for example.

All in all, I would really say this is by far one of the most comprehensive libraries for time series forecasting in general, not just in Python. But of course we are constantly working towards making it better, and contributions, suggestions, but also criticism are very welcome. So please try out darts to get a feel for whether it could be useful to you. You can simply install it using pip, and check out our GitHub. Please also check out our blog posts if you're interested, and again, if you have any suggestions or any other comments, feel free to send us an email as well. With this I'd like to finish; thanks a lot for your attention, and I guess we can take questions now.

Okay, thank you guys, that was a really nice talk. We have a lot of questions and five minutes, so let's see how much we can do, and then you can take the rest in the breakout rooms. So let's go to the first one: people want to know whether darts integrates into the scikit-learn ecosystem.

Okay, so if it's okay, I will answer this one.
I would say that rather than darts integrating into the scikit-learn ecosystem, parts of scikit-learn integrate into the darts ecosystem. I mentioned that we have ensembling models, and one way to ensemble the predictions of different forecasting models is to use regression on these forecasts; for this we are really open to accepting scikit-learn regression models, to enhance our existing forecasting capabilities. However, have we specifically developed something to be more compatible with scikit-learn? Not quite. This might be something we will do in the future, but I can't tell you much more about that yet.

Okay, so next question: does darts support enforcing known constraints? For example, the number of shoes sold can't be less than zero.

That's a very good question, and we don't have this across the board right now. So I would say no, we have not implemented this feature yet, and that's a very good point.

Okay, next one, about meta-learning: can we impose a DAG (I don't know what that is) to represent causal links?

So this would also be an advanced feature that we have not tackled yet. A DAG meaning a directed acyclic graph: no, we have not, and it would definitely be very interesting, but that would be a very advanced feature that's not included in darts.

Yeah, I think integrating it would be complicated, but maybe worth it.

Next one: how did you create the plots we got to see on the slides?

From the TimeSeries objects you can call plot(), which often already returns a pretty good plot, and it's based on matplotlib. So you can add extra features if you want, the same way you would for any matplotlib plot: customizing the legends or titles and all these aspects. This can be easily done from the TimeSeries object.

Okay, cool.
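The answer above described ensembling as running a regression on the individual models' forecasts. As a rough illustration of that idea only, here is a plain-Python sketch that learns least-squares weights for combining two forecasts on a validation window; the helper names are hypothetical and this is not darts' actual ensembling API.

```python
def fit_weights(f1, f2, actual):
    """Least-squares weights (no intercept) for combining two forecasts,
    solving the 2x2 normal equations by hand."""
    a = sum(x * x for x in f1)
    b = sum(x * y for x, y in zip(f1, f2))
    c = sum(y * y for y in f2)
    d1 = sum(x * t for x, t in zip(f1, actual))
    d2 = sum(y * t for y, t in zip(f2, actual))
    det = a * c - b * b
    return (d1 * c - d2 * b) / det, (a * d2 - b * d1) / det


def ensemble(f1, f2, w1, w2):
    """Combine two forecasts with the learned weights, step by step."""
    return [w1 * x + w2 * y for x, y in zip(f1, f2)]


# Validation-window forecasts from two (imaginary) models, plus the truth:
f1 = [1.0, 2.0, 3.0]
f2 = [2.0, 2.0, 2.0]
actual = [1.5, 2.0, 2.5]
w1, w2 = fit_weights(f1, f2, actual)  # both 0.5 here, since actual is their mean
combined = ensemble(f1, f2, w1, w2)
```

In darts itself, the same role is played by ensemble models that accept a regression model to blend the base forecasters; the sketch above only shows the underlying math.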
So I think we have time for two more. I think this one is an easy one: I noticed that darts includes Prophet; are there any plans to support Greykite also?

Sorry, I don't have too much knowledge of Greykite, so I can't really say, but we will check it out. Thanks for mentioning it.

You can have a chat in Matrix about that, then. Oh, we have time, so: the original N-BEATS is univariate; did you implement a multivariate version?

Yeah, that's correct, the original version is univariate. We have adapted it to also support multivariate time series, but I would say we didn't really look deep into the architecture to optimize it for that. So yes, you can use it for multivariate predictions, but it was a somewhat ad hoc adaptation that we did.

Okay, so one more, and this is the last one: can you please briefly explain what kind of meta features are used when learning on new data?

Okay, so I suppose this addresses meta-learning. If your question regards what kind of features in the data our models pick up when learning on multiple time series, then I suppose those can be basic things like trends and seasonalities, but I'm afraid I can't tell you much more than that; I'm not sure I understand the question completely.

Yeah, that was a question from Sai-jan; maybe he can ask you later in the chat. So thank you very much, that was a nice talk. The next talk is a keynote, so if people are here, you have to move to the Optiver track to see it, and you can find Gael and Francesco in the breakout rooms. I will copy-paste all the questions there so they can reply later. So thank you very much, and thank you for being here.

Thanks for having us. Enjoy the rest of the conference.

Thank you for the session. Bye. Thanks.
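As a closing note on that last question: "trends and seasonalities" as meta features can be made concrete with two very simple statistics, a least-squares trend slope and the autocorrelation at a seasonal lag. The sketch below is plain Python with hypothetical helper names, added for illustration; it is not part of darts, which exposes related statistics through its own utilities.

```python
def trend_slope(values):
    """Least-squares slope of a series against time steps 0..n-1:
    one crude 'trend' feature a model could pick up."""
    n = len(values)
    t_mean = (n - 1) / 2
    v_mean = sum(values) / n
    num = sum((i - t_mean) * (v - v_mean) for i, v in enumerate(values))
    den = sum((i - t_mean) ** 2 for i in range(n))
    return num / den


def autocorr(values, lag):
    """Sample autocorrelation at a given lag: high values at the
    seasonal period indicate a strong 'seasonality' feature."""
    v_mean = sum(values) / len(values)
    num = sum((values[i] - v_mean) * (values[i - lag] - v_mean)
              for i in range(lag, len(values)))
    den = sum((v - v_mean) ** 2 for v in values)
    return num / den


slope = trend_slope([1.0, 3.0, 5.0, 7.0])   # 2.0: series grows by 2 per step
r = autocorr([1.0, -1.0] * 4, lag=2)        # 0.75: strong period-2 seasonality
```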