Thanks Martin. So now I'm actually very nervous, because coming after Mabel — pardon my coding. It may not be the best or cleanest code, and there's no TypeScript here, so pardon me. As Martin introduced me, I'm a bit of a jack of all trades but master of none, so my main objective here is just to get it to work; I feel like a student getting marked by Mabel. The topic I'm going to touch on today is automatic cube creation, but the whole idea is to have a small program that lets end users interactively create a Python BI analytics platform, so that they can create dashboards and perform some analytics. Let's skip the technical part first — let me show you what I mean. I have this notebook that I've already run through; ignore the code for now. Here I have a button. I'm going to click on it and select a dataset — I've pre-downloaded some datasets from Kaggle — and I'll choose avocado.csv. Once I select that, I get the list of columns from the dataset. The next step is to choose a set of keys. So what are keys here? In Atoti, keys identify unique data rows in your dataset. Why is that important? First, when your dataset is not unique — when there are duplicates based on this set of keys — the last uploaded record overrides the previously uploaded data, so only the latest unique records are kept in the cube. The second reason for keys is that Atoti indexes the data based on them, which speeds up querying, so query performance will be much faster. And in case you don't know anything about the data, you can always choose none. What does this mean?
It means that all the columns are used to identify the unique rows. So if you're unsure, choose none. In this case I'm going to choose date, type, year and region, and once I submit, you can see that my program starts creating a session, then takes the data I've uploaded and creates some Atoti tables. Give it a little while — you can see a progress bar moving as the program works. Once the data is loaded into the cube, voila: I have a BI analytics platform from one CSV. Can you imagine your user is not very tech-savvy, but they have a dataset they want to analyze? You create this small program, give it to them and say: hey, now you can analyze. But they stare at it — what do I do next? I have a GUI here, so next we create a new dashboard. I'm going to zoom out a little, it's very big; let me just reset it. On the left-hand side we have a couple of drawers: the content editor, which I'll come back to later, then the filter editor, the widgets, and the style editor; on the top we have the ribbon. I think it's pretty intuitive, because it's similar to how Excel works: you have the ribbon on top and some editors on the left. So what I'm going to do now is select the years, and then ask: what is the total volume of avocados sold across the years?
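The key behaviour described earlier — a later record with the same key replacing the earlier one — can be sketched with plain pandas (my illustration of the concept, not Atoti's internals; the column names come from the avocado example):

```python
import pandas as pd

# Two loads of the same (Date, type, year, region) key: the later row wins,
# mimicking how the last record loaded for a given key overrides the first.
df = pd.DataFrame({
    "Date":   ["2018-01-07", "2018-01-07"],
    "type":   ["organic", "organic"],
    "year":   [2018, 2018],
    "region": ["Albany", "Albany"],
    "Total Volume": [10_000.0, 12_345.0],  # second load corrects the figure
})

keys = ["Date", "type", "year", "region"]
deduped = df.drop_duplicates(subset=keys, keep="last")

print(deduped["Total Volume"].tolist())  # only the corrected figure remains
```

Choosing "none" corresponds to `subset=None` here: every column participates in the key, so only fully identical rows collapse.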
So now I have a table — and if you're not happy with a table (okay, I'm not happy with the table, I want to see a trend), let me switch over to a line chart. Easy, right? Now let's say your user says this is not very interesting. I can split it again by type, so now I have the trend of conventional avocados versus organic avocados. And of course you can drag and drop even more: say I want to compare across the years, maybe the sales of 2017 against 2018, looking at the total volume. Now you can see the difference across the two years. So the dashboard is yours to build — you can play around with different kinds of visualizations. You can have a pivot table that lets you drill down: for example, for each region I want to look at the sales by date, and maybe I want to see the large bags, small bags and total bags. Let me just collapse this a little. We can also add a bit of storytelling, some interactive components for your end users. For instance, I select region — I can multi-select, say California and Chicago — and if you don't want a multi-select, you can change it to single select, so that at any point in time they can only select one. So the question to you is: how much time would you take to develop this as an application yourself? For one data source, one CSV file, is it worth the effort to build a whole BI analytics platform? Not really, right? That's the beauty of coding in Python: there are so many libraries out there that you just need an idea, look for the correct library, piece them up like Legos, and you get something like this. So now I can easily go back to my notebook and click on upload again, with another dataset that I've downloaded from
Kaggle, the DS salary dataset. So now I reset the whole program, and I say: okay, based on the work year — or maybe I'll just choose none this time around; I don't select any key. You see here that I'm deleting the existing unnamed session to create the new one. This is because I'm a bit lazy: I'm not persisting anything, I just want to explore the data, and I want to let my user explore the data any time they're ready, so I destroy the previous session and re-instantiate a new one. Now the users can go in and look at, say, the salary for a given job title. And maybe now you see the problem: my work year becomes a sum. I have a mean and a sum — I don't know if it's big enough for you to see — for every column. In Atoti, when you have a dataset, there are two types of columns: numerical and non-numerical. Normally we want to look at a business metric — the figures — but along some hierarchies. What are hierarchies? For example, company location. So in this case, year should be a hierarchy, but it became a measure, because I didn't select it as a key. Theoretically I should have selected it as a key; then it would be created as a column and I'd be able to query along it. So far okay? Am I boring you? All right, as you can see, we can easily put everything together. Now let's take a quick look: you can see that earlier I created a dashboard. I could have saved it, but I didn't perform a save, so it shows up blank here. Let me go back to the technology behind this — let me zoom out a little. In this notebook I use a couple of libraries, but most importantly I want to highlight two. For the interactive components, can you all guess which library I use? No? I use ipywidgets.
ipywidgets gives you this kind of float widget, progress bar, multi-select, buttons, upload button and so forth. So check it out — ipywidgets, the Jupyter widgets: there is a list of interactive components that you can put into a Jupyter notebook to get this kind of interaction. That's the first part of this program: letting your users select their own CSV. Then, to create the BI analytics platform — the second part — easy, right? Exactly, that's why I'm here today. Atoti is a free Python library, so you can download it and play around with it. Now let's take a look. I won't go into the details of how I integrate ipywidgets — so long as you know how to retrieve the data source and pass it on to the next function, you can piece the whole program together — and anyway I'll be sharing it with you later. Let's look at the create-cube part; let me zoom in a little. It's very straightforward. Step one: we create a session, an Atoti session. What I've done here is fix the port to 9090. You can change the port; by default you don't even have to pass anything, and when you don't, it creates a random port number for you — which is bad. If you're just playing around exploring some data, that's fine, but if you're going to share your dashboard with someone else... Say, the dashboard that we created earlier — see, now it's broken, because I recreated the notebook, so the data and the columns have changed. But suppose I want to share the dashboard I have here: I can just send this URL to someone else, provided they're on the same network (this is localhost, so nobody can access my machine). So I would want to fix the port, so that they don't have to re-bookmark the dashboard every time. And the second reason to fix the port: the firewall.
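Going back to the ipywidgets part for a moment — a generic sketch of the building blocks used in such a notebook (widget choices and names are my own illustration of the pattern, not the speaker's exact code):

```python
import ipywidgets as widgets

# Building blocks for the interactive notebook: an upload control for the
# CSV, a multi-select for the key columns, a progress bar for the build
# steps, and a submit button.
upload = widgets.FileUpload(accept=".csv", multiple=False)
keys = widgets.SelectMultiple(description="Keys",
                              options=("Date", "type", "year", "region"))
progress = widgets.IntProgress(min=0, max=4, description="Building")
submit = widgets.Button(description="Submit")

def on_submit(_button):
    progress.value += 1  # advance the bar as each build step completes

submit.on_click(on_submit)
```

In the real notebook, the upload callback would read the CSV into pandas and set `keys.options` to the DataFrame's columns before the user submits.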
If you're going to production with an actual project, you need to fix the port for the firewall; it's much more controllable with a fixed port. Step two: earlier on we selected the CSV and uploaded it into the notebook. In this use case I converted it into a pandas DataFrame, and the keys are the ones you selected earlier to identify your unique rows. Using session.read_pandas, I can create an Atoti table and load the data into it. Of course, in a proper project setting we would first create a table and then load the data in, but here, because I want it to be dynamic, the table should be created from whatever the user uploaded; read_pandas automatically creates the table structure based on the columns available in the dataset. The next step is to take this table and create a cube. Has anyone played with a cube before, an OLAP cube? No? Basically, Atoti creates an in-memory data cube that lets you view the data in different dimensions. You can slice and dice, switch your perspective around — like how I drilled down just now: I can look at the year, I can look at the company, I can switch my view any time I want. That's the beauty of having a cube. Later on I'll tell you more about how our cube differs from the typical OLAP you may know. Finally, I'm just using webbrowser to open a URL — strictly speaking it's an internal detail, but to keep things simple I use the local URL; with session.link I can get the URL to the web application and launch it, and then you can start building. So it's very easy: four main statements to create the cube, and voila, you have your BI analytics platform.
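The four main statements described above look roughly like this (a sketch, assuming a recent atoti release — exact names such as `Session` and `link` have shifted between versions, and `df`/`selected_keys` stand in for the uploaded data):

```python
import webbrowser
import atoti as tt

# 1. Create a session on a fixed port (omit `port` for a random one).
session = tt.Session(port=9090)

# 2. Create a table from the uploaded pandas DataFrame; the table structure
#    is inferred from the DataFrame, and `keys` identifies unique rows.
table = session.read_pandas(df, table_name="data", keys=selected_keys)

# 3. Build the in-memory cube on top of the table.
cube = session.create_cube(table)

# 4. Open the bundled web application to start building dashboards.
webbrowser.open(session.link())
```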
So now we are done with the basic crash course. Of course, if it were that basic, my company wouldn't need me anymore and I'd say adios, goodbye. So I've created an advanced version just for this session. What can I advance on? Before I go into this advanced mode, let's have a quick recap of what we can do here. Basically, the idea is that we create a session, and within the session we can do data loading: we create an Atoti table and load the data — or, the other way around, we use a connector to directly read your data source and create the table. So there are two ways to go about it, and we have a few data connectors, such as SQL, Spark, pandas, Parquet, CSV and NumPy, for instance. That's another beauty of Python: even if I don't have a connector for your source, I'm sure you can load it into pandas, or NumPy, or a Spark DataFrame — so long as you can get it into a format my connectors can read, you can use Atoti. I showed you the simplest format, a single table, but imagine a database: you can have multiple tables, depending on how your data source is organized, and you can join those tables together based on shared columns. Join them together and you have a snowflake schema. By snowflake schema we mean that there is a base table containing the most granular data, and we use that base table to create the cube. What I've shown you so far is a single cube within a session; within a session you can have multiple cubes if you want. Again, that is part of how you plan your data model: you group data of the same structure, the same storyline, together.
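The multi-table, snowflake case could be sketched like this (hypothetical table and column names; the join-condition syntax follows recent atoti releases and may differ in yours):

```python
import atoti as tt

session = tt.Session()

# Base table at the most granular level, plus a reference table.
trades = session.read_csv("trades.csv", table_name="Trades", keys=["TradeId"])
books = session.read_csv("books.csv", table_name="Books", keys=["BookId"])

# Join on the shared BookId column to form a snowflake schema, then build
# the cube on the base (most granular) table.
trades.join(books, trades["BookId"] == books["BookId"])
cube = session.create_cube(trades)
```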
If your company wants to see P&L versus, say, intraday liquidity or whatever it is in finance, you can have different cubes, and they are all accessible within one BI platform: one session, one BI application. So far okay? Then let me quickly show you the difference. Same thing here: I scroll right to the bottom and do the upload, but now I use a financial dataset — the VaR dataset, value at risk. First of all, you can see that I'm now exploring with a different ipywidget: I use checkboxes for my keys, with all the columns exposed here, and I have a dropdown list. Can anyone guess what this dropdown list is for? The types — exactly, see how well you listened to Mabel. Let's take a quick look at my data. Because it's a financial use case, and in finance we want the daily profit and loss, imagine I have a figure per day across 300-plus days. If I stored each as an actual data row, I'd have to identify the key — say instrument code and book ID, then the day — and then the value, and that's multiplied by 365, so the dataset gets huge. By having a list that stores the value for each day instead, I compress my original dataset. But if I load this dataset into pandas, what do you think the inferred data type will be? A string, unless you cast it — and I wouldn't want a list to be treated as a string. That is why, over here, I can choose my PnL vector to be a double array. Then I select my keys for the dataset and submit again. There are some other slight differences, but let me show you once this is created: you can see that I have flagged all the numerical columns, because later on we'll see that I've created some additional measures in this system, instead of just the default mean and sum that we saw
when Atoti created the cube earlier. So can anyone spot the difference? Nobody? This is the default landing page, and the main difference is that now I have a demo folder and a data exploration dashboard. Let me go into presentation mode. Earlier on, when I restarted the session, it was a clean page — nothing persisted, even though I had created some dashboards. By default, Atoti does not persist anything: the data cube is created in memory, your tables are in memory, and the dashboards, widgets and filters you create are in memory, so when you re-instantiate a session, everything is gone. But we can persist things if we want, and that is the first change in this advanced notebook. Let's quickly go back to the code. You can see that in my session function, other than the port number, I have also used a parameter called user_content_storage. In the content folder I've defined — let me go back to my root folder, you can see it here — there is this content.mv.db, which is an H2 database. It stores the widgets you create, the filters you have saved, and the dashboards you have saved. So now, if I want to share with someone else, I can copy this URL, open an incognito window, paste it, and you'll see the exact same dashboard. If you put this on the cloud, with a public IP address, you can share it with any of your collaborators — and the good thing is that, because everything is created on the fly, whatever changes I make in the Jupyter notebook, the other side sees immediately. So that's the first change: we now have persistence.
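The persistence change amounts to one extra session parameter — a configuration sketch (the folder name is the one shown in the demo):

```python
import atoti as tt

# Pointing user_content_storage at a folder makes the session persist saved
# dashboards, widgets and filters into an H2 database (content.mv.db) in
# that folder, instead of keeping them only in memory.
session = tt.Session(port=9090, user_content_storage="./content")
```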
The next thing I want to show you: as a user who doesn't really code, if I just want to compute something new, do I have to go back to IT? Because if I do, it's requirements gathering, development, SIT, UAT, production — and I get my result one or three months later. But there is this function called new calculated measure. For example, I'm going to compute a PnL. The square brackets here are basically the OLAP syntax: I type M and it auto-suggests the available measures. So I take my PNL.VALUE and multiply it by another measure — in this case, Quantity.SUM. I can add it to the view — ah, I've added it to the wrong view, because that was my selected widget. You can see my PnL here; let me make a small change and save. Let me go back a little — sorry, it's a bit small — let me undo. Okay, now I have my PnL here, and I'm going to save my calculated measure. This is where it gets committed into the H2 database I mentioned earlier. Then I select the correct widget, go to the file menu, pick my saved measure and apply it. Back in presentation mode: now I have the quantity multiplied by my PnL value to get this PnL. So your users do have some control over measures they want to compute on the fly, without going back to IT. Any questions so far? Nobody's asleep yet, right? One more question: [Audience] are those saved sessions shared by one login, or do different users have different sets of saved sessions in the H2 database? Well, basically you view this as one application, and many users access the application, so there's only one session. Of course, you can create multiple sessions within the Jupyter notebook.
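Coming back to the calculated measure for a moment — the same thing can also be defined from the notebook side rather than the UI dialog (a sketch; `PNL.VALUE` and `Quantity.SUM` are the default measure names generated in the demo, and `cube` is the cube created earlier):

```python
# m is the cube's measures mapping; the new measure mirrors the UI formula
# [PNL.VALUE] * [Quantity.SUM].
m = cube.measures
m["PnL"] = m["PNL.VALUE"] * m["Quantity.SUM"]
```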
But each one will have its own web application. [Audience: if someone new comes in and deletes something, is there a way to roll back?] Theoretically that's a process outside of this — that's where your GitHub or Bitbucket version control comes in, with your commits and so on. And theoretically you shouldn't be sharing the same notebook, because there's only one kernel per notebook; if you restart the kernel, the session is gone. So you should have something like JupyterHub, where everybody can spin up their own instance, copy a version of this notebook and adapt it to their own use. But as a project — if you're running this as an actual project — typically you wouldn't run a Jupyter notebook in production; you'd run it in Docker, usually, if you want to maintain it for a long period, so you can choose not to run it in the notebook at all. Theoretically you could extract this out into a Python script, but then you lose the ipywidgets interactive components. One last question before Q&A: [Audience] does Atoti provide hosting of these Jupyter notebooks? No, no — it's just a Python library, so you can use it in a Python project or in the notebook. Fundamentally, Atoti is a BI analytics platform; it provides a holistic solution, so you should be able to create your own measures and KPIs based on your own formulas and computations. In this program I have only incorporated one very simple measure, called single value. This is where I bring you back to the product: going back to the documentation — of course you can install it and go through the tutorial on how to use it for your own project —
under the API references, if I look at the aggregation module, you see there's max, mean, median and so on — a lot of functions available. Not to mention that you can order your data by some scope, or perform functions like date differences, date shifts, or looking at a parent's value and so forth. So if you have a formula, it's a matter of how you use the various functions and chain them together to build it — pretty much like Excel, where you take the built-in functions and compose them bit by bit, except here it's all in Python. In this use case, what I'm using is single value. Maybe it's not very intuitive, so here's the idea: for the members of a level, if all the members have the same value, it returns that value; if they differ, it returns nothing. Say you go to a shop, and all the Hitachi TVs sell for 500 dollars, but one particular model sells for 499 — then it won't return a value for Hitachi TVs. But if, say, Samsung is 999 for all its TVs, then single value returns 999. It's a very minute thing, but in some cases you need it. So, just to demonstrate that we can create some measures: depending on how dynamic you want your cube to be, in this setup you can keep adding to it and expanding it. And to go back a little to the part on data types: because Atoti inherits the data types from the pandas DataFrame, we take the type you selected earlier and cast the column with it; if you didn't select a data type, we automatically take the type inferred by pandas.
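The single-value semantics just described — return the value only when every member agrees — can be illustrated in plain Python (the TV prices are the speaker's example; this helper is my illustration, not Atoti's implementation):

```python
def single_value(values):
    """Return the common value if all values are equal, otherwise None."""
    distinct = set(values)
    return distinct.pop() if len(distinct) == 1 else None

# All Samsung TVs share one price, so a value is returned...
print(single_value([999, 999, 999]))   # -> 999
# ...but one Hitachi model differs, so nothing is returned.
print(single_value([500, 500, 499]))   # -> None
```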
Okay, I think that is about it for the Atoti part. If you're still with me, I can explain the technology behind the product. So: it can be a Python project — you take the library, you take your own data, multiple datasets, create multiple tables, join them together, create a cube, perform your computations, and so on — or you can use Jupyter Notebook or JupyterLab, like I'm doing now. Atoti has some custom features that we leverage there to explore your data, to make prototyping much faster, and for an interactive experience. Underneath, it's actually Java — I'm a Java developer myself; I learned Python only three years ago. So underneath is Java, with the in-memory data cube and the BI analytics platform. The history of the product is that these used to be sold as separate products, and now, with a Python wrapper, we decided to make it open software, so the entire thing is free — feel free to use it; of course, check out the EULA first. Everyone still with me? Can I continue a little more, five more minutes? Okay. Now, a little more about the data cube, OLAP. Typically in OLAP we have some dimensions and some measures. As I mentioned just now, we created only one Atoti table, so I have only one dimension, which I named "table", by the way — a silly name, sorry. This is the parent, and this is the table I created. Each non-numerical column, or key column that we selected, is created as a hierarchy, and beneath each of these hierarchies is a single level. And by the way, it can be more than 3D — it's called multi-dimensional, exactly. So each of these is
a hierarchy, rather, and each table you create is another dimension: a dimension groups hierarchies, and a hierarchy is the axis you query your business metrics around. Say: which building, what time, maybe who — those are the different dimensions you can look at. And, say, quantity — all the numerical figures you're interested in statistically — those are measures. By default Atoti creates them as mean and sum, and then we can also have .VALUE — this .VALUE is the single value measure I created earlier. So that is the structure of a cube. Now, with this structure, let me quickly show you: this is a customized feature of Atoti, the Atoti editor, which lets you interactively build the same thing we built earlier. I can have my underlying code, and here I can drag and drop into the table and get a collapsible pivot table. So I can look at, say, Quantity.SUM for each of these, and maybe the PnL vector's .VALUE as well — you can see my PnL vector has 372 values in the list. Now let's work a little more with the vector — with the cube, rather. To work with the cube, I can call cube.hierarchies, cube.levels or cube.measures; those are the three attributes of a cube you start from. So in this case I have my PnL vector, and I'm going to scale it by Quantity.SUM — it's as simple as multiplying the one measure by the other. Again, going back to what Mabel was saying: if I hit tab here, you'll see the auto-
suggestions for the available values you can use. In this case I'll use PnL vector.VALUE. Moving on — let me collapse this so we can see properly. Now that I have my scaled PnL vector, I'm going to perform an aggregation. There are two levels in the scope here: instrument code and book ID. For any query at or below those two levels, I take the value of my scaled PnL, but for anything above them, I perform an aggregation — a summation. What do I mean by that? Let's do another visualization. This is my original vector and this is my scaled one. Look at a simple number: for example, this one was negative, and when I multiply by the quantity of negative one, it becomes positive. At the instrument code level, which I put in the scope, the value is exactly the same, but on top of it I now have a summation — that's where the aggregation function kicks in, giving me a value at the total level. But we don't really want to work with raw vectors — they're not very readable, we can't really use them — so what we're going to do now is create value at risk; that's the main purpose here. For the value at risk, I'm going to use the array function quantile to take the 95th percentile of my position vector. Now I can do a visualization — let me drag this a little. Again I have my instrument code, my PnL vector, and my VaR here — you can see it appears immediately. And if I go back to my dashboard, you'll find the VaR there as well: whatever you do on the Jupyter notebook side becomes available on the dashboard side too, so your end users will be able to see it immediately.
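Numerically, the scaling and the quantile step work like this (a NumPy illustration of the idea, not Atoti code; the figures are invented):

```python
import numpy as np

pnl_vector = np.array([-120.0, 35.0, -60.0, 80.0])  # daily PnL scenarios
quantity = -1.0                                      # position quantity

# Scale the PnL vector by the quantity: a negative quantity flips signs,
# which is the sign change seen at the instrument-code level in the demo.
scaled = quantity * pnl_vector

# Value at risk: take the 95th percentile of the scaled scenario vector.
var_95 = np.quantile(scaled, 0.95)
print(scaled, var_95)
```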
So now I actually have my value at risk, and, say, I add my book ID because I want to view it by book. You can see that when I sum up all the instruments under my book, the total should be 101,927.309, but the top value here shows 47,080.63. Why is that? Because we are summing up the vectors first and then taking the 95th percentile of the summed vector. This is what we call non-linear aggregation: you can decide what formula to apply at what level, and the result actually changes based on your query — everything is computed on the fly as you query. This is different from classical OLAP, where you have to perform the pre-aggregation first and load everything into the OLAP cube before querying; here we define the formula, and the values are computed on the fly as you query. Not to mention that we support incremental data loading: when new data comes in, I can call the load function, load it into the table, and immediately see it in the data cube without restarting — in a typical OLAP setup you would have to rebuild the cube. Finally, the last portion, which may be interesting for you. You can see the formula here — let me collapse this a little — where we take the quantile of the array at 0.95. I'm going to create a parameter simulation on that 0.95: I create a simulation with a measure called confidence level, defaulted to 0.95, and I call the base scenario "95%". If I query it now, it shows this measure with the value 0.95, and I can output this to a DataFrame.
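The non-linear aggregation effect — the quantile of summed vectors versus the sum of per-instrument quantiles — is easy to reproduce (a NumPy illustration with made-up numbers):

```python
import numpy as np

# Scenario PnL vectors for two instruments in the same book.
a = np.array([10.0, -5.0, 30.0])
b = np.array([-20.0, 25.0, 1.0])

sum_of_quantiles = np.quantile(a, 0.95) + np.quantile(b, 0.95)
quantile_of_sum = np.quantile(a + b, 0.95)  # how the book-level VaR is computed

# The two disagree: VaR does not aggregate linearly, so the book total
# is not the sum of the per-instrument VaRs.
print(sum_of_quantiles, quantile_of_sum)
```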
Then I can query it along some other levels as well — say book ID, and maybe instrument code too. If I output this with df.head(), you see that you can do your measure computations and aggregations, like in a pivot table, query the result out, and send it downstream: further computation, merging with other data, even creating another cube again — the imagination is yours as to what you do with the data. Now, back to our initial formula. Earlier I used 0.95 in the definition; now I override it with my new parameter simulation, with the confidence level defaulted to 0.95. Let's do the visualization again: I still have my VaR, and now I create two more simulations, called 90% and 98%, with the values 0.9 and 0.98. Finally, I can visualize them side by side in this manner: my computation at the 90th and 98th percentiles compared against my 95th. If you look at the editor, all I added is the simulation hierarchy — confidence simulation. So basically, the small program I created just facilitates users quickly doing analysis around a single CSV data source, but the library itself is not limited to that; you can expand on it depending on what you want to do, and build things like this example. The key point is that, with the vast number of libraries out there, you don't have to code everything yourself — it's a matter of imagining what you want to achieve, finding the correct libraries and putting them together, and you get a new product. With that, I end my session. Any questions?
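Before the Q&A — the simulation steps just walked through look roughly like this in code (a sketch based on the walkthrough; "Scaled PnL" is a stand-in measure name and exact signatures vary by atoti version):

```python
# Create a "Confidence level" measure defaulting to 0.95, with the base
# scenario named "95%".
confidence_simulation = cube.create_parameter_simulation(
    "Confidence simulation",
    measures={"Confidence level": 0.95},
    base_scenario_name="95%",
)

# The VaR formula reads the parameter instead of a hard-coded 0.95.
m["VaR"] = tt.array.quantile(m["Scaled PnL"], m["Confidence level"])

# Add two more scenarios to compare side by side with the base 95%.
confidence_simulation += ("90%", 0.90)
confidence_simulation += ("98%", 0.98)
```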
That limits how much data you can load into the cube, and so on. By default, I think we set a limit of about 25 percent of your machine's RAM, but you can actually adjust this using the Java options. In fact, if you put it on the cloud, you can scale it according to your needs as well. Yeah, that's about it.

Wow. Well, I'm not really a Power BI user, but I know the sharing is a little more difficult because, if I'm not wrong, you have to engage another Power BI product to be able to start sharing; you have to host it somewhere. But here, as I showed you just now, as long as you host it somewhere that other people can reach, even on your intranet, anyone who can access your machine can use the IP address to access your dashboard.

That's where the paid version comes in. The free version is basically without security, so if you want to implement security, like login access and control over who can access the various data sets, you have to go for Atoti+, which is the paid version. Then you can connect it to your LDAP, or if you have OIDC you can have an authentication mechanism implemented. Your users can log in, and based on their roles or user groups you can configure whether they can access certain files, certain folders, certain dashboards, or even down to the data layer: you can say that team A can only access country A, team B only country B. That kind of possibility is there once you have the authentication mechanism. But by itself the free version lets you do a lot of things; everything I've shown you today is available in the free version.

Yes, you can actually find this use case in the GitHub gallery.
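For the memory ceiling mentioned above, here is a hedged sketch of passing JVM options at session creation. The exact session API has varied across Atoti releases and the 8 GB value is an arbitrary example, so treat this as illustrative rather than definitive:

```python
import atoti as tt

# Raise the JVM heap limit for the embedded in-memory cube.
# "-Xmx8g" is an arbitrary example value; size it to your machine.
session = tt.Session(java_options=["-Xmx8g"])
```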
Go to github.com/atoti/notebooks, and all the use cases are there. In fact, if you expand the notebooks folder, you can see we have a lot of use cases; one of them was contributed by a user, and the rest we created on and off. The auto-cube one is the one I have just demonstrated to you; there's a main and an advanced version, and it's a step-by-step guide, so you'll be able to follow through it yourself. And if you're more interested, you can go to the Atoti publication on Medium, where for each use case we usually try to create a corresponding article explaining how it works, so you should be able to learn and pick up the tool by yourself. You can also see the different ways to secure your Atoti session there, for instance.

You said you wrote a Python wrapper around your libraries. Did you use reflection, and how did you make the bindings between the two? — I won't be able to tell you; I'm not from R&D. I would love to tell you, but sadly they think I'm too talkative, so I'm in the evangelist job and not R&D.

It's not open source, so it's, what, freemium? — Yes, exactly. But with the free version, at the moment, at least from my perspective, you can already do all kinds of aggregations. The catch, of course, in the EULA, is that normally we say you can only have one builder and one reader, meaning you can't really share your dashboard with a lot of people; we set some limitations there. Initially, when we first started out, we targeted the data science domain, because a lot of the time we find that data scientists find this very useful for prototyping, exploring their data, building out models, and so on, and for running simulations as well. Every time you have a machine learning algorithm that outputs some values, the question is how the business would find value in the prediction. So then you put
your prediction into Atoti, where you have already configured the business KPIs. You create it as a scenario and show the business users the original value versus the predicted value, and when the actual value comes in, you can show that too. Once the model is set up, you just need to feed your machine learning predictions into the system. Likewise for the financial industry: you have your own risk engine, Monte Carlo simulations, and so on; you can load the output in, run it through the business KPIs, and see how the scenarios differ from one another. That's the idea.

So, in the dashboard you can make it a report, and the question is: if the data has changed, is there a refresh point for a dashboard you have already set up, looking at it from the perspective of a report? Yes. Basically, because Atoti supports incremental data loading, you can add data on the go without having to restart, and your users will be able to access the new data immediately when they query it. If I go back to the dashboard itself, let me go into presentation mode: notice this small icon here; you can turn on real-time mode, meaning that as data comes in, your query refreshes and you see the latest values on the widget. You can control this widget by widget, but it depends on your business use case, because not everybody requires real time, and it is more resource-intensive. Any time you want the latest data, you can also just right-click and refresh the query. Then it's a matter of how you organize your data: for example, in a bank you have your as-of dates, day one, day two, day three.
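Conceptually, the incremental loading just described behaves like an upsert keyed on the table's key columns: new rows with the same keys override the earlier ones, and aggregates are recomputed at query time. A rough pandas analogy, with invented column names and figures:

```python
import pandas as pd

# Initial load: daily volumes keyed by (date, region).
table = pd.DataFrame(
    {"date": ["2024-01-01", "2024-01-01"],
     "region": ["West", "East"],
     "volume": [100, 200]}
)

# Incremental load: a correction for (2024-01-01, East) plus a new day.
new_rows = pd.DataFrame(
    {"date": ["2024-01-01", "2024-01-02"],
     "region": ["East", "East"],
     "volume": [250, 300]}
)

# Later rows with the same keys override earlier ones (keep="last"),
# mimicking how a keyed table keeps only the latest record.
table = (
    pd.concat([table, new_rows])
    .drop_duplicates(subset=["date", "region"], keep="last")
)

# Aggregates reflect the new data immediately, no restart needed.
print(table["volume"].sum())  # 100 + 250 + 300 = 650
```

In Atoti itself this is what loading into a table defined with keys does; here `drop_duplicates(keep="last")` merely stands in for that override behaviour.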
Then you just set up the slicing so that each day you see a single day, and order them so that you only see the latest date, for instance. When you want to see previous data, you can always play around with the quick filter, or here in the filter editor you have page filters, dashboard filters, and widget filters that you can apply to look at different data or date ranges.

I would suggest, if some of you have more questions, just come forward and ask him directly. Maybe the rest of us who don't have questions should finish up the pizza. I think he might also need some time to pack up his stuff, and it's quite late already. So once again, we thank you for your talk, very nice. And like I said, if you have more questions, just walk around, ask people, and help me finish the pizza.