And then at 4:45 we'll start one hour of poster session. So from 4:45 to, let's say, six o'clock, poster session, and then at 6:30 you are all invited to a dinner sponsored by the IAEA and ICTP. Okay, the previous lecture was a very heavy one, so in this one let's just relax and listen to something that we are currently developing at the IAEA. We call it a digital nuclear reactor. It's a data storage toolkit. In the course of the presentation I will explain the philosophy of what we are developing, why we are developing it, and what its uses will be, and eventually I'll make a call: if you want to collaborate on this project, we'll be very happy to have you join and work with the IAEA on the development of this toolkit. Let me set up the premise, as this will make very clear what the main goal of this digital nuclear reactor, this data storage toolkit, is. At the IAEA, as Vladimir explained in his very first presentation, we have lots of coordinated research projects (CRPs), which are mainly modeling and simulation exercises. Those of you who have experience with benchmark or modeling and simulation exercises must have seen that you receive a specification document, and based on that document you create the model in a neutronics code, a thermal-hydraulics code, a structural analysis code, or whatever analysis you want to do. As an example, here on the left side you can see the different CRPs that the IAEA has conducted or is conducting now: the one on EBR-II, which is complete; BN-600, which is complete; Phénix, which is also over; and we are starting these two new ones, CEFR and FFTF. At the end of every CRP we publish a document which ideally should contain all the information needed to replicate the exercise.
And that's also one of the purposes of the IAEA: to make this document open and public, so that in case you want to repeat the exercise and do the benchmark, you should be able to just take the document and do it. Whether people are actually successful in doing that, we don't know yet; it has not been quantified. The idea is that if we are able to store all the information required for the benchmark exercise in a digital format, and then provide it to the participants of the CRP, or to anyone once the CRP is complete, they should be able to extract all the data and perform the modeling and simulation exercise. With this in mind, the core concept is to create a universal data storage system, a universal data storage toolkit, plus some other features which, if and when developed, could further support that database. So ideally, at the heart of this will be a database, and then more peripheral functions can be developed on top of it. As a first step, we should be able to store everything in this storage toolkit. As a second step, we can provide it to the participating organizations during the research project, plus to all member states after the CRP is complete, because all the activities that the IAEA carries out are open to all our member states. Then as a third step, once they get this toolkit, it is up to the participating organization or the user to connect it to their codes and perform the modeling and simulation exercise. Currently this module is, let's say, under development, and it also depends on what kind of code you are using. So to summarize, it's complete knowledge preservation and ease of access. That's the main idea behind this. Okay. Moving further, the objective is to provide a universal data storage system, an all-inclusive database with the necessary reactor details required for modeling and numerical simulations.
The features which are currently developed, or should be developed, are these: we should be able to store reactor data in a hierarchical manner; I will explain this structure in the coming slides. The toolkit provides access to the stored reactor data and a standard interface for coupling with reactor simulation codes, so it should bring simplicity in terms of code coupling. There is already a module developed for visualization of the reactor geometry, which makes it very easy to troubleshoot or find errors in, let's say, the input deck preparation. It also already has some basic meshing possibilities; maybe I will explain that later as well. It can do some simple thermal expansion calculations, which become very important for neutronic calculations, and there are other features which I will also explain during the presentation. To understand the structure: here you can see that at the heart is the database. The boxes in yellow are things which are already developed or under development, and the boxes in gray are things for which the user can always develop their own interfaces and couple them in. So the database could be coupled with any reactor analysis code, but the interface has to be developed. It can be coupled to neutronics, fuel behavior, or structural mechanics codes, and various other utilities can be developed on top of it to support modeling and simulation exercises, like visualization and the service utilities I mentioned, such as meshing, thermal expansion, etc. We can also have a utility which directly creates an input deck for a particular code, which is the current line of work that I am pursuing. Now let's try to understand how this data storage toolkit is made and what the architecture behind it is. The format that has been chosen is the Hierarchical Data Format (HDF), which is an open-source data format, and the beauty of this format is that you can store data in an abstract manner.
So whatever idea you have in mind for storing the data, you can store it in that format in the form of groups and datasets. We will see in the next slide how this is done in our particular case. It can also be used to store and organize large amounts of numerical data, which is another requirement for us: the quantity of data in our exercises is always huge. And it's compatible with different programming languages, so it is agnostic in terms of operating systems as well as programming languages. The current toolkit is developed in Java, but the interfaces could be developed in any language you like. Data can be stored in multi-dimensional arrays and organized in an abstract manner, as I said. So here again is the database; the database is connected to the graphical user interface, which we call the user control center. I'll show you some images of the user control center, and then we can link it directly to different codes. Here is the structure of the database. There is a root, and then a setup file is created, and the data storage is actually based on the structure of the reactor, starting from the pin to the assembly level, or let's say starting from the pellet to the assembly level. An HDF object can be considered as a container: a container which, in HDF terms, is defined as a group, and which holds a variety of heterogeneous data objects called datasets. Here you can see in the legend that all of these are groups, and then we have different datasets. You can see that we can define different subassemblies, like fissile, blanket, control rod, et cetera. We can define different pins in different datasets and combine them into one group. We can also define, let's say, the axial levels of the different subassemblies, which are called the subassembly setups here; a subassembly could be a fissile one, a blanket, or a control rod.
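To make the group/dataset idea concrete, here is a minimal sketch in Python using the h5py library. The toolkit itself is written in Java, and all group, dataset, and attribute names below are hypothetical illustrations, not the toolkit's actual schema:

```python
import h5py

# Minimal sketch of a hierarchical reactor setup file.
# All names and values are illustrative only.
with h5py.File("reactor_setup.h5", "w") as f:
    # a pin is described by small datasets grouped together
    pin = f.create_group("pins/fuel_pin")
    pin.create_dataset("pellet_diameter_mm", data=5.4)
    pin.create_dataset("clad_thickness_mm", data=0.4)

    # a subassembly groups lattice parameters with a link to the pin data
    sa = f.create_group("subassemblies/fissile")
    sa.attrs["pin_count"] = 61
    sa.create_dataset("pin_pitch_mm", data=6.1)
    sa["pin"] = h5py.SoftLink("/pins/fuel_pin")  # reuse the pin definition

# reading back: a change made once under /pins/fuel_pin is seen by
# every subassembly that links to it
with h5py.File("reactor_setup.h5", "r") as f:
    pellet_d = f["subassemblies/fissile/pin/pellet_diameter_mm"][()]
```

The soft link is one way to get the "change it in one place" behavior described later in the talk: the pin definition lives once in the file, and every subassembly that references it sees the same data.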
Then all these groups are combined together to form a setup file, which can be arranged to form a complete core map. I will show you in the coming slides how, with the help of these simple datasets, we can form a complete reactor core; we call it the core setup file and the core mapping file. So I call it an intelligent database that understands the design philosophy of a reactor: it follows the very logic of a nuclear power plant or a reactor. Here you can see, starting from the pin to the whole core, how the data is arranged. In the database there is the possibility to define everything down to the level of the pin: going from the pellet to the pin, from the pin to the assembly, and from the assembly to the whole structure of the core. The other benefit of organizing data in this format is that, for example, if you want to change one parameter, you don't have to change it in every code that you are using. As I said, the idea is to make a central database and then connect different codes to it. If you have to change some geometric or physical property of some part of the reactor, you just change it in the dataset, at one particular place in the database, and the change will be reflected in whichever codes you are using. The toolkit is also designed in such a way that it links all the datasets through a common file, so the other benefit is that, as everything is interlinked, one change will again be reflected throughout the whole database. That's what I mean when I say the idea is to make an intelligent database. As I said, we have already developed an integrated user control center over it. It aids in visualization and provides a graphical user interface for its applications. The reactor setup file can be made with the user control center and the graphical viewer; there is a graphical viewer where you can input the data and create the datasets.
The structure of the setup file is made as explained in the previous slide: through all the levels of the reactor, the way you would understand the reactor yourself. The data is broken into meaningful datasets based on the parts of the reactor, so that changes made to one part can easily be propagated up the hierarchy. Here, just to give you a glimpse of the control center and how it looks: it starts with a basic set of templates that can be modified. We have already defined a few geometries and a few datasets, at least for fast reactor systems, and more geometries can be derived from this basic set. There are other geometries which I am currently developing, so as to take a holistic approach where all the basic geometries are there and the derived geometries can be obtained from the existing datasets. However, it is very flexible, and you can develop your own geometries in it as well. Derived ones are easy, but if you want to make a completely new type of geometry, you have to go to the source code and make the changes there. Again, the data input is as simple as an Excel file. We have an HDF file viewer embedded into the user control center; you can simply type the data in, as you would in an Excel file, or copy and paste the data from an Excel file. Usually we get lots of data in the form of Excel files when we are doing a benchmark exercise, and you can simply copy and paste that data in. I will explain later what kind of data we can put in here, but you can already see that, as in this case of a sodium-cooled fast reactor, you can define the pitch, the wire diameter, the wire angle, the number of pins, and the width and thickness of the can wall. There are different parameters that are already predefined, and you can just change them as per the requirements of your model.
As an example, to make it easier for you to understand, I will take the new CRP that we are starting on the China Experimental Fast Reactor (CEFR). I already tried to make the model in this toolkit as a first step before starting the CRP; let's say I call it pre-CRP, when we got the technical specifications from China. As part of the review process, we have to go through the specifications to make sure they are correct. So what Vladimir suggested was that I should try to build this reactor in our data storage toolkit and see if there are mismatches in the geometry or if something is missing, and we were indeed able to find some. We stored all the available data in this digital nuclear reactor storage toolkit. We evaluated the available data provided by CIAE, which is the primary organization leading the CRP, and then we requested CIAE to provide more data and clarifications for the benchmark exercise. It was very useful work, and once all the data is collected in the storage toolkit, we can distribute it to interested member states for their use. Post-CRP, as I said, the main vision is that we preserve the data and distribute it as requested by interested member states. That's the long-term vision and the long-term goal of the data storage toolkit. So let me just show you this example, which will also help clarify how the toolkit actually works. Here is a picture of what we got in the form of, let's say, paper specifications: the core specification, the dimensions of the assembly and fuel pins, a simplified structure of the assembly as well, the different types of subassembly, and how they were defined in the specification documents. Then there was some geometrical data which was missing.
So that data was derived, and it was not clear whether some of the geometry had expanded because of the increase in temperature or not, so this was also a very useful exercise to go through and understand. Then, as step one, I used this subassembly geometry data and created the setup files with the simple user interface that is there. First, I defined the axial distribution. Here you can see that all the axial heights are defined for the different subassembly types, and you can define any type you like. In this case there was a gas plenum, then a fertile part, then a fissile part, then again a fertile part, and then stainless steel parts, so I defined the different axial heights accordingly. Then, as step two, you can define each segment: how many pins there are, the can wall width, and everything else for each axial segment, let's say. As step three, we can define the pin geometry and materials. Here the geometry and the material for the pin are defined: what the pellet is, and things like that. After this is done, the geometry that was given to us in the specification documentation was converted into this, which makes it much clearer how the geometry looks, how the different segments sit in one particular subassembly, and what the structure of each segment of the subassembly is. The fuel pin structure and material are also defined via the setup file. Currently we have hexagonal, cylindrical, and rectangular geometries in the available templates, and the isotopic composition of all the materials can also be defined using this toolkit. So, just to give you an idea: the specification geometry given to us in the specification document was converted into this, which makes it much easier to understand the structure and geometry of the core.
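The kind of specification review described here can be sketched in a few lines of Python. The segment names and heights below are made-up illustrative numbers, not the actual CEFR data:

```python
# Hypothetical axial segments of one subassembly type (heights in cm);
# names and values are illustrative, not the real specification data.
segments = {
    "gas_plenum":      45.0,
    "lower_fertile":   20.0,
    "fissile":         45.0,
    "upper_fertile":   20.0,
    "steel_reflector": 30.0,
}

def heights_consistent(segments, stated_total, tol=1e-6):
    """Flag a mismatch between the sum of the axial segment heights
    and the total assembly height stated in the specification."""
    return abs(sum(segments.values()) - stated_total) <= tol
```

With a stated total of 160.0 cm the check passes; a typo in any one segment height makes it fail, which is exactly the kind of mismatch that surfaced while reviewing the specifications.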
It also makes it very easy to see if there is any mismatch in the axial levels of the different subassemblies. For example, this one shows that the control rod is slightly inserted, which raises the question: maybe at startup, or at this particular time, the control rods were inserted; should we do the modeling based on that, or is it an error? These are the kinds of things that help to clarify matters before starting the CRP. Just to give you an example again: when I was doing this exercise, I was able to understand how the different segments in the fuel assemblies are defined and what the structure is, and I was also able to find a small mismatch, which is now corrected. So this helped a little in the improvement, or evaluation, of the data that was given to us. And this is important, because if we make an error at the start of the CRP, the results might be completely different. It also helps in another way: when you take the data from files or, let's say, text files and try to make your model, and for a CRP where we have 25 to 30 organizations participating, there might be errors in the way each of them interprets the data. This reduces, let's say, the uncertainty, or the user error, related to the fact that the input data could be interpreted differently by different organizations. It removes one source of uncertainty if we provide the data in a digital format where everybody uses the same dataset to drive their calculations. However, we also have to make sure that this dataset is accurate. As step two, we can arrange all the assemblies into the core to create the layout. Here you can see a simple structure; I will explain in the next slide how the whole core map can be created.
Once we have all the assemblies defined in the data storage toolkit, we can specify, as you can see here, a ring number and a count of how many assemblies are in that ring. How do we do that? You can see here that the black number gives the ring number; as most of you who have been doing these simulations or this modeling for fast reactors know, we define rings as each successive outer layer of assemblies. Then within each ring, you can number the positions. So let's say, if I am here, I define this ring as ring one and the assemblies as 1-0, 1-1, 1-2, 1-3, 1-4, 1-5. If a ring has 20 assemblies, I start counting from here and keep going, and the next ring's assemblies follow the same structure. So it's very easy: through only one file, if you know the count of the assemblies, you can just keep giving that number. For example, here, you just give the counts, from which ring and how many assemblies, and it will automatically create the whole core map. Again, this helps in visualization and also reduces the uncertainty, or the errors you can make in input preparation, because now you know the exact position where you have modeled each assembly. This slide shows how the complete core map was created. Each and every subassembly was defined, through the axial lengths and its particular geometry, and eventually I was able to create at least the inner core of the reactor. The toolkit also gives the possibility of visualization in the axial direction, in north-south or east-west cuts, the xz- or yz-plane as you might like to call them, and it also helps in understanding the different elevations. It was very interesting to see the layout of Phénix; we also already did this exercise with Phénix. One of our interns from France was converting the data from the Phénix end-of-life test CRP into a digital storage toolkit, and that is almost done.
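The ring-and-count bookkeeping described above can be sketched as follows. In a hexagonal lattice the central ring holds one assembly and ring n (counting the center as ring 1) holds 6·(n−1) positions; the function names are hypothetical, not the toolkit's API:

```python
def ring_size(ring):
    # central ring (ring 1) has one assembly; ring n has 6 * (n - 1)
    return 1 if ring == 1 else 6 * (ring - 1)

def build_core_map(ring_types):
    """Expand {ring_number: subassembly_type} into a per-position map
    keyed by (ring, index), mimicking the 'ring + count' input file."""
    core = {}
    for ring, sa_type in ring_types.items():
        for idx in range(1, ring_size(ring) + 1):
            core[(ring, idx)] = sa_type
    return core

# illustrative three-ring layout
core = build_core_map({1: "fissile", 2: "fissile", 3: "blanket"})
```

Rings 1 to 3 give 1 + 6 + 12 = 19 positions, the familiar 19-assembly inner layout, so a single short file of ring counts and types is enough to regenerate the whole map.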
So we have that data in digital format, and you will see that the core layout of Phénix is very interesting. There are additional features, as I said, such as meshing: you can create a mesh, which can then be used to calculate expansions and other quantities as you like, or to connect to different codes. This is just an example showing how the expansion was calculated and the mesh was refined according to the new expansion. So, in summary, the digital nuclear reactor is an easy-to-use data storage toolkit that can be used to input all the necessary reactor data for a benchmark exercise. Usually we give it to the CRP participants, who are free to use the data. Also, in case you are interested in learning more about the toolkit, I'll be happy to interact with you. The interfaces to connect different codes can be developed by the user. The tool can be used to assess whether enough data is available to carry out the CRP, thus assuring the availability of data before the kickoff meeting; usually we have a kickoff meeting, and this helps in assuring that we have enough data for it. It will reduce the risk of incorrect data input by the participants. It will lead to long-term preservation of benchmark data, and later of the results as well, once the post-processing module is developed. There is an idea to develop a post-processing module where we collect all the results and can also visualize the results calculated in the benchmark exercise. The data can then be distributed to interested member states after the CRP is completed, along with the technical document. One of the ideas is that once we publish the technical document, we also distribute this toolkit along with the TECDOC. It can also be used for educational purposes; we are still exploring the option of doing some simple calculations and simulations with it.
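The simple thermal expansion calculation mentioned earlier comes down to a linear expansion law. In this sketch the coefficient is a typical value for steel and is an assumption, not data from the toolkit:

```python
def expanded_length(length_ref, alpha, temp, temp_ref=20.0):
    """Linear thermal expansion: L = L0 * (1 + alpha * (T - T_ref)),
    with alpha the linear expansion coefficient in 1/K."""
    return length_ref * (1.0 + alpha * (temp - temp_ref))

# e.g. a 100.0 cm steel structure (alpha ~ 1.7e-5 /K, an assumed value)
# heated from 20 C to 520 C grows by about 0.85 cm
hot_length = expanded_length(100.0, 1.7e-5, 520.0)
```

Applying this to every stored dimension and then re-meshing the expanded geometry is the kind of service utility the toolkit provides on top of the database.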
That was also the idea, to have one exercise based on this toolkit, but I think I was a bit lazy and didn't prepare the exercise, and otherwise it would have been a lot of work for you guys, so you should be happy about that. Future work: we are going to develop interfaces to couple different codes. My idea is to at least make an interface for the Serpent code, which is now a very widely used code for neutronics calculations, and then perform simple thermal expansion calculations based on the thermal expansion data provided for the CEFR CRP or the FFTF CRP, which we'll be conducting. We will develop other modules for post-processing and make the necessary changes to the graphical interface for a better user experience; of course it's in development mode, so there are some bugs and errors that we have to fix. The idea is also to do a similar exercise for the FFTF ULOF CRP that we are starting soon. I would also like to acknowledge KIT for their advice in the development of this toolkit, and that's all from my side. Thank you, thank you very much.