Hello, my name is Nicolás Cruz, and I will be the voice behind this video abstract of the paper "Online Optimal Experimental Redesign in Robotic Parallel Fed-Batch Cultivation Facilities".

Liquid handling stations in experimental facilities can now perform a large number of very complex experiments, including fed-batch cultivations and chemostats. The question now is: which experiments should we carry out in order to generate the highest amount of information, so that we achieve a fast and cheap characterization of our strains? This is essential for high-throughput bioprocess development.

Our approach is to use the information we have in microkinetic growth models to obtain tractable mechanistic models, which are continuously fitted to the data generated by the robot through recursive parameter estimation. We then use this model to design the next steps of the experiment with methods for optimal experimental redesign. Because of the number of variables the optimization program has to handle, we use a moving-horizon approach for the design of the experiment; this is why we call it sliding-window optimal experimental redesign, with the acronym SWORD.

As a case study, we ran fed-batch cultivations in eight parallel mini-bioreactors with four different design strategies. Our strain was Escherichia coli wild type W3110, and the experiment was six hours long. This results in a dynamic optimization problem with 288 decision variables; the model we were using was an ordinary differential equation system with six state variables. Altogether we had 25 parameters to fit, since each reactor has a different kLa, which is also process dependent.
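To make the sliding-window idea concrete, here is a minimal sketch of one redesign loop. It is not the paper's six-state model with 25 parameters: it uses a hypothetical two-state Monod growth model with two parameters, a single constant feed rate per window as the only decision variable, and a finite-difference Fisher information matrix for D-optimal selection. All names and numbers are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

Y_XS = 0.5  # assumed biomass-on-substrate yield (illustrative)

def rhs(t, y, mu_max, K_S, feed):
    # Toy Monod kinetics: X = biomass, S = substrate; constant feed rate.
    X, S = y
    mu = mu_max * S / (K_S + S)
    return [mu * X, -mu * X / Y_XS + feed]

def simulate(theta, feed, y0, t_eval):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), y0,
                    args=(*theta, feed), t_eval=t_eval, rtol=1e-8)
    return sol.y  # rows: X, S

def fim(theta, feed, y0, t_eval, eps=1e-4):
    # Fisher information of the biomass measurements,
    # via finite-difference parameter sensitivities.
    base = simulate(theta, feed, y0, t_eval)[0]
    J = np.empty((len(t_eval), len(theta)))
    for i in range(len(theta)):
        tp = np.array(theta, dtype=float)
        tp[i] += eps
        J[:, i] = (simulate(tp, feed, y0, t_eval)[0] - base) / eps
    return J.T @ J

rng = np.random.default_rng(0)
theta_true = [0.6, 0.3]           # "plant" parameters (unknown in practice)
theta_hat = np.array([0.4, 0.5])  # deliberately wrong initial estimates
y0 = np.array([0.1, 2.0])         # initial biomass and substrate
candidates = [0.0, 0.2, 0.5, 1.0]  # candidate feed rates (decision variables)

for w in range(3):  # three one-hour windows
    t_win = np.linspace(w, w + 1, 5)
    # D-optimal redesign for the upcoming window, using current estimates
    feed = max(candidates,
               key=lambda u: np.linalg.det(fim(theta_hat, u, y0, t_win)))
    # "Measure" the plant with 1 % multiplicative noise
    traj = simulate(theta_true, feed, y0, t_win)
    y_meas = traj[0] * (1 + 0.01 * rng.standard_normal(len(t_win)))
    # Recursive parameter estimation on the newest window of data
    res = least_squares(lambda th: simulate(th, feed, y0, t_win)[0] - y_meas,
                        theta_hat, bounds=([0.01, 0.01], [2.0, 2.0]))
    theta_hat = res.x
    y0 = traj[:, -1]  # next window starts from the current plant state

print(np.round(theta_hat, 3))
```

The key structural point carried over from SWORD is that design and estimation alternate: each window is designed with the latest estimates, and each new batch of measurements refines those estimates before the next design.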
Since we know that, at least at the beginning of the experiment, we will not have enough data to fit all the parameters of our model, we need to regularize the parameter estimation problem. We do this with the subset selection method published by López and Barz, which ensures that at every step we solve a well-conditioned optimization problem.

Once the parameter estimation has been computed, we use this model, which is now closer to the real system, to redesign the next steps of the experiment. In this way we use the data as it is being generated to adapt our strategy and get closer to the optimal experiment. The result is a computer-based framework that not only finds the most informative experiments in parallel dynamic systems, but also learns from the data being generated and adapts itself to the system.

Here we can see the monitoring station. Without going into much detail: at first the model predictions (the lines) do not fit the data; we perform a parameter estimation; the prediction improves; and based on these better predictions the next steps are selected.

We were also able to show that, as expected, compared to the design that would have been carried out with the initial parameter estimates, which we know are wrong, the variance of our parameters is much lower thanks to this constant redesign during the experiment. In other words, since we adapted the experiment using the measurements as they were generated, we ended up with a data set with higher information content, which allowed a more accurate identification of the important dynamic characteristics of our strain.

Finally, I would like to thank all the people, institutions, and companies that made this project possible, and of course you for watching this video, hopefully reading the paper, and maybe contacting us with interesting questions and ideas. Thank you very much.
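As a closing technical footnote on the subset-selection regularization mentioned above: the idea is to fit only the parameters whose sensitivities keep the estimation problem well conditioned, and freeze the rest. The following is a minimal greedy sketch of that idea on a synthetic sensitivity matrix, not the exact published algorithm; the matrix, the conditioning threshold, and the collinear column are all illustrative assumptions.

```python
import numpy as np

# Synthetic sensitivity matrix: rows = measurements, columns = parameters.
# In the online setting this comes from linearizing the model at the
# current parameter estimates.
rng = np.random.default_rng(1)
S = rng.standard_normal((30, 5))
# Make parameter 3 nearly collinear with parameter 0 -> not identifiable
S[:, 3] = 2.0 * S[:, 0] + 1e-6 * rng.standard_normal(30)

def select_subset(S, cond_max=1000.0):
    """Greedy forward selection: repeatedly add the parameter whose
    sensitivity column keeps the reduced problem best conditioned,
    and stop before the condition number exceeds cond_max."""
    remaining = list(range(S.shape[1]))
    chosen = []
    while remaining:
        best, best_cond = None, np.inf
        for j in remaining:
            c = np.linalg.cond(S[:, chosen + [j]])
            if c < best_cond:
                best, best_cond = j, c
        if best_cond > cond_max:
            break  # every remaining parameter would ill-condition the fit
        chosen.append(best)
        remaining.remove(best)
    return chosen

subset = select_subset(S)
print(subset)  # the collinear parameter 3 is left out of the fitted subset
```

Only the parameters in `subset` would then be estimated in the current step; the rest keep their previous values, which is what keeps the optimization problem well conditioned even when data are still scarce.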