My name is Pubale Mukherjee. I will be talking about a case study on the application of the Weather Research and Forecasting (WRF) model on public cloud infrastructure, namely AWS. WRF is the most popular numerical weather prediction model available right now, and it is used extensively for operational forecasting and for climate and weather research. But being an earth-science model, it is inherently very computationally intensive and very data intensive, especially when you try to run the model at a very high spatial resolution. Conventionally, for the past few decades, such earth-science models, including WRF, have been simulated on conventional HPC platforms. These HPC platforms come with their own set of limitations. First of all, they are very expensive in terms of both setup cost and maintenance cost. At the same time, when you submit a simulation job it goes into a queue, and the time taken to complete a single simulation is also very high when you plan to run at a very high spatial resolution. This is exactly where public cloud infrastructure comes into play. On a public cloud such as AWS, the user only has to pay for the time period over which the simulation actually runs, so the cost of running the simulation goes down, and there is no maintenance cost. This gives the scientific community, or rather researchers, the feasibility to simulate the model at a very high resolution at a much lower cost, which is a big advantage for academic groups facing a funding crunch. The proposed problem statement here is a comparative analysis between two simulations: one using the conventional method, where the model is given one month of spin-up time, and the other using a proposed window-splitting technique.
Now, what happens when you plan to run the WRF model for one month? Suppose you are planning to simulate July 2018. You have to initiate the run from June 2018 and run the model continuously through July 2018. The initial one month of the simulation is rejected as spin-up, which is the time given to the model to stabilize, so you can only use the simulation results from July 2018 onwards. A lot of compute time is therefore spent just giving the model a reasonable amount of spin-up time. What we are proposing is a window-splitting technique, which essentially means that the entire simulation period, for example the two-month period from June to July 2018, is divided into twelve 10-day windows, with each window having a 50% overlap with the previous one. For each window, the user has to launch separate EC2 Spot Instances, which adds to the manual labor. To ease the entire method, we have automated the whole process of instance launching through a recently launched technology, Terraform. The Terraform script launches the instance, and within it there are instructions to automate the installation of the model, run the model, and ultimately redirect the output of the entire simulation to an S3 bucket. The result is that this two-month simulation, which would conventionally take about seven to ten days on an HPC system depending on the time period, can be reduced to only about three days with such code optimization and fine-tuning; of course, the model has to be set up with the correct parameterization schemes. That is all I had to say. Thank you for listening, and if you have any questions, please direct them to me.
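For the automation step, a minimal Terraform sketch of the kind of resource involved is shown below. This is an assumption-laden illustration, not the actual script from the work: the AMI ID, instance type, helper script names, and bucket name are all placeholders; the talk only states that the script launches a Spot Instance, installs and runs WRF, and redirects the output to S3:

```hcl
# Hypothetical sketch: one Spot Instance per simulation window.
resource "aws_spot_instance_request" "wrf_window" {
  ami                  = "ami-xxxxxxxx"   # placeholder: a Linux AMI with build tools
  instance_type        = "c5.18xlarge"    # assumption: a compute-optimized type
  spot_type            = "one-time"
  wait_for_fulfillment = true

  # user_data runs at boot: install WRF, run this window, push output to S3.
  user_data = <<-EOF
    #!/bin/bash
    ./install_wrf.sh   # hypothetical install script
    ./run_wrf.sh       # hypothetical script running the model for this window
    aws s3 cp wrfout/ s3://my-wrf-output/ --recursive   # bucket name is a placeholder
  EOF
}
```

Because Spot Instances can be interrupted, the 50% window overlap also gives the workflow some tolerance: a lost window's period is partly covered by its neighbors.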
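The window-splitting arithmetic described above can be sketched in a few lines of Python. This is only an illustration of the scheme from the talk (10-day windows, 50% overlap, i.e. a 5-day stride); the function name and interface are my own, not part of the actual workflow:

```python
from datetime import date, timedelta

def split_windows(start, end, window_days=10, overlap=0.5):
    """Split [start, end) into fixed-length windows, each overlapping
    the previous one by the given fraction (50% here = a 5-day stride)."""
    stride = timedelta(days=int(window_days * (1 - overlap)))
    length = timedelta(days=window_days)
    windows, t = [], start
    while True:
        w_end = min(t + length, end)       # clip the final window to the period end
        windows.append((t, w_end))
        if w_end >= end:                   # the period is fully covered
            break
        t += stride
    return windows

# The two-month period from the talk: 1 June - 31 July 2018
wins = split_windows(date(2018, 6, 1), date(2018, 8, 1))
print(len(wins))  # 12 windows, matching the talk
```

Each of the twelve (start, end) pairs would then correspond to one independent Spot Instance run, with the first half of each window discarded as spin-up.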
Thank you.