our future vision. So that's why I'm working with the team at the moment. I'll also do a call-out for the user survey; that's an important way for us to get feedback, and it's still open at the moment. Joe, when does it finish? At the end of next week, the 4th of December. So everyone should have received an email asking them to complete it, and reminders are going out every week if you haven't completed it yet. Your feedback is really important and valuable; it lets us shape the Nectar Cloud services to better meet your needs in future. Excellent, thank you. I won't speak for too long, but yes, I'm very excited to have joined the ARDC to work with the Nectar Core Services team and the Cloud program team, and I'm very keen for your feedback. Thank you.

Okay, thanks Carmel. I'll get started. So, just a very brief background, and apologies in advance if you saw my talk at the eResearch conference, because this is a fairly condensed version of that. The Nectar program, an NCRIS program, was started in 2010. The Nectar Research Cloud came out of that: it started in 2011 and went into production in 2012. In 2016 the National Research Infrastructure Roadmap called for a merger of the Nectar and RDS programs, and that happened in 2018 with the formation of the Australian Research Data Commons, or ARDC. Hang on a minute; sorry, I think the slide's gone ahead without me.

Anyway, it's been quite successful, and the Research Cloud gets a lot of usage. We had over 1,700 active projects last year, 8,000 virtual machines, and 4,000 total projects since the start of the cloud. It hosts many services and supports many of the NCRIS capabilities and other platforms. We had almost 3,000 active users last year (that is, users who ran a virtual machine) from almost all universities in Australia. We're supporting over 300 ARC and NHMRC grants and a few hundred other research grants, across all 22 Fields of Research divisions and almost all four-digit FoR codes. So it's a very broad user base.

So, the new stuff. The ARDC strategy has been, first, to refresh the cloud: we're investing four and a half million dollars to refresh the Nectar Research Cloud, which is the first new capital investment we've funded for quite some time, and our node partners are providing at least matching co-investment to operate that infrastructure. We'll then be providing another three million dollars a year for the next three years for new infrastructure and services. We want to expand the capacity of the cloud, particularly with a focus on supporting ARDC priority projects, including the platforms projects (a new round of platforms projects was announced today, if you want to look at those), and also to prioritise support for innovative, leading-edge infrastructure such as GPUs or very large memory servers, which are quite expensive.

So where are we? We're in the middle of the Research Cloud refresh now. The University of Melbourne refresh is finished, the new TPAC infrastructure will be available next month, Monash comes after that, and the Intersect refresh has been delayed a bit, to May. All of this has slipped somewhat due to COVID, as you might imagine; we were hoping to have it all up and running by now. The NCI node will also be decommissioned at the end of the year.
They're going to be focusing on their own cloud infrastructure. So we'll have a significant amount of capacity. The new capacity listed there is solely to support the nationally prioritised project allocations; the nodes themselves are of course also investing in infrastructure to support their locally prioritised projects. As well as that refresh of existing infrastructure, we're putting in $2 million this year for new infrastructure, primarily to support the platforms projects. We've gathered requirements from those projects, and much of the infrastructure will be the leading-edge, expensive equipment: GPUs, large memory servers and so on, plus some additional standard capacity to make sure those platforms have the capacity they need. We're aiming to have that deployed before June next year. And as I said, we have additional investment for the next couple of years after that for more infrastructure and services.

Other things we'll be looking to do: we have a new skills specialist position starting in January next year, and we'll be looking to improve our tutorials, training and online information, and to run webinars and the like in future. We're aiming to continue this user forum, and to hold other, more regular meetings where the technical people using the cloud can talk to the ARDC core services staff and node staff. This financial year we're looking to deploy some new services: virtual desktop infrastructure; JupyterHub, to run Jupyter notebooks as a service; a project to look at integration with commercial cloud; and some general improvements to the Dashboard, the Murano app store and so on. One important thing, which we always do, is to incorporate new functionality coming along from the OpenStack projects. Things we're looking at right now include preemptible or spot instances, reservations for virtual machines, and the new support arriving for accelerators like GPUs and how those are managed in the cloud. So that's all happening over the next few months of this financial year. I think that's about it for me, so I'll stop sharing my screen and take any questions people might have.

Thanks, Paul. People can either unmute themselves and ask a question or put it in the chat; I forgot to say that we can collect questions in the chat as we go and handle them at the end as well. If there are any quick questions for Paul, we can do them now; otherwise, if you think of one, put it in the chat and we'll get to it a bit later. And there may be some that arise during the breakouts, Paul. So thank you for that.

What we'll do now is ask Avi and James to talk to us. Avi is the technical lead and solutions architect for the EcoCommons platform, and James works with him as web designer for the EcoCommons platform, both at Griffith University. So I'll hand over to you guys. Thank you.

All right, thanks, Riz. Hi everybody, my name is Avi, and as Riz said, I'm the tech lead on the EcoCommons platform. With me today to present a little about how we use Nectar is James Lee, who is a web developer and designer with the EcoCommons platform as well. Today we're going to go through two main parts.
One, how we currently use Nectar with BCCVL, the existing service we're responsible for that already runs on Nectar; and two, how we're setting up our infrastructure on Nectar for the new EcoCommons platform, which is in development at the moment.

Thanks, Avi. So, an introduction to BCCVL, if you're not aware of what it is: it's the Biodiversity and Climate Change Virtual Laboratory, and its goal is to enable researchers to investigate, explore and accelerate biodiversity and climate change research. It does this by providing a single online environment for accessing and visualising climate and environmental datasets, and for running biodiversity and climate change modelling on behalf of users, without the users having to write any code at all. All they have to do is point, click, fill a bit of information into a form, and off it goes. BCCVL has been happily operating on Nectar since 2014.

The key parts of BCCVL are the application itself, its workers, and the data we have to handle and capture. The application is the website itself, plus a data visualisation service: things like a map server, which renders all the geographical data we have. The workers are responsible for running the modelling experiments; they tend to have fairly high CPU and memory requirements, and they can spend anywhere from minutes to hours to days on the various models. As for data, we accept both user-uploaded data and data we import from third-party services, like our partners at the ALA, plus the modelling outputs that the experiments generate.

So how does Nectar meet the needs of BCCVL? We currently use three major parts of Nectar: virtual machines, data volumes, and the object store. Our virtual machines run the application, and a number of them are the individual workers. Most of them are RAM-optimised instances, because of those high RAM requirements, and at the moment they're working pretty well. We've recently migrated them from NCI, which, as explained before, is shutting down, to Tasmania, onto the TPAC node. On our volumes we store application data and also serve scratch space to each worker, roughly 100 gigabytes per worker at the moment; the scratch space temporarily holds data while the models execute. Finally, all the data we store long term goes into the object store: the datasets and the modelling outputs from the experiments. At the moment we're holding over three terabytes of data on behalf of users, plus the datasets we're responsible for. All of that supports more than 6,000 users from all over the world, and over 110,000 experiments since 2014. So we think Nectar has supported BCCVL quite well over the last six years. Now I'll hand over to Avi, who will describe how BCCVL is evolving into the new EcoCommons project.

Perfect, thanks, James. So, as James just mentioned, we've just moved BCCVL from the NCI node onto the TPAC node; that was a necessity, since the NCI node was shutting down.
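To make the pattern James described concrete (RAM-optimised worker VMs, a scratch volume per worker, and long-term results in the object store), here is a minimal sketch using the OpenStack SDK. The cloud name, flavor, image, keypair and file paths are illustrative assumptions, not BCCVL's actual configuration:

```python
import openstack

# Connect using a "nectar" entry in clouds.yaml (the cloud name is an assumption).
conn = openstack.connect(cloud="nectar")

# Launch a RAM-optimised worker; the flavor and image names are hypothetical.
server = conn.create_server(
    name="bccvl-worker-01",
    image="NeCTAR Ubuntu 20.04 LTS (Focal) amd64",
    flavor="r3.xlarge",          # stand-in for a RAM-optimised flavor
    key_name="my-keypair",       # assumed pre-existing keypair
    wait=True,
)

# ~100 GB scratch volume to hold intermediate data while a model executes.
scratch = conn.create_volume(size=100, name="bccvl-worker-01-scratch")
conn.attach_volume(server, scratch)

# Long-term outputs go to the object store.
conn.create_container("model-outputs")
conn.create_object(
    "model-outputs",
    "experiment-1234/result.tif",
    filename="/mnt/scratch/result.tif",
)
```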
But in the process we got the chance to make some improvements to the overall structure of the stack, containerising all of its elements. Everything on BCCVL now runs in Docker containers, in a somewhat more manageable format, before we actually move the features into EcoCommons. James, if you want to go to the next slide.

What we're developing with EcoCommons is what we want to be the portal of choice for researchers analysing and modelling ecological and environmental problems. We want to take all the things that have been working in BCCVL, and in addition ecocloud, and bring that functionality, in a more generic form, into one platform, so that we have more scope for reuse and for interoperation with other services and platforms, and also the chance to expand to different research domains. BCCVL, for example, has focused on the climate change sector, but there are similar modelling needs, the need to run experiments in a similar way, in other research domains such as agriculture and biosecurity, which has just been accepted as part of the new platforms funding round. So it's really exciting to see how we can integrate with these new research domains, using all the good parts of what has worked for BCCVL so far. Next slide, James.

You can think of the role of the EcoCommons platform like this. On the left-hand side you have model development and testing. Currently we have ecocloud, which also runs on Nectar (it's been running for probably about three years now, on Kubernetes inside Nectar). It gives researchers a pre-set-up environment with a command-line interface, where they can set up, run and fine-tune their models and experiments before running them on the full data, and it lets them do this on cloud infrastructure instead of on their own computers. So on the left people can explore the data; that can then evolve into more trusted models and data platforms, such as BCCVL: a virtual laboratory for a specific research domain, with models that are broadly agreed on for that domain, and with access to datasets and data sources that might otherwise be really difficult to get to. The whole research community in that domain can use it, through a very easy interface for researchers to do their modelling.

The final step in the pipeline, as you can see, is decision support. We want to take what ecocloud and BCCVL have done for research to the next level and offer a platform where government agencies, and people doing policy and decision making, have designated portals for making those decisions. The CSDM, the Collaborative Species Distribution Modelling portal, is a collaboration with different government departments that we developed as a proof of concept: users from those agencies can go in, share potentially sensitive data with each other, run experiments and share the results with each other.
The ability to build portals like this on top of the infrastructure we're building allows for better collaboration. At the moment, for example, biosecurity researchers or decision makers in South Australia might use a different model from their counterparts in Queensland to determine the same thing, simply because there aren't enough tools for collaboration. When you have a portal that you can deploy at a national level, with individual models that are trusted and can be reused, you can ensure consistency in modelling the same kinds of experiments (species distribution models, risk mapping and so on) across the different states, the different decision makers and the different government groups, and you get a better overall decision-making process: similar data, on a secure platform that can hold sensitive data, run models on it, and share the results within a selected user group. So we're taking ecocloud and BCCVL into a new platform that's more generic, reusing a lot of the underlying functionality of those systems while growing the user base.

If we go to the next slide: we'll be using Nectar much as James described for BCCVL. We'll use virtual machines for the application nodes and for the job workers that do the processing, and on our allocation we have the RAM-optimised instances, because a lot of the experiments do tend to require a lot of memory; some experiments may have even higher memory needs than a normal virtual machine can provide, and I'll touch on that a little later when I go through the different parts of the platform. In addition to the virtual machines we use volumes, which, the same as for BCCVL, store application data and serve as scratch space for the worker machines, since they can generate a large amount of data while processing an experiment and need that extra space. The object store is used for modelling outputs, for datasets, and also for user data. That last part is functionality from ecocloud: each user has their own account with an allocation (at the moment, on ecocloud, it's 10 gigabytes), and when a user spins up a new server, we sync their folder from the object store into a volume mounted into the machine, so they can access their files there.

The new thing with EcoCommons is that we're using Kubernetes to orchestrate all our containers. All our services are containerised and deployed onto Kubernetes clusters, using Nectar's Magnum service, which I assume Jake is going to talk about a little later in the tech talk. That lets us dynamically create new clusters, run separate clusters within our allocation, and easily deploy our services to the different Kubernetes clusters.
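For reference, here is a hedged sketch of what requesting one of those clusters from Magnum can look like through the OpenStack SDK's container-infrastructure module; the template name and node counts are assumptions, not EcoCommons' real configuration:

```python
import openstack

conn = openstack.connect(cloud="nectar")
coe = conn.container_infrastructure_management

# Pick an existing Magnum cluster template (the name here is hypothetical).
template = coe.find_cluster_template("kubernetes-v1.18")

# Ask Magnum to build a small Kubernetes cluster from that template.
cluster = coe.create_cluster(
    name="ecocommons-dev",
    cluster_template_id=template.id,
    master_count=1,
    node_count=3,
)
print("Cluster requested:", cluster.id)
```

Once the cluster is up, a kubeconfig can be fetched (for example with `openstack coe cluster config`), and everything after that is plain Kubernetes tooling.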
Next slide. So, as I mentioned, we use Kubernetes clusters to run the services. We use GitLab for all our code repositories, and GitLab CI as our continuous integration pipeline, where we run code testing and security testing and build the container images; I'll touch more on that in a following slide. At the moment we're also using GitLab's container registry, which we push our images to.

For deployment of our services we're currently using Flux CD. Flux is what's called a GitOps operator. To explain it simply: we install Flux in each of the clusters, and Flux then syncs automatically against a repository where we keep all the configuration files for the services we want deployed on that cluster. In the repository we can, for example, have dev, test and production folders, with the configuration files for each service under each of those folders, and when Flux detects a change it automatically deploys the changed image, if it's set up to. That means that for dev and test we can set it up to automatically pull the latest image and deploy it, while in production we can have a more controlled environment where we say: only deploy these specific versions of the services. So we have full control of what actually goes into the various clusters.
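As a toy illustration of the GitOps idea Flux implements for us (this is not how Flux itself is built, just the shape of the reconcile loop: pull the desired state from version control, then apply it; the paths and interval are assumptions):

```python
import subprocess
import time

REPO = "/srv/ecocommons-deploy"   # local clone of the config repository (hypothetical path)
ENV_DIR = "dev"                   # dev / test / production folder

def sync_once():
    # Pull the latest desired state from version control.
    subprocess.run(["git", "-C", REPO, "pull", "--ff-only"], check=True)
    # Apply the manifests for this environment; kubectl apply is idempotent,
    # so resources that have not changed are left alone.
    subprocess.run(["kubectl", "apply", "-f", f"{REPO}/{ENV_DIR}"], check=True)

while True:
    sync_once()
    time.sleep(300)  # a Flux-style five-minute reconcile interval
```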
All our own services are containerised using Docker. We're currently using Next.js and React to build our front-end application services, while the majority of the API services will be built with Python and Django on the back end. We've only just started development, so things might change and we may have to adopt additional technologies as needs arise, but that's a brief overview of the technologies we're using at the moment.

On the next slide I have a fairly high-level overview of the architecture, just to explain some of the core things we run in each cluster. The dotted square at the top is a Kubernetes cluster; think of it as one of three, for dev, test and prod. There are some core services you need just to keep the cluster running, things like certificate managers and monitoring services, and then some core platform services: a search index, a queue, a web client to browse the different services. The four boxes in the middle row are the four main parts of the platform's functionality.

First, the analysis playground. This is what ecocloud currently is; it will be renamed into this part. Think of it as what Paul mentioned previously, where Nectar is looking to add a Jupyter notebook service: this is our version of that, where a user can log in and get their own Jupyter notebook, their own server, up and running.

Then we have the function toolbox. Here we're taking, for example, all the different models people can run in BCCVL and porting them into individual functions. That means people can pick and choose which functions they want to use with their data, and it means we can amass a library, a function catalogue, so that the platform can be used in a much more generic way than for just one research domain.

The data explorer is where users explore the data in the system, the datasets that are available. We're going to have a curated set of datasets, where we pull in the data and generate additional metadata so people can browse them and see visualisations of the data, and we're going to pull in additional datasets through third-party connections, like the Knowledge Network and other external API connections, so that users can very easily discover additional datasets relevant to their research, along with information on how to use the data, how to download it, and how to use it when running their experiments. We also have one box there, the red dot, for user-uploaded data. Part of the EcoCommons platform will be a component built around a sensitive-data haven: users can upload some of their own data and have it managed separately. They can select the privacy and licensing settings on that data, and it can be completely private, shared with people they choose, or marked as public.

Then we have the result manager. This lets users look at the results from their existing experiments and jobs and then add layers on top from additional datasets. For example, you might have run a species distribution model and want to add further layers, such as infrastructure or planning datasets, so you can see their potential effects on top of the model and generate additional results on top of your experiments. The whole area of results and sharing, letting users share how they got a result, will be really important: a user will be able to share "I used this dataset, these are the parameters, and this is the result it created", and other people can then reuse that recipe, tweak parameters, change the data source or make other changes, rerun it, and continue the previous researcher's work. That can be a really great feature for giving people a continuous flow in their research and for improving on the models and the work done by different users.

At the bottom of the diagram we have the job manager, which is where the function execution, the compute, actually happens. We've separated it out from the standard platform cluster purely for scalability: we want the ability to add multiple additional clusters where these jobs can run. That could be a cluster of virtual machines on Nectar, or potentially somewhere else; it might even be an allocation on HPC clusters, although that part, especially the HPC part, hasn't been completely figured out yet. But because it's separated out, we at least have the ability to scale the platform's capacity for heavy processing jobs and experiments without slowing the whole platform down for its users. By taking that responsibility away from the application platform, we ensure the platform itself stays responsive and quick to use and doesn't consume too many resources; the heavy lifting is done on the compute side.

If you go to the next slide, a little about how we work, our development pipeline. Starting from the left, everything of course begins with version control: a developer commits some code, it goes back into the code repository, and then GitLab CI, our continuous integration, runs a range of jobs on that code, things like static analysis, security tests, and unit and functional tests. If any of this fails, it goes back to the developer, and they can try again until they get a pass. Once it builds successfully, with all tests passing, we build and push a container image to the container registry. And, as I mentioned before with Flux, for deployment we have three clusters, dev, staging and production, with a Flux controller in each. They periodically check, every five minutes, whether there are any changes in version control that need to be applied; if there are, Flux fetches from the container registry and deploys those changes automatically. That's how we automate as much as possible of the whole development cycle, using continuous integration and continuous deployment on Nectar.
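A rough local analogue of those pipeline stages might look like the following; the specific tools named here (flake8, pytest, docker) and the registry path are illustrative assumptions, not the project's actual CI configuration:

```python
import subprocess
import sys

# Hypothetical image name; a real pipeline would derive the tag from the commit.
IMAGE = "registry.gitlab.com/example/ecocommons-api:latest"

STAGES = [
    ["flake8", "."],                        # static analysis
    ["pytest", "tests/"],                   # unit and functional tests
    ["docker", "build", "-t", IMAGE, "."],  # build the container image
    ["docker", "push", IMAGE],              # push it to the container registry
]

for cmd in STAGES:
    print("running:", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        # A failed stage sends the work back to the developer, as described above.
        sys.exit(f"stage failed: {' '.join(cmd)}")
```

In the real setup these stages live in a GitLab CI configuration and run on GitLab runners; the point is only the ordering: analyse, test, build, push, and let Flux take it from there.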
So far we've been using Nectar's Magnum service, and so far it's looking good; it looks like everything's working. There are of course always things that are a little trickier to do in OpenStack than in a public cloud, but I have to say the Nectar tech support team has been really responsive in helping us with all the queries we've had about getting things to work. We're really appreciative of the support we're getting, and we look forward to using Nectar for EcoCommons in the years to come.

All right, thank you both, thanks so much. We're running a little over time, so if anyone has a really burning question for Avi, can I suggest you put it in the chat and we come back to it. Thank you to you and to James for your talk. In the interest of time I think we'll press straight on; sorry to do that to everybody, but we'll have a bit of time later to catch up. Luca, could I ask you to tell us about your application of Nectar?

Yeah, as I mentioned before, I have a problem with sharing my screen on the iPad, so Jake is going to provide that support.

I'm sorry to press on, Avi and James; that's in no way a reflection of the quality of your presentation, but we'll just keep moving and come back to the questions. Thanks, Luca.

Okay, so without further ado, I will take you on a journey through how we have used Nectar over the years, at different levels of sophistication. This will be a tale of evolution. Oops, yeah, thanks; sorry, you're a bit too eager. Do you want to go back to the previous slide? Okay, yeah, thanks, leave it there. So: we started with manual provisioning of VMs and ended up with Kubernetes clusters, over the last nine years or so. And in keeping with the evolutionary theme of this presentation, here are the different steps of the evolution of whales, from terrestrial mammals that lived in swamps to the ocean-going creatures we all know and love. Do you mind going to the next one? Thanks.

A few words about myself: I've got 40 years of experience in software development, mainly geospatial, and I've been contributing for the last 20 years or so to different open source projects; I've written some modules for GeoTools, GeoServer, things like that. I've travelled the world and worked in a few places, as you may see, and eight years ago I was stranded on Australian shores, working at the Melbourne eResearch Group in the School of Engineering and holding the position of data architect on the AURIN project, which is what I mainly do.
So, next slide please. A few words about AURIN: the Australian Urban Research Infrastructure Network is a project that was established in 2011, with a mission to provide urban researchers with data and analytical tools. Our main product is the portal: from the comfort of your web browser you can shop for data from heterogeneous data sources (the ABS, Landgate and a bunch of others), pick up your data, analyse it with the analytical tools we provide, store your results, and map them, chart them, things like that. We've been in operation for about nine years. We have around 200 users daily, though that clearly changes from day to day depending on the ebb and flow of the academic year; we peaked this year at 6,000 users in one day, but usually we have far fewer than that. Next slide please.

So, we started eight years ago by deploying single VMs for very specific tasks, using the Nectar dashboard. We didn't use Docker at the time, and we didn't cluster the DBMS. It was all very simple; simpler times. Since Nectar back then was somewhat unreliable, we also had our own hardware, co-hosted in the same data centres as Nectar, with VMware as the virtualisation software. So we still used VMs, but on our own kit: not a cloud yet, but a kind of private cloud. Next slide please.

Then, about seven years ago or so, we started working with Docker containers. Again everything was done manually through the dashboard, and we deployed the Docker containers manually. Next, we dabbled in an ad hoc, internally developed client that used the OpenStack and Docker APIs to automate this process. In hindsight that was a bit of a dead end technologically, but it did what it was supposed to do, and I deployed quite a few applications with it. Other tools were maturing in the meantime, though, so keeping an internally developed client didn't make much sense after the first few years. Next please.

So we moved to Docker Swarm. Obviously not the main AURIN portal, but some services were, and still are, run on Docker Swarm, like our API, which gives you access to AURIN datasets as well. Provisioning was done with Heat templates, so it was no longer a manual process, and deployment of applications onto Docker Swarm was done using Compose files and a mix of shell scripts. The internally developed client was discontinued. Next.

Then, one year ago, along came Magnum. Some of the provisioning was still done with Heat templates, but some used Magnum to set up Kubernetes, with shell scripts and YAML files as usual. We started experimenting with a clustered database on Kubernetes and with enterprise message buses, that sort of middleware. Again there were initial problems: Kubernetes is not exactly easy, and Magnum has improved over the last year, but initially it was not completely reliable, let's put it this way. Next please.

At present, on Kubernetes, we have deployed streaming platforms such as Kafka, and a functions-as-a-service environment using Knative. This is still prototypical: we're experimenting with it and developing prototypes, and it's not yet available to the average AURIN user. Microservices as well, with Camel K, which is the Kubernetes port of Apache Camel.
And, not me personally, but some of our colleagues are experimenting with Terraform as well, especially for our continuous integration and continuous deployment pipeline. Next please.

Okay, so given all this technology, what do we do with it? The next AURIN release will be entirely on Nectar; we've moved away from VMware, which was getting a bit long in the tooth, and the kit was old. As I mentioned, our own CI/CD pipeline is on Kubernetes as well. We collect vehicular traffic data, a relatively new project, to the tune of six gigabytes a day: basically eight million different readings from vehicular traffic sensors, stored every day. We collect social media posts from Instagram and Twitter, and we may expand that in the near future. We collect data from the heterogeneous data sources that AURIN ingests. And we are in the process of rolling out a prototype that will use JupyterHub for Jupyter notebooks, microservices on Kubernetes, and a Knative functions-as-a-service-based service (sorry for the acronym soup) for collecting data into the portal, for data import. So, next please.

Let me say that Nectar has some great advantages, as you will have inferred by now, we being among its first users; I personally started using it in 2012, when I came to Melbourne. It is free, and let's not discount the freedom of experimentation that comes with that: I don't need to justify everything I do to a manager, as would be the case with a commercial cloud. It has great people behind it: when I open a ticket it is usually solved quickly, and my tickets tend to be relatively complicated, non-trivial ones that usually go straight to the third tier of technical support, but I can see that people are really trying hard to help me, and I'm very grateful for that. And Nectar has grown more robust over the years: as I mentioned, when we started using it in 2011 and 2012, even the basic services like compute and storage were a bit flimsy, and those services have improved quite a lot since.

But, next slide, we are still not quite there. Apart from being oversubscribed, which I think is the tragedy of the commons and so cannot really be helped, sometimes resources are tight: we've had crises with floating IP addresses and with large VMs, and every now and then there's a problem provisioning resources. There have been software upgrades that lasted hours; the network upgrade a year ago lasted one or two days. OpenStack is not all that reliable, and my experience with Magnum has been mixed: again, it has improved, but we still face issues with stuck volumes, with the Octavia load balancers and so on. So it's not completely reliable, and we have to think a bit before putting things into production.

There is also the issue of sensitive data, which might be solved with volume encryption. We've had this problem because we were given sensitive data from the Australian Business Register, and those data came with strict terms of use, so we had to keep strict control of them to stay compliant. We haven't really found a solution for it; maybe volume encryption will do. And there may be other datasets like that: not sensitive as such, but high-value data that data custodians would give us only under really strict terms of use.
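If the operators do expose an encrypted (for example LUKS-backed) volume type, using it from the client side is simple; whether Nectar offers such a type is exactly the open question above, and the type name "encrypted" below is hypothetical:

```python
import openstack

conn = openstack.connect(cloud="nectar")

# See which volume types the cloud actually offers.
for vt in conn.block_storage.types():
    print(vt.name)

# Create a volume with an encrypted type; with a LUKS-backed type,
# encryption and decryption happen transparently when the volume is attached.
volume = conn.block_storage.create_volume(
    size=100,
    name="abr-sensitive-data",
    volume_type="encrypted",   # hypothetical type name
)
```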
And that's about it, I guess. Questions? Next slide, thanks.

Thanks, Luca. We've got time for one or two very quick questions; otherwise they can go into the chat. Not at this stage? All right, thank you. Actually, I'll ask a quick question if I can. Hi Luca, thanks for that, and thanks for saying what works and what doesn't. What's interesting for me is that Nectar balances between supporting research projects for a period of time and supporting these longer-term platforms, as we've heard from EcoCommons, BCCVL and obviously now AURIN as well. What do you think would assist us in supporting those longer-term platforms going forward?

Look, what we'd like most is reliability. Okay, I understand that has to do with OpenStack more than with Nectar itself, but that's what we really need, because we are a long-running project and we will run for a few years more, so reliability of the platform is really paramount for us. And clearly resources as well, in terms of planning their availability: being sure they will stay with us for, say, a few years at least, without change. I'm referring to the MRC and Nectar split, which caused quite a bit of pain and is still causing pain. I understand that's probably beyond your control, so I'm not blaming anyone, but still, I'm here to ask for a better world.

No, it's good to have that feedback, thank you, Luca. And actually that leads us quite well into the next session, Carmel and Luca, because we're just going to go into breakout rooms now, and that's exactly the kind of honest conversation we like to have with you: what is working, what isn't working, and what do you think you're going to need going into the future? I'll put you into breakout rooms in a second, randomly assigned. We've got Andy, Sam, Kieran and Steve from the Nectar cloud support team to facilitate those rooms, and they'll take you through it. In the interest of time, what we might do is about 20 minutes, coming back a little early; you'll get a five-minute notification. I'll do that now, and I'll see you back shortly. You'll see a message saying you've been asked to go to a room. Have I got some people here who haven't gone to rooms yet? Glenn, did you join the meeting late? Oh, yes, I did. There you go. I think everyone's now assigned; hopefully that'll work. Brendan, are you there?