This is going to be the format: we're going to talk a little bit about AARNet, just so that you know who we are and why we're playing in this space; we're going to remind you about the sensitive data challenge, and particularly the specific pieces that AARNet is looking to solve; we'll then give an introduction to our project; there'll be a demo of what we've done so far; and we'll finish up with the roadmap, so that you know when you might actually be able to start using this service. So, moving on to the first bit, which is the AARNet background. The vision that AARNet has is that of a globally networked data sharing ecosystem, an ecosystem that should accelerate knowledge creation and innovation. We're actually a not-for-profit, owned by all of these good universities in Australia and by CSIRO, so we're essentially owned by the higher education and research sector. We're most famous, I think, for our super-fast network that connects all the universities with each other, enabling data transfers, internet access and so on. That network also connects Australia with other national research and education networks around the world, enabling global connectivity. The entities we provide services to aren't restricted to universities, though: we connect schools, TAFEs, cultural organizations, publicly funded research organizations, health bodies and so on. We also provide services that sit on top of that network and add value to it. Zoom is a very obvious one to highlight in this COVID era; I think we're all very familiar with that, as you were saying earlier, Nicola. 
Another service that I want to focus on for a little bit, and there are good reasons why, is CloudStor. CloudStor was developed through our collaboration with CERN, the European Organization for Nuclear Research, one of the world's largest and most respected centres for scientific research. Essentially, CloudStor, and I think a few of you will be quite familiar with it, enables collaborative file storage and sharing, and it has sync-and-share capability, so some people tend to use it as a backup, for example, but it's richer than that. It can share files in an encrypted manner: within CloudStor, as you can see there, there's this FileSender capability. You can put a time box on access to files shared via FileSender, and you can get some stats on the file transfers if that's useful to you. You can also use vouchers, over here, which enable you to give others the ability to share files via the secure mechanism even if they don't have a CloudStor account themselves. You can also do data analysis directly within CloudStor, through our JupyterLab integration, which is delivered by what we call SWAN, our SWAN service, which is, and luckily it's written down because I always garble this, the Service for Web-based ANalysis; again, that's something that came about through our collaboration with CERN. You can also, within your browser window, collaboratively edit files such as Word documents, PowerPoint slides and Excel spreadsheets, in much the same way that you can with Google Docs or Office 365. We also have a range of plugins that enable you to do things with data without the hassle of having to download it to view it. This one that I've got up here is actually a DICOM image, and we have a DICOM viewer embedded in CloudStor that enables you to view DICOM images. 
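The time-boxed sharing described above comes down to an expiry check on every access of a share link. A minimal sketch of that idea, assuming an in-memory token store (the function names and structure here are illustrative, not CloudStor's or FileSender's actual implementation):

```python
from datetime import datetime, timedelta, timezone
import secrets

# Illustrative in-memory store mapping share tokens to expiry times.
_shares = {}

def create_share(days_valid: int) -> str:
    """Create a share-link token that expires after `days_valid` days."""
    token = secrets.token_urlsafe(16)
    _shares[token] = datetime.now(timezone.utc) + timedelta(days=days_valid)
    return token

def is_share_valid(token: str) -> bool:
    """A share is honoured only if it exists and has not yet expired."""
    expiry = _shares.get(token)
    return expiry is not None and datetime.now(timezone.utc) < expiry
```

A real service would persist the tokens server-side and record each access for the transfer statistics mentioned above; the sketch only shows the time-box check itself.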
This one is actually a COVID patient, the thorax of a COVID patient, and we also have a protein viewer, a PDF viewer, an audio file viewer and so on, all directly within CloudStor, so you can start to see that it's very much a research-focused application. You might be wondering why I've spent some time talking to you about CloudStor, and it is relevant, so bear with me, but I just want to touch on our strategic direction at AARNet. These are our strategic priorities, and the ones that I want to call out are the pillar around investing in health and medical research infrastructure, the one around establishing a national collaborative research platform, and the last one, developing cybersecurity capability, services and infrastructure. As you can imagine, sensitive data sits across all three of these strategic priorities at AARNet. So when we were approached by an increasing number of current CloudStor users and enterprise university users asking us if they could put sensitive data on our CloudStor service, telling us that they couldn't collaborate on and manage their sensitive data, we decided to look into this and see if there was anything that could be done about it. 
I'll now move on to the sensitive data challenge, and I appreciate that this forum is certainly not unaware of this, but first of all, just so we're on the same page, this is the way that we define sensitive data; I actually think this definition came about from those involved in the formation of this community, when it first came up at that eResearch Australasia conference many years ago. Sensitive data is data relating to people or to animal or plant species; it's data generated or used under a restrictive commercial or government research funding agreement; and it's any data likely to have a significant negative public or personal impact if it's released, lost or modified. So it's a pretty broad definition, and as you all know in this community, there's a gradient of sensitivity: it could range from fully open data at one end to fully restricted data at the other, where we've got human-identifiable data, and you could even add defence research data at the far right of that spectrum. One challenge is that there are different perceptions along this gradient: someone may believe that the location of an endangered species' eggs, for example, is far more sensitive than de-identified human data, and some could view their research data as more sensitive than that of a commercial entity, and so on. When you combine this with the aspects around ethics and privacy, it's really not a black-and-white space, and it makes for a really challenging one. But, as I said, I think I'm preaching to the choir; I just wanted to highlight the specific problem space that AARNet is looking to address. Because we're an infrastructure company, addressing the challenges around ethics is definitely not one for us to start weighing into, but we are looking at this piece here: researchers and their institutions are struggling to find or provide appropriate services that enable the storage, management, analysis and controlled collaboration of sensitive data. 
It's a broad statement, and different institutions are dealing with this differently, but in many cases universities are struggling to provide a service, be it because of cost, the requirement to support it, or legislative requirements around it. That leaves researchers using insecure methods, such as email or sharing hard drives, and it also leaves institutions pretty blind about where their researchers' sensitive data assets lie and what the risks around those are. This problem leaves the potential for massive reputational damage should something actually go wrong, for example if you leave your laptop on a train. So what is AARNet doing about it? Well, we are using CloudStor as the base platform from which to build this, given all those popular research-focused capabilities that I spoke about at the beginning; since we already have those in existence, we're going to use them and build them out into a sensitive data capability, designed to support the sensitive data life cycle for member universities, health and medical institutions, and other sensitive data research areas such as ecology and so on. How are we doing it? In a phased project approach. Phase one began last year, when we engaged external consultants on a review of our current practices, particularly in the information management space, and we also carried out a massive consultation effort across the sector, speaking to a huge number of existing AARNet service users and other organizations that deal with sensitive data. We've also subsequently had a separate medical research institute engagement that's been informative in many areas. During these engagements we gathered an understanding of the pain points people were struggling with, and that gave us a really good view of the requirements for a solution. We are now in phase two of the project, which is seeing us mature our information and information security management practices in line with ISO 27001 and 27002. ISO is the International Organization for Standardization, which publishes a lot of standards, I know, but these are standards that are widely recognized as being helpful when dealing with data of a sensitive nature, so we are working towards that. The proof of concept is the other piece of phase two, the one that personally I find a lot more interesting, and it's the one we're going to focus on today, so I'll skip some bits. This is what we're aiming to build in the proof of concept. If you look at step one, we have an admin that creates an authorized user in the service, and that researcher can then log on to the platform using multi-factor authentication. This gives them access to the platform, which provides data analysis tools, things like SWAN that I mentioned before, and access to a storage media option. That's something new that we'd be bringing in here: you could have something on disk for data that you need regular access and performance around, and you could shunt it onto tape for cold storage or archive. All of the data on the platform is encrypted, and the institution also has visibility of this, which gives them that view of their sensitive data holdings; it's important to note that that doesn't necessarily mean they have visibility of sensitive content. Now, when a researcher indicates a requirement to share or collaborate on the data, they can use the platform to initiate an approval process that happens at the institution. That request is assessed by an individual, or even a series of individuals if need be, appointed to this authorizer role, and that authorizer can view audit trails relating to the data and see who the requested collaborators are, and then
make a decision on whether to approve or reject that request. If it's approved, then that new researcher is given access to the platform, which they in turn access via multi-factor authentication, and so on. So that is what we are aiming to provide the sector with, and we're doing it through a proof of concept project, as I mentioned, and we are blessed to be guided in this endeavor by our PoC participants. These participants are an incredibly knowledgeable and expert group of people from the institutions shown here, and they encompass different roles within those institutions: some are IT managers, others are clinical trial coordinators or researchers, and I think we've got some data curators and research data management specialists in there too. All of them have been incredibly generous with their time, helping us make sure that we have valuable insight into what is needed for this service, and that it's fit for purpose for all of those roles within an organization. I'll hand over now to my colleague Rob; he is our sensitive data software developer, and he's going to take you through a demo of what we've done so far. So Rob, do you need me to stop sharing now? Yeah, thanks Frank, if you could. I'll actually share a video that we've been preparing for the eResearch conference next week; it explains and runs through a lot of the features of the system, so I'll share and play that now. Hi, my name is Rob, and I'm going to show you the sensitive data... Rob, I'll just interrupt you, because we can't actually see your screen yet; we can hear the sound but not your screen. I'll reshare it. So, while Rob is doing those tech shenanigans: the eResearch Australasia conference, I'm pretty sure you're all aware, is next week, and in this COVID time we all had to provide our presentations pre-recorded, so Rob put some effort into doing that, and we figured it would be useful to share it with you 
now. So, we still don't have a visual, but we are maybe getting there... we can see stuff now. Yes! Success. All right, cool, here we go. Hi, my name is Rob, and I'm going to show you the sensitive data cloud storage service that we're building at AARNet. We are developing this proof of concept in the cloud services team at AARNet using an agile software process. We're collaborating with a variety of external stakeholders from different institutions, who have been great at giving us feedback to help guide and refine the product so that it will meet their needs. Security is an important part of managing sensitive data, so all data will be stored on dedicated hardware within Australia, with all data encrypted and backed up. All activities, including user logins and all file activity, will be logged and able to be exported from the sensitive data service. This service is designed to run within a web browser; this is a version of Chrome running full screen, and as you can see, there's only one action we can take here, so let's log in and I'll show you what happens next. On this screen you have to choose your institution from the list; for the purpose of this demo I'll choose the CloudStor dev IdP. Here I have to type in my account username and password. Because this is the first time I've logged in on this account, I'll have to set up my account for multi-factor authentication with a device, so I'll click on Start Setup, and you can see it's asking me to choose either a mobile phone or a tablet to bind to. I'll choose mobile phone, and on this screen you can enter a phone number anywhere in the world; it will do a validation via SMS, and then that will be configured to either send a push notification or send an SMS PIN code that you'll have to validate every time you log into the system. After you've completed the initial setup, you can continue the login and either send a push to your device or send a PIN passcode. I'll choose to send a push, and you should see this message pop up on the device; when you choose Approve, it will continue. This is the home screen on the sensitive data service; as you can see, there's not much you can do yet, you need to create a new project. We'll talk more about roles in the system shortly, but only certain people can create new projects; in this case the user I'm logged in with can, so let's go ahead and make one now. There may be a lot more options to customize your project in the future, but this is all we have at the moment. As you can see, the project appears as a folder on your screen. If we click on the project, we go into that folder, and from here we can start uploading and adding files that we want to share with people or collaborate on. There are a few different ways to upload files into the project: if you click on the plus ('New'), you can choose to upload a file or a folder, create a new folder, or edit a new file directly. I'm going to drag my files into this area, which will upload them automatically. Now you can see the files have been uploaded, and depending on what type of file they are, there'll be either an icon or a preview on the left-hand side of the file name; over here on the right you can see some actions on the file: rename, download, or delete to remove the file from the service. Now, back at the home screen, if we want to look at the properties of the example project we've made, you click there, and we can see we can either add users or view collaboration requests. I'll come back to that later; for now I just want to add another user to the project. You can type in the user's name or email address, and here we've got roles, so we can choose from a couple of different roles for this person. We've got the authorizer, who can add users directly or approve or reject collaboration requests (I'll talk a bit more about that later); then we've got collaborators, who can access or change the project files; and lastly we've got
people that can only view the files, which includes accessing, downloading and previewing the files. I'll make test user five a collaborator and add them to the project. Jumping ahead and looking at this as test user five, you can see I've got access to the example project. If I click on the project properties, you can see I don't have access to add users directly, but I can create a collaboration request, so I'll go and do that now. Let's say I want to add test user four to the project, and I can choose what role they can have: they can't be an authorizer, because I'm not an authorizer myself; they can be a collaborator, which is the same as me; or they can be a viewer, so I'll choose that one. Optionally, you can enter a reason why they should get access, and you can specify an authorizer who you want to approve this. OK, now send the request, and I'll go back to the admin five user's screen and show you what they see. Back at the admin five user's screen, if I click on the project properties and go to view requests, you can see the request we created for test user four to become a viewer in this project; it was requested by test user five and directed to me, who is admin five, and you can see there is a reason with a little 'i' icon: you can click this and see a pop-up showing the reason they've requested it. From here you can do two things: you can go across to the right-hand side and choose Approve, which will grant them access, or Reject, which will get rid of the collaboration request and not approve them. I will approve them, because I want to demo what that looks like. OK, that's now been done, and I'll jump across to that new user and show you what they see. I've logged in as test user four, who now has access to the project; if I click on the project properties, you can see I am a viewer, and if I click on the project folder, you can see I have access to these files, but under the actions I only have the ability to download the file, not to rename or delete it, because I'm not a collaborator or an authorizer. So I can download the file. In the future we will have plugins to support common files in this UI, to allow collaboration and editing in real time. And that wraps up the presentation for the sensitive data service at AARNet; thanks for listening. All right, that's the end of it. I covered a lot of content; do I have any questions on anything there? There was a heck of a lot of chat, actually, that I was desperately trying to keep on top of, so there might be some that I can call out that I didn't get to. If you want to stop sharing your screen, Rob, we might have a bit more, what do we call it, space to see stuff. So, FIDO2: I believe that was a question from Lance; that is some kind of authentication standard, and I am not the one to ask about it. Is Gavin on the line? He is. Gavin, is that a question for you? So the question was, let me get this right for you: is there support for the FIDO2 industry standard? SMS two-factor is known to have some challenges. Actually, I'm unfamiliar with FIDO2... yeah, I'm unfamiliar with that one as well, so can we note that one and respond to it offline? Yep, sure we can. And what else have I missed? Mike, I appreciate your comment about how it'd be lovely to be able to create a user in your own systems and have that automatically carry through into the sensitive data service; we have heard that before, and it's something we've got on our backlog. Other questions: Patrick, on multi-factor, if we were to rely on multi-factor from a user's home institution, I guess that could be desirable from the researcher's perspective, but it does add a vulnerability to the system, in that it is relying on the security approaches of another institution, so we're perhaps not so keen on that one. Yeah, it
was considered, yeah. Does the upload process place files into a holding space that the project owner can approve before they become available in the project? Not at this time. The approval workflows we have are around individuals, so once that principal authorizer or project owner has allowed a researcher into that project space, they can do whatever they want, at this time. We have had that brought up a few times, so it is something we'll definitely look at; I think that's quite possible, but we'd have that picture for upload, not for download. You have to consider, this being a sensitive data service, that you want to be careful about who has access to the system without going through that 2FA process, because that is an audit trail as well; but uploading stuff into it is considered less significant than downloading, though you could debate that too. Thanks, Rob. We've got another question from Mike here, which Gavin might be the one to answer. Mike has asked about CloudStor's reporting: can we see versioning information across all storage contents, rather than having to click on the properties of every single file? That's current CloudStor; is the AARNet sensitive data service going to improve on that? Is that like a summary report on versions? Anything at all, really, because when we've got hundreds of thousands of files and we don't know if we've got multiple versions of files, the only way we can find out is to click on every single one of them, which would basically take years. Rob, have you seen any variation on that in the new version of ownCloud? Because it's an ownCloud limitation, obviously, and you've been working quite closely with the new version. No, not yet, so I'm curious to know what the intent is behind providing that information: what are you planning to use it for? This is something that has come up when we've been working with Mediaflux as well: so that we can identify when uploads have been made to the same document or files by different people, and know that we can retrieve older ones. We've basically got no view into the system that lets us know whether the right stuff is there. And even if we could get a report, or programmatically retrieve all files before a certain time, because we might know that everything after a certain date was corrupt or something like that; there are a lot of scenarios where this comes up, in recovery and repository situations. I mean, it does have a process for recovering data from the past, if that's what you're talking about? No, this is more at the version level. The closest thing we have is full audit logging: any time a file is changed in the system, and any time anyone logs in or logs out, there will be an audit trail that you'll have access to if you're an authorizer, so you can definitely see that information; but if you want to do a kind of bulk restore on files or on the project, that's not something that's currently available. But also to note, particularly with this service we're making, we're working towards making that activity log more accessible, because currently only the individuals see their own activity logs; we're trying to make it so that an authorizer can see the audit log for a project, local admins can see audit logs, and they're accessible in a download or API format, which they currently aren't in CloudStor. We know there's also, in terms of access logs, if you're using the Rocket uploader, a kind of latency issue, which means that the logs we see already aren't terribly helpful in terms of timing, because the upload time and the time a file appears in CloudStor can be hours or days apart. So actually having some other tabled information in reports that we
can use, rather than looking at a single linear activity log, would be really helpful. Well, let's note that one as a feature request. Sure. And Mike, you had another question on our list: can we get IP location information to prevent access from outside Australia or from known public VPNs? Is that something... does anybody in the AARNet team have an answer to that one? I'm definitely not the tech guru. Yeah, that's for where we have clinical information that is not to be downloaded or viewed by anyone outside Australia. Yes, I guess in some ways, Mike, the way we're thinking users of the system would respond to that is simply to not give people access to it; but are you thinking of the situation where you've got an Australian researcher who happens to go off on holiday overseas, god forbid when that might be, and then tries to access their data from overseas? Is that the situation? Well, that's a very real thing: people go to conferences, on sabbaticals; we have external collaborators where we aren't aware of their movements at all. What we expect with our governance for these things is that the chief investigator should be on top of this and reporting it back to our governance committee so that we can make appropriate changes; our other systems allow us to do basic IP filtering and so on to stop that kind of access, but CloudStor sits outside our organization's ability to view, gate and control all of that kind of stuff. 
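The kind of restriction Mike describes is often implemented as a deny-by-default check of the client's source address against approved network ranges before any request is served. A minimal sketch of that idea, where the CIDR ranges and function name are purely illustrative (a production setup would typically combine an allowlist like this with a GeoIP database and known-VPN lists):

```python
import ipaddress

# Illustrative allowlist of approved CIDR ranges (e.g. an institution's
# Australian campus networks). These are IETF documentation ranges, not
# real AARNet or institutional networks.
ALLOWED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def access_permitted(client_ip: str) -> bool:
    """Deny by default: permit only clients inside an approved range."""
    try:
        addr = ipaddress.ip_address(client_ip)
    except ValueError:
        return False  # malformed address: refuse rather than guess
    return any(addr in net for net in ALLOWED_RANGES)
```

The deny-by-default shape matters for clinical data: an unknown or unparseable source address is refused rather than allowed through.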
Yes, understood, and I think that's another one we'll add to the backlog. I'll move on to a question from Patrick, which has just shifted out of my view. Patrick asks where the source of collaborators comes from, and I guess the answer is anywhere and anything, and that links to a question from Martin: can international investigators be added as collaborators? Absolutely, we need to be able to enable that; we're getting that feedback from our PoC participants quite regularly. The way we're considering it at the moment, though, is to leave it to the institution to create accounts for those individuals. The risk of us doing it for them, which is possible, is that we don't necessarily know when a collaborator should no longer be able to access the space, whereas the institution will know; with us being a step removed, it increases the chances of inappropriate access remaining available when the university wants to shut it off immediately. So that is the answer to that one. Am I missing anything else? Yes, Kristen: what are the platform's predefined roles, and how are the PoC institutions looking to implement them? In terms of the platform's predefined roles, we have a tenant admin, which will be a sort of IT-focused role at the institution, able to create the project spaces; we then have a principal authorizer, which in many instances will be the PI or the professor who runs a research group; we have researchers that sit within that project space, and collaborators that join the project space; external collaborators that may have access to upload into the project space, but perhaps without accessing the project space altogether; and then there's an AARNet IT role in the mix as well. So those are those sorts of roles, and I guess the question about how the PoC institutions are looking to implement them, Kristen, is perhaps not best aimed at me; we do have a few of them on the line, but I certainly don't want to put them in the spotlight about what they're thinking in this space. And have HRECs... help me out, Kristen, my acronym soup is failing me right now... Human Research Ethics Committees, of course; not directly, so via our PoC participants would be the answer to that one. OK, Stephen: do you seed the initial service for an institution with one admin that can create projects and assign owners to the projects? I think I answered that one: yes, we have a tenant admin that creates the project space and assigns the principal authorizer, or project owner, to it. Patrick: the service relies on authentication at a user's home institution, so how is that different to relying on their MFA? Good point, and I will ask my far more savvy colleagues to answer that one if possible. That's already been answered earlier in the chat; I hadn't realised. OK, good, because I was floundering on that one. And then, Mike, you've written: "we are writing governance docs that state the CI must sponsor external collaborators without AAF access to get internal..." sorry, I hadn't finished reading the sentence; I was getting ahead of myself: "...to get internal institutional logins". That is exactly the mechanism we anticipate enabling for external collaborators to have access. Stephen: can you have a viewer with no download capability? Yes, I believe we can. That's an interesting one; we've brought this up within AARNet, and I guess the challenge there is that if you're viewing it in a browser, essentially the browser is downloading it anyway, and can then persist the data. We could disable the download
option, definitely; that is something to consider, but if you're allowing someone to read a file, they could always screenshot it or do things like that, so it depends on exactly what you want to prevent. We can look at a block-download feature. All right, I think we made it to the end of the chat, so I will continue in the interest of time. Am I sharing? Oh yeah, go. Just in response to Mike's point about having to get institutional logins: is the AAF Virtual Home still OK? We're basically pushing our systems, say CloudStor or REDCap, through AAF, so everyone who has an institutional account is going through AAF anyway; so if you've got someone at an external institution who is not a subscriber to AAF, then we're basically saying you need to sponsor them for, in our case, a Macquarie login account, and then we'll have a regular audit process where we check the bona fides of external people on a regular basis. The same applies if, say, I set up an external collaborator using the QCIF AAF Virtual Home: someone vets that that's a valid person with the correct email address, and we can create them with an AAF identity; it's just that they don't have, say, a UQ login or a Griffith login, they've just got an AAF one that we create for them. I'm not familiar with that capacity, but the reality is that a lot of our external collaborators may need to access multiple systems or storage within the university, so we may as well just bring them in so they can access everything under one umbrella account, because otherwise, if we've got to manage one person under multiple external identities, it gets a bit tricky. Yeah. So, Stephen and Mike, I might ask you to take that one offline, sorry; we've got a few minutes left, and there are still a couple of bits that I was hoping to share 
But yes, I appreciate that the authentication piece is one that is forever interesting, and there are lots of permutations that need to be sorted out. Just to get us back on track and to summarise the service: the demo showed some aspects but not all, so these are the key service features that we anticipate in the final service. We've got controlled and auditable collaboration, so the audit logs will show what's been done with what, who's done it and when, who approved it, and so on. The next four on the list, collaborative editing, data analysis, plugins and secure file transfers, I touched on earlier when I talked about our current capability, so I won't go into those now. We also have data management capabilities, which I didn't touch on earlier, but our collections plugin enables people to package up data alongside a metadata wrapper, if you will, and we're looking to strengthen this and even automate some metadata extraction. The data on the platform will obviously be backed up, and I also mentioned that there'll be an ability to shunt data off onto cold storage when the time is right. We all know on this call that there are often retention requirements around data, for example the 25 years for clinical trial data, and this slower storage option, which comes at a much cheaper cost, might be useful there.

The aim is for the service to be available 24 hours a day, seven days a week, 365 days a year, with the exception, she adds, of maintenance outages, but those will be notified through our status dashboard, which will tell you about any hazards, alerts and maintenance notifications. This being a sensitive data service, I also thought to highlight its security features specifically. Users will only be able to use the service through multi-factor authentication, so that will give confidence that the users
are who they say they are. I've already talked about collaboration workflows, audit logs and even ISO certification, so the next piece I'll focus on is dedicated hardware. All of the infrastructure will be on its own private network, isolated from the rest of Arnet and the internet, and it will sit in high-security Tier 3 data centres that are either ISO 27001 or ASIO T4 certified (and please don't ask me what ASIO T4 certification is). All of the data will be stored on Arnet owned and operated hardware hosted within Australia, so we do not host Australian research data or metadata outside of Australia. We touched briefly on encryption earlier because I got it confused, but all the disk data will be encrypted at rest, with each disk encrypted with its own unique key, and data is transmitted over secured channels, so encrypted and authenticated connections. The service will be operated across, at this stage anyway, two geographically distributed sites within Australia, with the potential to expand this as the service grows, and those sites will be connected to each other by a dedicated 100 gigabit per second or faster private network on the Arnet 4 network, which enables the data to be replicated across the sites. So those are the security features. Gosh, I'm seeing a lot of chat and I cannot keep up, sorry; I'll just finish and we can touch on it at the end, hopefully.

This is the sensitive data roadmap and where we're at now. We are soon moving into our pilot stage, which is going to see a limited number of institutions take on the service and actually adopt it for their sensitive data purposes, so if you have any interest in that at all please contact me; you can email me at that address. Following a successful pilot, we'll be moving to make this available as a service to all institutions in Australia that have sensitive data in their care, sometime in 2021. And I definitely do not want to miss thanking those that are making this
possible. On the left, all alphabetical you'll note, we have the list of POP participants, the experts from our collaborating institutions, and on the right we have our Arnet gurus who are helping to develop this; Rob's been leading that charge from the development perspective. I won't go through the names, but without them we wouldn't be where we are today. I think, yes, that's the last one. So are there any questions, and do I need to go through the chat? Can somebody help me catch up on the chat, perhaps?

There's a question about the cost model from Mike Williams. "In development" would be the answer to that, Mike; we're working on it. But I will add that Arnet's not for profit, so essentially whatever it costs us is pretty much what you receive; we're not out to gouge the higher education community. Did I miss anything else, Rob? Yeah, Steven Bird asked about other data analysis options, so plugins for Nectar or HPC, or connections to... A good question, and Gavin's off mute. Sorry, momentarily distracted; what was the question? Let me answer it then: Steven was asking about other data analysis options such as Nectar and HPC, and one thing I can mention is that, well, is that confusing things? We have a CloudStore node at NCI, but our sensitive data service is going to be quite a separate beast. Yes, currently they would be separate, yes. We are also looking at potential linkages to Pawsey, and that might be an easier one to tackle at this time, but Gavin, I might leave that to you. Yeah, and also we're looking at what we can do with CloudStore first, because then obviously we have to seriously consider the security model for transposing that to sensitive data for CloudStore. As Frankie said, we're still working on the interconnects for NCI and Pawsey, and we're also very interested in a model of being able to deploy SWAN itself out
into either the Nectar cloud or into HPC. Apart from that, yep. You can mount a CloudStore collection onto an active VM; I'm presuming that script wouldn't connect to this service at this point? Not at this point, no, because effectively it's equivalent to being able to download into an unsecured environment. Sure.

While you're off mute, Gavin, there was a question from Steven Bird on limits on data sizes; I'm not sure if there's a specific limit we impose. There is not a specific limit that we impose; it'll come down to the pricing model and what the institutions are able to purchase. Sorry, Steven, do you mean data sizes as in uploading and downloading, or allocations of project spaces? Actually both, Frankie: if I've got a four gig file, am I hitting HTTPS limits on the upload, or do I need to bring it in via a different mechanism? But also, if I've got hundreds of terabytes of sensitive data, is that an issue? If you've purchased a project space worth hundreds of terabytes, then absolutely. Mike, what are our HTTPS limits for file size at the moment? Oh, you could easily do a gig. I'm just thinking of some of the medical imaging and things like that. Well, that's browser-based drag and drop, so obviously you'll be able to use sync clients and tools like our client for upload as well, to overcome those limitations. Okay, thanks.

And Mike, I've spotted your question about voice transcription services and the interest that you've got in that. No, that will not be offered by the Arnet Zoom service, and it's very difficult for Arnet to keep up with the Zoom offering, so I guess we're encouraging those who want to access those services to look at moving to the Zoom cloud as part of their Arnet subscription. I'd refer you to the Zoom
team to discuss that one yeah we can't yeah it's just that all this stuff's overseas so I understand that um it's uh if it's if it's uh yeah maybe maybe we'll add that one to the feature list um but it's sort of opening the gates on on a whole bunch of zoom features that is um very hard for a small team like us to keep up with yes so Gavin you mentioned you know the sort of reliance on zoom for that now Mike you might be somewhat comforted to know that as you've seen we have a strategic priority to support the health and medical sector whether that's through a zoom service on voice transcription or something else is you know open to to consideration I guess you know as we sort of start to strengthen our infrastructure offerings in that health and medical space maybe that is one if it's uh you know one that is bugging and and and challenging and uh inhibitory to everybody maybe that's one that could be one of the things that we pick off the um to-do list so um I will wrap up there I think uh Katie yes I think uh this yes I can make the slides available to Nicola who will share them with the rest of the uh community of practice I think Nicola best hand over to you with just a couple minutes remaining uh awesome thank you um thank you so much that was what you can see from the chat that was very useful so I just have a couple of details to share so um for our next meeting we'll tentatively be on Wednesday the 11th of November it'll be one of our discipline specific topics probably health and medical but as soon as I have more details nailed down I'll both share that in the mailing list and also update the website excitingly we also have a birds of a feather session coming up next week at e-research australia so if you'll be attending then um please join us there on Thursday and if you're not already a member of the mailing list then please do join up there's a form on our google site and apart from that I think we are done so I will be um putting the slides and also 
documenting the questions and answers in the communal document. Apart from that, thank you very much, everyone, for your time, and I'll see you next time. Thanks, Nicola; see you.