G'day viewers, my name is Orin Thomas. I'm a Principal Hybrid Cloud Advocate at Microsoft, and joining me today is... Hi everyone, I'm Sonia Cuff. I do cloud stuff too. Today we are going to talk about, actually that's not what we're going to talk about, is it? That slide says Hybrid Identity. We're actually talking about file services. While you find the right slides, welcome to Learn Live. This is your chance to follow along on a Learn module on Microsoft Learn with us as your guides. If you want that blend of following along in bite-sized pieces at your own pace, but you really like the comfort of being able to ask an instructor questions, think of us as your instructors. We will be going along, explaining some of the tricky parts, and answering any questions you have in the chat. But as Orin said, we are going to be talking about Azure Files today. And we have Christiane Vergara. I hope I pronounced that right, Christiane; I didn't actually check with you after last week whether or not I completely stuffed that up. And Christiane's going to be in the chat and throwing any curly questions that you've got directly at us. Okay, so this particular module, talking about hybrid file server infrastructure, about your files being on-prem and in the cloud and everywhere in between, actually sits in the AZ-800 exam, in the Manage Storage and File Services functional group of that exam. So if you're actually interested in this exam, guess what? We're going to be announcing the beta for this, probably sometime next week. So keep your eyes on the Microsoft Learn blog, because you will find out how you can actually do the beta exam for the AZ-800 and the AZ-801 exams. And that'll allow you to do it before everybody else and have a go at all of these questions, and maybe get this certification before anybody else does. 
Okay, so what we're going to do now is we are going to get straight into our module, Implement a Hybrid File Server Infrastructure. Okay, Sonia, how would you introduce this topic? The thing I like about this particular topic, and the fact that it's in those exams, is that it's a topic that is very familiar to most on-premises IT pros. If you think about the traditional workloads that a Windows server might do, it's serving Active Directory so people can log in. It might have been serving email if you were running Exchange on it. And the next most obvious workload, before you get into applications, is being a file storage system and a file share. So all of our users have a mapped network drive and they're saving files into a place where their team can share them. This is like file storage 101. We've been doing it forever as IT pros for our organizations. So this particular module is talking about how you can migrate that capability from your on-premises file server to Azure, but also maintain that functionality for our end users. So we still want people to be able to access those files. And we're not talking about turning off that file server in some instances. So we're having this hybrid replication between our on-prem and our cloud environments. Orin, you like that point, don't you? This is the real essence of hybrid here. Yeah, it is. Look, this is the gateway drug to Azure. I've said this is my absolute favorite hybrid technology. And it's the one I like to talk about when I'm talking about what the value of the cloud is. File servers are the most common server workload in existence. If you look at everything that servers do all around the world, even though you've got all of these web servers, it turns out that there's a lot more file servers than there are web servers. And one of my first jobs as an IT pro in the 1990s was dealing with file services. These were Banyan file servers, which probably dates me. 
And the fact that I've just said Banyan should give you an idea about my age. Don't let the freckles fool you: very old man. And my job was mucking out a file server. Now, what's mucking out a file server? Well, what you know about file servers is that people go and store files on file servers. And people create files. And people are creating files all the time. But the other part of that story is that once a person creates a file and they work on it and they do things, at a certain point in time, they stop using that file. That file sits there and it doesn't do anything. It just sits up there on storage. Now, you end up with a great collection of these files that are no longer being used. And there was some research done about 15 or 20 years ago that found that if a file isn't touched for 90 days, the probability that it'll ever be opened again is vanishingly small. So what would happen is, especially in the old days when we were talking about megabytes instead of the gigabytes we've got on file servers now, you'd end up with file shares getting completely clogged with files. And your job as the file server administrator was to muck out the file server, where you'd delete all of the files that you could so that there was going to be more space, so that for the next month or two, the people in the department you worked in could go and write new files to the file share. But the challenge with that was that sometimes you'd go and remove or delete a file that someone would actually need later on. And all of us who've been IT pros for a while have had someone come to us and say, Sonia, there used to be an Excel spreadsheet in this folder that had the company accounts from 2013, and I can't find it. It's always an Excel spreadsheet, right? Who runs the world? Excel. Excel spreadsheets run the world, they literally do. 
And I would like to know, if you're joining us live and you're in the chat, have you ever needed to go through your files to reduce the number of files that are on your server, because you've got that lovely alert to say that you're about to run out of disk space? I think mucking out is a very Australian term, but it's definitely a very universal problem that we see across the globe. So I don't think you can escape being an IT pro without at some stage having to do a cleanup, because on-prem disk is a finite resource. And that alert was usually the way I would find out that a file server needed mucking out. It might be an Australian term. Basically, if you've got animals in a pen, they tend to fill it up with a certain type of biological matter, how shall I put it. Yeah, and at some point you've got to pick up a shovel and deal with it. And that's sort of the metaphor I'm using here. Okay, so let's talk about what technologies assist you with this in this module. The learning objectives for this module are that we're going to describe Azure file services, configure Azure file services, and configure connectivity to Azure file services. We'll describe Azure File Sync, implement Azure File Sync, and deploy Azure File Sync. We'll talk about cloud tiering, and cloud tiering is the real hint about the automatic shoveling of stuff out of the animal pen. And then we'll also talk about one of our, if I'm ever presenting on Windows Server, I say, who here loves DFS, Distributed File System? And no one will ever put up their hand, because people tolerate DFS, but I don't think anybody really loves it. Sonia, do you love DFS? Look, not particularly. I think that we love the capability that it gives our business users. But yeah, it's a bit of a nightmare to manage, let's be honest. 
Yes, it's very much what we might call a legacy technology, which means it was really, really good when it was introduced and it offered a whole lot of promise, but it kind of comes with its own challenges; let's just say that and move on. Okay, Sonia, would you like to start describing how Azure file services work? Yeah, absolutely. So Azure file services are a component of Azure Storage. With Azure Storage, we've got four different types of storage services. And we're not talking about the difference between a traditional hard disk or a solid state drive here. We're talking about things like blob storage, table storage, queues, and then files. And so the storage service that you use is going to depend on what type of data you're wanting to put in the cloud. So with blobs, we've got unstructured files. We don't need to have a particular way that the file is written or a particular way that the data is structured. We can basically chuck anything that we like into a blob, including things like the disks that run our virtual machines. With table storage, we've got non-relational, semi-structured content, but we've still got it in rows of data. So we've got information in here in a table structure that our developers might use as the backend data store for some of their applications. With queues, this is where we've got temporary storage. And I don't know about you, but when I think of a queue, I always go back to email, because I remember troubleshooting email servers and going in and seeing that in the queue for processing, there was a significant backlog of emails that were sitting in the queues. So they hadn't yet been either sent off to the recipient server or put into the mailboxes of the people who were receiving them. And that's what queue storage does. A queue is this sort of temporary place where you want to hold something that's going to be processed and then it's going to move on. And then finally, we do have a specific type of storage which is optimized for files. 
So we have a file storage service for unstructured data. It could be an MP4 file. It could be a spreadsheet. It doesn't matter what it is, but it is a file format. But the interesting thing about this is the locking mechanism. If you ever came across an issue where you would open a file on an on-premises shared drive and get an error that it's locked because somebody else is using it, our file storage means that we can do file sharing, and it can manage this multi-user access into these files slightly better. Okay, and so when we're talking about what Azure Files is, it's an Azure service that provides the functionality of a file share. So it's sort of like a file share in the cloud, or a mapped network drive, except the endpoint of the mapped network drive just happens to be in your cloud provider rather than over a VPN connection to your on-premises network, or however you were doing that in the past. Now, the advantage of Azure file shares is, traditionally, if you're setting up, let's say, a remote file share that people are connecting to over the VPN, you still have to go and deploy the file server. And that means you need to manage the file server and the operating system of the file server, and make sure that file server's patched, and you need to worry about all the storage and the disks and the redundancy and the backup and the blah, blah, blah, blah, blah. I mean, the backup you still have to worry about, but all of that stuff. Well, with Azure Files, you've got a serverless file share, where you basically go and provision it, and you've got the way of mapping it, and then it turns up as a network drive, and then you're not worried about how much storage is there, because it's the cloud, and we've got all of these really, really big data centers, and we've got lots and lots and lots of disks in them, and that means that functionally you've got unlimited storage. 
Functionally. Don't go and test this by trying to generate copies of every library in the world and dumping them onto a share to go, huh, they said unlimited storage, but I'll show you what infinite means. So you've got data redundancy. That is, because you are not worrying about managing the file servers, we're worrying about managing the storage. What it means is that the data is stored in a redundant manner. That is, if the actual physical server that's hosting one copy of the data gets blown up by an alien death ray, there are other servers in that data center that will take over and seamlessly also host copies of that data. It provides encryption; the data is encrypted, so that if some of the aliens with the death ray wandered into the data center and actually started pulling out hard drives and trying to plug them into their laptops, they wouldn't be able to actually mount and read the data, because everything's encrypted. Access from anywhere means that you can connect to Azure file shares from anywhere in the world. Now, you might be sitting there going, well, that doesn't sound like it's the most secure thing in the world, Orin. Well, you're protecting the access to that file share. You're just not, like, opening it up and going, hey, I'm just gonna go browse to \\azure\files\orin. Instead, I'd actually have to go and authenticate before it would allow me to make that connection, and that connection itself would be encrypted. It uses the standard protocols. So you can access it using SMB, you can access it using NFS, and you can access it using HTTPS. You can integrate it with your existing environment, which means that you can use Active Directory permissions, Active Directory Domain Services permissions, to actually restrict access to files. And it supports previous versions. 
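Those UNC paths aren't hypothetical in Azure, by the way: the public SMB endpoint for an Azure file share always follows the fixed pattern \\<account>.file.core.windows.net\<share>. Here's a small sketch of assembling that path and the shape of the Windows mount command. The account name "contoso" and share "finance" are made-up examples, the helper names are ours, and in practice you'd use the connection script the Azure portal generates for you (and prefer identity-based auth over embedding an account key):

```python
# Build the UNC path for an Azure file share's public SMB endpoint.
# The endpoint format is fixed; the account and share names are examples.
def file_share_unc(account: str, share: str) -> str:
    return rf"\\{account}.file.core.windows.net\{share}"

def net_use_command(drive_letter: str, account: str, share: str, key: str) -> str:
    # Mirrors the shape of the Windows `net use` mount command; putting the
    # storage account key in a string like this is for illustration only.
    return (
        f"net use {drive_letter}: {file_share_unc(account, share)} "
        f"/user:Azure\\{account} {key}"
    )
```

The point of the fixed endpoint format is exactly the discoverability argument made later: you can read a path like this out loud and someone instantly knows where the files live.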
Previous versions is where you set something up on a file share, and when previous versions was first implemented, it actually saved administrators a lot of time. Why, Sonia? Because when people had issues with their files, or managed to delete stuff by mistake, they could just right-click on the file and go back to a previous version of the document before it had their screw-up in it. And it meant they didn't have to call the help desk and get the file restored from a backup. Yep, so previous versions, as Sonia said, basically just created time snapshots of files. So if someone screwed something up, or if there'd been file corruption, you could just right-click and go, show me the previous versions, and you might see 10 previous versions of the files where the snapshot had been taken every three hours. You might lose three hours of work, but losing three hours of work is a lot better than losing 300. And then you can optionally integrate them with on-prem file services, and we'll be talking a bit more about this later on. There's a couple of points in here, because sometimes I have conversations with people about when would I use Azure Files versus when would I use SharePoint for storing my files, or OneDrive for Business, right? Because OneDrive for Business is quite commonly used for saving people's files as a way of saving them to the cloud. But I think, aside from the fact that it feels more like that mapped network drive, the G drive that I've always had, or the S drive I've always had to access the information from the marketing department or the finance team or whoever it is, those little key points about the support for the legacy protocols, like Kerberos, and the Active Directory integration, and the way that we handle permissions on that, that's really important. That's the sort of key stuff that you don't get when you're using SharePoint Online or OneDrive for Business. 
I think there's also, and this is something that you and I have talked about at quite some length, that the conceptual schema we have around finding files is actually quite challenging. And even though it does seem quite archaic to talk about the mapped network share, I can describe to you, and I did, when I just said, you know, \\azure\orin and blah, blah, you instantly, intuitively understood where to find that file. One of the challenges with SharePoint, one of the challenges with a lot of these cloud syncing technologies, is how do I find something, or describe where something is to someone, without hunting through my email and sharing a long, complicated link? Where is the discoverability of those files? And one of the things I've always found very interesting is that even though we can come up with more sophisticated technical solutions to a problem, sometimes those technical solutions are far more conceptually intensive, and they actually don't work. A great example of this might even be IPv4 versus IPv6. People can just conceptually understand an IPv4 address. They can look at it and go, I kind of understand that, because it kind of looks like a phone number. And then they look at an IPv6 address and they go, well, it looks like a GUID, and there's no way that I can remember that. And even though IPv6 solves a whole lot of problems, the human brain's not able to deal with that. And I think that, you know, again, if we're talking about OneDrive for Business, we don't think about navigating a OneDrive for Business in a shared environment. It's very good if you're dealing with individual stuff, but if I've got to collaborate with people, it's like a library: a library is organized in a certain way, and we organize libraries using the Dewey Decimal System. And that means that you understand where things are in each library in the world. And we ultimately put books on shelves. 
And to a certain extent, the traditional file share is a shelf and books on a shelf. Whereas it's almost as though with the newer technologies, it becomes a bit more challenging to figure things out, and they're very reliant on search. And search is great if you know what you're searching for. And it's sort of like, again, that old joke about a dictionary: the definition of a dullard is someone who goes to the dictionary and only looks for the word that they're interested in. Because when you're in the dictionary, you actually open it up, and you're looking around, and you're suddenly looking at other entries that are in proximity there. And suddenly you might find something just by looking that you wouldn't have found otherwise. Anyway, Sonia, when you're deploying Azure Files, you can use a variety of different storage account types. Can you talk me through the options that we've got here? Yeah, and look, you'll come across these letters, these three-letter acronyms, when you're pricing up storage in the Azure pricing calculator, for example. And we also refer to these terms when we're talking about things like virtual machines. So they're terms that, if you get familiar with them, will be useful for other Azure services that you use. But it basically comes down to the redundancy of the storage: how many copies are stored of the information that we're putting into the cloud, and where those copies are kept. So with locally redundant storage, our data updates replicate across three copies within a single facility in a single region. So we've got protection against any one particular piece of server hardware failing. But if that single facility inside this region fails, then we don't have any other copies of that data anywhere else. Now, that might sound like it's a bad thing, but there are use cases when you would use this. 
If you need to make sure that that data is restricted to that one particular location for compliance reasons, you might use that. Or if it's data that you don't particularly care about losing: it might just be test or sample data that you can easily recreate. You might not need to spend extra on getting a more redundant storage option to protect that data, because it's not the kind of data that really needs protecting. So that's our first one. Other storage options, if we bring that screen back up again, great, include things like geo-redundant storage. Sorry, I missed zone-redundant storage. So with zone-redundant storage, we've got three copies in separate data centers, across separate availability zones, within a single Azure region. But if we have an entire region go down, then we don't have any redundancy to fail over to outside of that region. When we're talking geo-redundant, we are taking it to the next step. So we've got this data synchronization within different data centers within the same region, but then we're actually replicating this information to a secondary region. And Microsoft defines the pairing between the different regions to ensure that the data stays in the same geographical area. So I might put information in, and these are just made-up examples, I might put information in the Australia East region, and its pair might be the Australia Southeast region. So I'm automatically getting storage that is replicating from Australia East to Australia Southeast. If I lose the Australia East region, I've still got my automatic redundant copy in the Southeast region. But because that stays within the geographical boundary of Australia, my data isn't leaving my country. So if that is a requirement that you have for compliance reasons, that's an option. And the next thing that we've got is read-access geographically redundant storage. 
So with read-access geographically redundant storage, the information is going to synchronize across two regions. And remember, we've already got three copies per region, but the copies in the secondary region are readable. And when I first read this in the Learn module, I went, of course they'd have to be readable; if it's a second copy and it's not readable, how would it even work? But the subtlety here is that in an active replication mode with those other options, where we've got copies of data in other places, that data in essence stays dormant, and you're not able to access it until the primary area where you're storing your files has a failure, and then those other copies come to light. With this option, though, where those other copies are readable, what that means is that you can actually have an active, read-only version of your data that is being kept up to date from your primary site. And that could be useful for people in other parts of the world, or other locations, to access. You might point another application at that data if it only needs read-only access. It also means a faster time to recover. So if you have an application that is built to use this data store, and there's an issue with the data storage in the primary region, you may be able to give the business the functionality to use that application in read-only mode, because it's accessing that other copy of your storage until your primary area comes back online. So it's worth knowing the different capabilities that you have for storage redundancy, and knowing that the more complicated the capability, obviously, the price is going to be different. It's about picking and choosing the right level of redundancy of that data based on the use case for the information that you've got. 
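The redundancy options just walked through can be summarized as data. The copy counts here are the documented totals (three copies for LRS and ZRS; three in each region for the geo options); the chooser function is a deliberately naive sketch of our own, because, as noted above, real designs also weigh cost and data-residency rules:

```python
# Summary of the Azure Storage redundancy options discussed above.
REDUNDANCY = {
    "LRS":    {"copies": 3, "scope": "single datacenter",              "secondary_readable": False},
    "ZRS":    {"copies": 3, "scope": "availability zones, one region", "secondary_readable": False},
    "GRS":    {"copies": 6, "scope": "primary + paired region",        "secondary_readable": False},
    "RA-GRS": {"copies": 6, "scope": "primary + paired region",        "secondary_readable": True},
}

def pick_redundancy(survive_region_loss: bool, read_from_secondary: bool) -> str:
    """Naive chooser for illustration only; cost, compliance, and data
    residency all factor into the real decision."""
    if read_from_secondary:
        return "RA-GRS"
    if survive_region_loss:
        return "GRS"
    return "LRS"
```

For example, the read-only failover application scenario described above maps to `pick_redundancy(True, True)`, i.e. RA-GRS.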
Now, it's important to recognize, and obviously you will if you think about this for a moment, that there is a difference between the data being redundant and the data being backed up. So this will protect you against failures of the storage medium. I mean, this is sort of like RAID with data centers, if we're going to be particularly blunt about it. But you still need to make sure that you're backing this data up, because, let's say that you introduce corruption into a file and you don't pick it up; it could be that all the replicated copies of that file are corrupted too. And then when you go to open it, you're like, oh, this is no good. So then you need to go to your backup. So you will still need to maintain your backups. This is an availability thing rather than a data protection thing. And just remember, if someone came and deleted a file, deliberately or accidentally, redundancy isn't going to protect you against that either. That's something completely different. Okay, so we support two different types of storage tiers, premium and standard. Now, the easiest way to think about this is premium is solid state drives, or NVMe. They're only available in the FileStorage type of storage account. They provide really high performance and low latency, and they're only available with locally redundant storage. So when you choose premium, you're going to be limited in that sort of availability option. Whereas standard is, we're going to say, spinning rust, or traditional magnetic media. I prefer spinning rust, because it makes it sound a lot cooler. Have you ever called it that? That's cool. Yeah, well, you know, it is what it is, right? And then you can go and use that for everything. Now, why would you want to use premium? Well, here's an example. 
If you've got virtual machines that are all running in the cloud, and, let's say, you want to go and put a database on a disk in the cloud, or you want to put a database on an Azure file share in the cloud and use that as shared storage in a clustered environment or something like that, you want premium, because you don't want your latency, your bottleneck, to actually be the storage in the cloud. Now, for the way that most of us use file servers, it won't be. If you're connecting to it from, let's say, Melbourne to Brisbane, or from Brisbane to New York, you're not going to be worried about premium storage, because your bottleneck's not going to be the write speed. The bottleneck's going to be the connection from A to B. So keep that in mind: whether you choose premium or standard is really going to depend on how close the workload is to where the storage is. If they're basically in the same data center, and that's pretty much only going to be where you've got VMs that need to access this file share, and that's actually a really good use of Azure file shares for VMs running in Azure, then premium is absolutely something that you should think about. But other than that, don't be like my son, who sees the word premium and thinks, I need that, because I need premium everything. You don't; you'll be completely fine with the standard, you know, red spot special option. Okay, so let's talk about, Sonia, some common uses of Azure Files. Absolutely. So the first thing that we talk about here is replacing or supplementing on-premises file servers. Where we have servers that are aging, ones that need replacing, Azure Files can replace them, or give us extra capabilities and help boost the different storage areas that we've got. And that includes things like some of the network-attached storage devices that we might have. A lot of those are sort of getting on in years in terms of technology. 
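Pulling the premium-versus-standard discussion together as a rule of thumb: premium only pays off when a latency-sensitive workload sits next to the share, because over a WAN the network, not the disk, is the bottleneck. The function below is our own illustrative sketch of that reasoning, not official sizing guidance:

```python
# Toy tier chooser echoing the discussion above: premium (SSD/NVMe,
# FileStorage accounts) for latency-sensitive workloads colocated with the
# share; standard (the "red spot special") for everything else.
def suggest_tier(colocated_with_workload: bool, latency_sensitive: bool) -> str:
    if colocated_with_workload and latency_sensitive:
        return "premium"
    return "standard"
```

So a clustered database on VMs in the same region as the share gets premium, while a branch-office mapped drive over the internet gets standard.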
And NAS devices can be, you know, particularly expensive to replace, considering how much this technology has improved these days. So you might look at using Azure Files for that kind of area, to replace those workloads. That's a good start. We've also got lift and shift. So we talk about using it as a temporary place to put files for applications that expect a file share to be there. So maybe we're doing that as a short-term thing until that application gets refactored. Backups and disaster recovery: it's interesting to think about using Azure file shares and storage for our backups, or in a disaster recovery scenario. So you can actually use Azure Files for storing copies of your backup files, for example. And then Azure File Sync, which we get into in a little bit more detail in this module, which is really cool: you're replicating files from Windows Server into Azure Files, caching data at the place where it's used. And Azure File Sync, I'm gonna let you take over and talk about this next part, because I think this is one of the shining lights of its capabilities. It's easy to imagine Azure Files being extra file storage in the cloud that is more compatible with some of my older applications, but it really shines when we get to talk about Azure File Sync, doesn't it, Orin? Absolutely. So Azure File Sync, rather than being a replacement technology, where you're taking your existing file server and replacing it with a new file server that just happens to be running in the cloud, what happens with Azure File Sync is it transparently plugs in to what you've already got. So if everybody's used to going to \\arts\philosophy\philosophy101 to go and find things, that's because there's a philosophy file share that exists on that file server. And one of the challenges with any sort of migration of technology is, how do I get people to actually use the new thing rather than the old thing? 
Well, it's even better if you can do like you see in certain areas of town, or in your city, that have got old buildings, where what they do is they keep the old front end of the building, and it looks like the classic 1870s front of the building, and then they completely modernize the back end. And that's, at a very high level, one of the things that Azure File Sync does. Because what it does is, instead of all your files being kept on the file server, you sort of plug Azure in at the file server level, so that the front end still looks like a file server, but the back end is basically then taking those files and replicating them up to the cloud, and then replicating them to other endpoints. But importantly, from the end user's perspective, nothing has changed. They're still accessing files in exactly the same way. So as we go through this module, we'll be talking about this a lot more, but this is my favorite sort of hybrid technology, because you're not asking a user to change their conceptual schema about how they're going to interact with something. You're not saying, we'll have to go through and retrain. What you're basically doing is saying, use the same API, use the same way of doing it as you were doing it before, and what we're doing is we're just making it more efficient on the back end, and you're not worrying about it, you're not seeing it, we're just making it better for you. And I like the fact that this also applies to Windows servers that might themselves be VMs in the cloud. I mean, we talk about disk being finite in an on-premises server, and Azure File Sync is certainly helpful here, but when you configure a virtual machine in Azure running Windows Server, you specify how much disk you're gonna allocate to that VM as well. So from that perspective, from sizing your virtual machine, your disk is also a finite resource, unless you go through the hassle of resizing it to be bigger. 
So again, Azure File Sync, with the things that you were talking about, and we'll get into the way that it manages files and disk sizes, you can use it for VMs that you've got in the cloud as well. Now, another part that's really important to understand with this is the idea that these are complementary rather than exclusionary technologies. That is, you can have Azure Files and Azure File Sync working together. So you can literally have your on-prem file servers as front ends for an Azure file share, but you can also have an Azure file share directly accessible to clients that wanna connect to it directly over the internet. So it might be that in big branch offices, you've got an on-prem file server, and everybody who's working in that office is interacting directly with that file server. But if you've got someone remote, you don't go and put a file server in for them. You just point them at the Azure file share directly, and they're all accessing the same data; they're just hitting different tentacles of the octopus. So there are three different authentication methods that you can configure for Azure Files itself. Sonia, do you wanna walk through this? Sure. So at the start, we've got identity-based authentication over SMB. So it's the same sign-on experience as you would get when you were signing in and accessing files on an on-premises file share. The cool thing about it is it does support that traditional Kerberos authentication, and the user identities are either in Azure Active Directory, or they're in your traditional Active Directory Domain Services domain. So that's just your standard username and password that you might be used to. The next thing is an access key. Access keys have been around a long time, and a storage account in Azure actually does have two access keys that can be used when making these requests to the storage account. But the challenge with those is that they provide full access to the Azure files. 
And so we don't have any way of producing an access key that has a lower level of access, one that only has access to some files and not others, or read-only access to these ones and full control over those ones. The level of control an access key has is full access to everything that's sitting in that storage account, so you've got to be careful where you use those. The other option is a shared access signature. This is a dynamically generated URI, and a URI is a Uniform Resource Identifier; it's kind of like a URL for a website, or a GUID, a globally unique ID. This Uniform Resource Identifier is based on your storage account access key and gives us access to Azure files, but it does mean that we can put some restrictions on these shared access signatures. So we can set which permissions are allowed, we can put start times and expiry times on when it works, we can restrict which IP addresses are allowed to access it using the SAS, and which protocols are allowed as well. So again, probably something more that your developers are going to use, but worth knowing that it is a good way for applications to get a more granular level of access to data, including controlling where that data is being accessed from. And the reason that all of these different options are provided is that there are a lot of applications around that have been around for a long time, and there are a lot of creative ways that developers have had to write and store data. Some go and put it in a database; some just go and drop data on a file share. And they might have used an Active Directory account to do it, or they might have used another method. So what's important is being able to do it the new way as well as the old way, just so that you're not ending up with this blocker where you can't use a technology because the new technology only supports the new way of doing things.
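Those SAS restrictions Sonia described, a permission list, a validity window, and allowed IP addresses, can be sketched as a little model. This is purely illustrative Python, not the Azure SDK: the real token is generated from your storage account key and validated service-side, and the field names and the `sas_allows` function here are our own simplified stand-ins.

```python
from datetime import datetime, timezone

# Illustrative model only: real SAS tokens are generated by Azure
# (for example with the Azure SDK) and checked by the service itself.
def sas_allows(sas, operation, client_ip, now):
    """Return True if this simplified SAS would permit the request."""
    if not (sas["start"] <= now <= sas["expiry"]):        # validity window
        return False
    if sas["allowed_ips"] and client_ip not in sas["allowed_ips"]:
        return False                                      # IP restriction
    return operation in sas["permissions"]                # e.g. read/write

sas = {
    "start": datetime(2021, 1, 1, tzinfo=timezone.utc),
    "expiry": datetime(2021, 6, 30, tzinfo=timezone.utc),
    "allowed_ips": {"203.0.113.10"},
    "permissions": {"read"},
}

now = datetime(2021, 3, 1, tzinfo=timezone.utc)
print(sas_allows(sas, "read", "203.0.113.10", now))   # inside window, allowed IP, read permitted
print(sas_allows(sas, "write", "203.0.113.10", now))  # write was never granted
print(sas_allows(sas, "read", "198.51.100.7", now))   # IP not in the allow list
```

The point of the sketch is just that every request has to clear all three gates, which is exactly the granularity an access key can't give you.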
Because ultimately, we talk about technical debt, and I'm not sure that's necessarily the right way to think about it, but we've got applications that are going to cost too much to rewrite, that we are going to keep in production for a long time, and that do things a certain way. But this allows us to, as I said, put on a new back end while keeping the original front end. The original front end looks like a building from 1870; the back end looks like a building from 2020. So that's really where a lot of these technologies go. It's about integrating in a way that's compatible with the existing technology rather than requiring you to pick something up and do it a completely new way. Because you know that if you completely have to do it a new way in 2021, guess what you're going to have to do in 2026? You're going to have to do it the new new way. And then in 2030, you're going to have to do it the new new new way. So we're reaching a level of maturity where we're understanding, oh, actually, you know what? We can use the old way as sort of an API onto the new way. And it's a bit messy, but it works. Okay, so in terms of identity-based authentication: you can use identity-based authentication on Azure storage accounts. However, before you do this, you must first set up a domain environment. Well, one would hope that you already had a domain environment. Sonia and I actually talked a little bit about this in our last Learn Live module, which was eminently entertaining, and if you haven't watched it, I suggest you go back and watch that one. Sonia, do you want to talk a bit more about this? Oh, the previous module, that was the hybrid identity one, yeah, that was a fun session. Look, like you said, you need a domain, because the domain in essence is where the identities are stored that are going to get access to those Azure files.
So you might already have it set up on-prem with Active Directory, or you might be using Azure Active Directory. And it's not really much more complicated than that, except it is an or scenario, not an and scenario. So we're looking at having one of these identity providers being the way that those Azure file shares are accessed and the way that the credentials are handled. When a user tries to access the data in Azure Files, the request is sent through to the identity provider, whether that's Active Directory Domain Services or Azure Active Directory Domain Services; they do the authentication and prove that it is a valid user, and a Kerberos token is returned, and the client then sends the request, including that token, to the Azure file share to say, yes, Azure file share, we are good to go. So the only difference is which of those identity providers gives the token that is then handed over to the Azure file share, which is like your golden ticket to say, yes, you can go ahead and open those files. Don't say golden ticket with Kerberos, you'll give people conniptions. Okay, so in terms of configuring your Azure file share permissions: if you've got identity-based authentication, you can use role-based access control within Azure to control access to Azure file shares. You've got the Storage File Data SMB Share Contributor role, and users in this role have read, write, and delete access to file shares over SMB. Then there's the Storage File Data SMB Share Elevated Contributor. Now, that doesn't mean that they're walking around in platform shoes. What it does mean is that these users have read, write, delete, and modify-NTFS-permissions access in storage file shares over SMB. So they've got full control permissions over the Azure file share. And then you've got the Storage File Data SMB Share Reader, and that means they've got read access only.
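The three built-in roles just listed can be summarized in a small lookup table. The role names are the real Azure ones; the permission keywords are our own shorthand for illustration, not Azure's actual action strings.

```python
# Shorthand summary of the three built-in Azure Files SMB roles
# discussed above. "modify-ntfs-permissions" is our own label for
# the extra capability the Elevated Contributor role adds.
ROLES = {
    "Storage File Data SMB Share Reader": {"read"},
    "Storage File Data SMB Share Contributor": {"read", "write", "delete"},
    "Storage File Data SMB Share Elevated Contributor": {
        "read", "write", "delete", "modify-ntfs-permissions",
    },
}

def can(role, action):
    """Check whether a built-in role includes a given (shorthand) action."""
    return action in ROLES[role]

print(can("Storage File Data SMB Share Contributor", "modify-ntfs-permissions"))
print(can("Storage File Data SMB Share Elevated Contributor", "modify-ntfs-permissions"))
print(can("Storage File Data SMB Share Reader", "write"))
```

Notice the Elevated Contributor is a strict superset of the Contributor: the only addition is changing NTFS permissions on files, not anything at the Azure resource level.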
You can also go and create custom roles, but as is always the recommendation, really figure out whether what you need to do can be done with the built-in roles before you start spinning up your own custom roles, because all you'll do is create confusion for the next person who comes along and has no idea what your custom roles are. So Elevated Contributor is an interesting role name that we don't often see in Azure. And it's because, when you think about a true contributor role in Azure, it's normally someone who has access to modify, delete, whatever, with a resource, but they don't have permission to change the permissions on that resource. They can't go along and grant somebody else contributor access to the resource, because they can't control the security at that level. But because we're talking about files here, and the files actually have permissions on them, those NTFS permissions at a file level controlling whether or not you've got access to read, write, or delete a particular file, this Elevated Contributor role means you have the authority to maintain and change those file permissions. But that doesn't give you owner level from a proper, pure RBAC perspective; you're not able to then go and add somebody else as an owner of the storage account, for example. So it's giving you access to permissions that relate to the file, not the ability to change permissions related to the Azure resource, which is the storage account. And it's kind of funny with NTFS permissions, because I've always felt that share permissions and NTFS permissions are something we've been learning about for 25 years and still don't entirely understand. Sort of a bit like IPv6, in that it is something that's a bit of a challenge.
I'm much more in favor, when you're actually dealing with files and folders, of using some sort of identity, sort of, not identity protection, that's the wrong technology, information protection solution, where you're actually setting the permissions within the file itself. Because at least then, if someone's got access to it, they've still got to authenticate inside the file to actually get at anything, and we've all been in the situation where we've been surfing a file share and got access to something we shouldn't have, because no one had any idea how to configure the permissions. But speaking of things that you shouldn't have access to: all of the data that's stored in an Azure storage account, which includes the data on Azure file shares, is encrypted at rest. When we're talking about encrypted at rest, it means that when it's sitting on the hard disk, it's actually encrypted on the hard disk, so that if someone Mission Impossibles their way into an Azure data center, like Thomas Maurer Mission Impossibles his way into that fridge and steals the cakes, basically if they pull the hard drive, they're not going to be able to plug it into something else and access the data, because it'll all be encrypted. Now, by default, all Azure storage accounts also have encryption in transit enabled. This means that when data is transferred from the drive to wherever you are, it's also going to be encrypted. So it's not like, and I remember the first time I saw someone do this, a packet capture where they're copying something off a file server, and we were watching the contents of the file go over the wire and going, wow, that's insecure. I mean, granted it was 1999. But this means that all you're going to see, if you're looking at it going across the wire, and in this case we're talking about it going across the entire internet, is encrypted traffic.
Even if you're capturing every one of those packets, you're not going to be able to read them, because they're all encrypted and you won't have the keys. So, creating Azure file shares: fairly straightforward. In the Azure portal, you select the appropriate storage account, and in the navigation pane you select file shares. You create a file share in the details pane; on the toolbar, add the file share, and in the new file share blade, enter the desired name and quota values and then select create. You'll see us do some of this in the demonstration when we set up Azure File Sync. Okay, now in terms of configuring connectivity to Azure Files: Azure Storage, which includes Azure Files, provides a layered security model. Do you want to talk about this, Sonia? Yeah, so if you bring that screen up again, it's basically about the way that we configure these networks, these files, and these virtual networks, and where the request is coming from. Our storage account firewall is going to, by default, allow access from all networks, but we might want to narrow that down to a very granular level: specific IP addresses, ranges of IP addresses, or even only particular subnets in our Azure virtual network. And that really is the key of it: configuring these networks. So in addition to the default public endpoint it's talking about here, Azure Files gives you the option to have one or more private endpoints. Your private endpoints are only accessible within your Azure virtual network. That's really good if you've got data that you want to put somewhere, and you want your Azure virtual machines, or any other of your Azure services, to access that data, and the only way that people can really get access to that data is through that application or through that other server. So you don't want anybody else to be able to come directly into that Azure file share.
You only want the resources that are within that Azure virtual network to be able to access that data privately, and that's how people will come in and get to it. So it's really interesting seeing how we can segment off these different network segments to restrict how our Azure files are accessed. And if you want to think about it at an even more complex level, if you're sitting there thinking, okay, this is great, you've told me how I can restrict access to it within Azure, but what if I'm out on the internet, or what if I'm in my on-prem location? Well, you can have VPNs into private networks, you can have ExpressRoute into private networks, and then you can have ExpressRoute access to private endpoints. So there are a whole lot of different ways that you can really restrict access to this, just as you would traditionally restrict access by having a file share that was only accessible via a VPN; there are file shares that we can access at Microsoft that we can only access over the VPN. You can do exactly the same thing with a file share hosted in Azure. And that kind of comes from people asking whether the cloud is secure. There are fewer of them now, but there were a lot of organizations we'd go into asking, how can I trust my files in the cloud, who has access to them? And a lot of the answer to that question is, it's as secure as you configure it, right? Because if you're going to configure it so that access is allowed from anywhere, then guess what? Access is going to be allowed from anywhere. So you've got to understand what the options are and how to configure this stuff to meet your security requirements before you can kind of answer the question of whether it's secure or not. It's as secure as you configure it. And it's like the great philosopher Shrek said: it's like an onion, right? It's got layers, donkey.
And your layers are that even if you've got it open to everybody and their dog in the world, you can mediate access based on identity, but then you can also turn on network location as another layer of that security. So again, the reality is that people sit there and worry a lot about security, but then they don't put any security on anything at all. The existential threat of security is often a lot different from the practical steps that are taken to implement security. Okay, so, connecting to an Azure file share. To use an Azure file share with Windows, you must either mount it, which means assigning it a drive letter or a mount point path, or access it through its UNC path. The UNC path includes your storage account name and the domain suffix, so here we're seeing \\storage1.file.core.windows.net\share1. If identity-based authentication is enabled for the storage account and you're connecting to a file share from a domain-joined Windows device, you don't need to manually provide any credentials. Now, one of the things that's really cool about setting this up, and I know when I use Azure file shares to share stuff with different members of my team, when we need videos or something like that, sometimes it's easy to just throw up an Azure file share and go, drop it over on that. And everybody can use it, except for Pierre, who for some reason has some port block in Pierre land that doesn't allow Pierre to connect to stuff. But you click connect, and what you get is the ability to run a PowerShell script, and all the PowerShell script does is map a network drive. So you can do it that way, or you can do it through the map network drive dialog box. And it's not just for Windows; you can see here that we've got the option of mapping from Linux and macOS as well. And whichever of these you select, you'll get the appropriate script that can be run.
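The mapping that connect script performs is really just string assembly around that UNC pattern. Here's a small, hypothetical helper, our own illustration, not anything the portal generates, that builds the `\\<account>.file.core.windows.net\<share>` path and the equivalent Windows `net use` command line described in this section.

```python
# Hypothetical helpers: build the UNC path for an Azure file share
# and the equivalent "net use" command line. The account and share
# names below match the example in the text; the function names are
# our own.
def unc_path(account: str, share: str) -> str:
    return rf"\\{account}.file.core.windows.net\{share}"

def net_use_command(drive: str, account: str, share: str) -> str:
    # /persistent:yes asks Windows to reconnect the drive at sign-in
    return f"net use {drive}: {unc_path(account, share)} /persistent:yes"

print(unc_path("storage1", "share1"))
print(net_use_command("Z", "storage1", "share1"))
```

Running that prints `\\storage1.file.core.windows.net\share1` and the matching `net use Z: ...` command; on a real domain-joined machine with identity-based auth enabled, that command (over TCP 445) is all the "old school" mapping takes.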
So you might say, okay, we've got this Azure file share, I'm going to put this in my logon script or my startup script, and it'll automatically reconnect. Usually it will go over TCP port 445, assuming that's available, and it will perform a check. So can I do that with net use? Because I don't know how many times as an IT pro I've typed net use, star, space, slash slash, you know, server name, to map a network drive. Is this still going to work if I run my net use command at my command prompt? Yes, it is. There is a net use version of this. So you can go as old school as you want; you've just got to make sure that you type the right stuff in, but it does work. So look, it's an easy way of doing it. And again, one of the ways I've used this: sometimes, instead of setting up a SharePoint site, especially when you're working with a group of people outside the organization, it's just, here's your file share, run this script, go and drop it over there, thanks. And I know there are a million different third-party sites and things like that, but sometimes you're just like, okay, I don't want to go and teach people to use the file sharing solution of the day, because you don't know where the files are and everything like that. So this is going to provide you with at least one option. Okay, so next: tell me about Azure file share snapshots, Sonia. Oh, so if you've got a Windows Server background, you might be used to shadow copies of a volume, basically the state of the files on that disk at a point in time. And that's what's used for the previous versions feature that we talked about, because previous versions has been around on on-prem Windows Server for a number of years.
So Azure file share snapshots are pretty much the equivalent of that: a file share snapshot is a point-in-time, read-only copy of the data in the Azure file share. You can go and create these, and you can then restore individual files. It's basically like a snapshot backup. A couple of little things worth noting: you can only have up to 200 of these per share, but that's actually quite a lot. And it is really good because, you know, standard case: if I was going to make a significant bunch of changes, move some stuff, install a Windows update, whatever, IT pro 101 is take a backup first, so that if anything does go wonky, I can always get it back to the state I had it in. But the cool part about those snapshots is that I don't have to restore the entire file share. If I want to go back to that state, I can actually just go into that snapshot and get the particular files back that I need. And to go back to our great philosopher Shrek, there are layers and layers of data protection. One of these layers is that you first go and see if there's a snapshot you can restore from, because that's quick. Then, if that doesn't work, that's when you go to the backups, and you've got Azure Backup and you can go and recover there, and that will take a lot more time. But the snapshot is your first port of call. And what you want to do is have multiple layers, donkey, of protection that allow you to get back to where you were if something goes completely, as the British would say, pear-shaped. Okay, let's get into some Azure File Sync. So Azure File Sync allows you, as I said, to cache Azure file shares on an on-prem Windows Server file share. Now, the way that's phrased, it sounds like you start with your file share and you're going to cache it on-prem.
But the way that you can actually set it up, and you'll see this when we do it, is you point at an existing location on a disk which hosts a file share, and it'll start replicating that content up to the cloud. So it'll populate either way. You can do it where you say, hey, I've got this Azure file share, it's got all of this cool stuff in it, I want an endpoint on-prem. Or you can start from: I've got this endpoint on-prem, I want this to become an Azure file share, I'm going to plug it in, and then it's going to replicate all the stuff from, you know, the facade at the front, and the rest of the building is going to replicate up to the cloud, and that way I'm populating the Azure file share. So don't think that the first thing you need to do is take a backup of everything on-prem and then restore it in Azure. No, you do not. You can basically just turn this on. So in terms of the terminology: you've got a storage account and the storage sync service, and the storage sync service is responsible for taking what's in a sync group and managing the replication of it. So here we can see we've got D:\Accounting, and that is replicating up through the storage sync service to an Azure file share, or D:\Sales in the Sales sync group, which is a different thing. Now, you can add servers to a sync group, and when you add servers to a sync group, they basically have a copy, or a replica, of that endpoint. So here's a little bit of a description of the various different parts, but the important things to understand: the sync service is a resource for file sync. You create a sync service, and then within the sync service you create sync groups. A sync group is for a set of files. So, an endpoint within a sync group, think of it this way. You go, right, an endpoint.
So a file share on a local disk is just something like E:\Accounting, and you go and create that and share it. Then when you're creating a sync group, you've got all of the files that exist under that share, and you say, right, I want to replicate E:\Accounting up, using the storage sync service, to an Azure file share, but then I also want a replica of E:\Accounting on this server over here. So I add another registered server, create an endpoint on that server, and then it'll replicate from E:\Accounting here to wherever I point it. It doesn't have to be E:\Accounting over there, although I suggest for sanity's sake that you do keep the same path on the different server endpoints. And if you're sitting there going, but I don't have enough space for it: remember, we're going to get to cloud tiering, and cloud tiering allows you to minimize the amount of space that's used. The files in case... That's really cool though, because it means that I can put the files that are used by the people who are using them in those locations, and not every server in my on-premises branch network needs to have a copy of that accounting share if it doesn't have accounting people in those local offices. No, I mean, that's part of what's really cool about this: you can have all of these endpoints and then go, okay, I want to have a copy of this endpoint here, this one here, and this one here. But the other thing it does, and we'll talk about this when we get to DFS, is that there were traditionally two reasons for having multiple shares.
One of them is obviously permissions, and the other one was often space. You would have issues with space, so you would put the philosophy department on this volume, the history department on this volume, and the English department on that volume, so that if the English department overflowed their volume and no one could write to it, it wasn't taking out the philosophy and history departments. Well, with all of this, you could probably use something like DFS, and ultimately DFS Namespaces is probably the best way to think about it: point everyone at one namespace, where the replication and tiering and everything else is handled in the background, and everybody's just navigating to the same location. But again, you figure out what works for your organization. A server endpoint is a specific location on a registered Windows server, such as a folder or a volume. You can add multiple server endpoints to the same Windows server computer, but they must be in different sync groups. So one sync group might be for accounting, one might be for English, one might be for philosophy, but they can all be associated with server endpoints on the same server. And the cloud endpoint is the back end, and the back end is your Azure file share. So that's the thing that's got the locally redundant storage, or whichever of the other options, where you've turned on snapshots, and all of that cool stuff is happening up there. You might not have anybody directly accessing it, or they might be accessing it all through the server endpoints, but what you can do is have that backed up and redundant and everything. And if someone goes into the server room and has a bit of an Office Space moment with a baseball bat on one of these on-prem endpoints, well, you can go and replace the server, and when you replace the server you just create a new endpoint and it'll all replicate down. Okay, so do you want to go through some of the benefits of Azure File Sync, Sonia?
Yeah, look, I think we covered those pretty well. We talked about multi-site sync in terms of where you're replicating those files to. Cloud tiering is kind of my favorite, though. Cloud tiering is where you can have the full file in the cloud, and it's actually going to save disk space on your on-prem servers. The way it works is that when your server endpoint is starting to run out of disk space, you can define the percentage of free space that always has to be available on that volume, and then your older files will start to not be fully present on the server themselves; they're still fully in the cloud. But as you mentioned with that analogy about the building facade, your end users are still going to see what looks to them like the file on the on-prem server, while in the background the cloud tiering service has sucked the meat out of that file and left it up in the cloud until someone requests it. It does that as an automatic background process to maintain an amount of available free space locally, without the IT person having to do anything, and it's all seamless to the end user. If they go and request that file, it will rehydrate from the cloud, so it will come back down from Azure Files in the storage account to that on-prem server, and they'll be able to open it just like normal. So that's one of my favorite things for saving disk space on-prem. And then cloud backup is a scenario: using the Azure File Sync agent to make sure that all of your server endpoints locally are synchronizing files up into Azure as a backup. Along with those 200 Azure file share snapshots, there's Azure Backup to do your scheduled daily backups, especially for compliance reasons if you need to keep daily, weekly, and monthly backups, and that's still a very valid requirement in a lot of industries for regulation and compliance. And then there's disaster recovery, as you mentioned.
So if you do have an issue with an on-prem server, it's really easy to provision a new on-prem piece of hardware and then rehydrate it by copying back down replicas of all the files that are in the file share. So you've got options there. We often have conversations, you and I, about how cloud can make on-prem better, and some of these options aren't just do it in the cloud instead; they're do it in the cloud as well, because of all of the extra functionality it's going to give you for your on-prem environments. Yeah, look, I've said to you, hybrid is really about as much or as little cloud as you need for your organization's needs, but hybrid, for traditional on-prem administrators, is really about how do I extend, how do I make what I've got better, not how do I replace it with something new. And this technology is really how do I make file servers better, how do I make them much more effective, rather than how do I replace them. You can replace them with Azure Files, but you don't necessarily want to retrain all of the users to do it that way. And if everybody's sitting in an office all the time, obviously we're not doing much of that at the moment, but in five years' time, if you're shifting huge video files around to a file server that would otherwise be chock-a-block with files every six weeks or so, and you can have that automatically sync up to the cloud and not have to worry about it, that's making your on-prem file servers better.
One of the other things, when I've been talking to people about this, talking to our old skip-level manager, Donovan Brown: Donovan was sort of like, so this is kind of like OneDrive? And I said, well, it's like OneDrive, except with OneDrive you've got to manually decide whether you're keeping a file on your computer or shifting it to the cloud. You can't go into OneDrive and say, automatically move anything off my computer that I haven't touched for 90 days, or automatically move the oldest things off my computer if I suddenly hit only 30% free disk space. And when he understood that, he was like, well, why don't we actually have that in OneDrive? And I said, I don't know, it probably makes it a bit too complicated to do all that processing. But that's one of the benefits of this particular technology. Okay, so what we're going to do here is get into implementing Azure File Sync. At a very high level: you deploy the storage sync service, you go to each endpoint, each Windows server in this case, and install the Azure File Sync agent. Once you install the agent, you register the server with the storage sync service. You then create a sync group, and once you've got a sync group, you go and add server endpoints. You're going to see us do this in the demo, but you can see here that the Learn module gives you a lot of description; obviously we've got all of this listed in Microsoft Docs, and you can also do it using Windows Admin Center. But rather than bore you with our beautiful voices reading through all of these particular items, what I thought we'd actually do is show you a video-based demo. And that's gone, oh snap, something's gone wrong, so let's just click reload there. But that's fine, because I've actually got this already pre-done. Okay, so here I am on a very nicely prepared Windows Server endpoint, wait a sec.
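Before the demo, the sync service, sync group, cloud endpoint, server endpoint hierarchy described in the last couple of sections can be sketched as a tiny object model. This is purely our own illustration (invented class names, not the Azure SDK); it encodes the rules that a sync group has one cloud endpoint, and that a given path on a registered server can only participate in one sync group.

```python
# Illustrative model of the Azure File Sync topology rules discussed
# above. Class and method names are our own, not the real API.
class SyncGroup:
    def __init__(self, name, cloud_share):
        self.name = name
        self.cloud_endpoint = cloud_share   # the backing Azure file share
        self.server_endpoints = []          # (server, path) replicas

class StorageSyncService:
    def __init__(self):
        self.groups = {}

    def create_sync_group(self, name, cloud_share):
        self.groups[name] = SyncGroup(name, cloud_share)

    def add_server_endpoint(self, group_name, server, path):
        # A server path may only sync in one group at a time.
        for g in self.groups.values():
            if (server, path) in g.server_endpoints:
                raise ValueError(f"{server} {path} already syncs in {g.name}")
        self.groups[group_name].server_endpoints.append((server, path))

svc = StorageSyncService()
svc.create_sync_group("Accounting", "cadfileshares")
svc.add_server_endpoint("Accounting", "FS01", r"D:\Accounting")
svc.add_server_endpoint("Accounting", "FS02", r"D:\Accounting")  # replica on a second server
print(svc.groups["Accounting"].server_endpoints)
```

Adding the same server and path to a second sync group raises an error in this sketch, mirroring the constraint that multiple endpoints on one server must belong to different sync groups.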
And what we're going to do here is go into the Azure portal and create a storage account. So we're starting this from absolutely tier zero. We're using the LearnFileSync resource group, that's the resource group we're going to put it in, we've got our subscription, and we're giving the storage account a name; I'm calling it extended CAD files. Now, in terms of the location: Sonia, when I'm creating a storage account as the back end for Azure file shares, where do you reckon we should put it? Well, you're setting this up to be accessed, so you don't want to be putting it halfway around the world. I'd go with a couple of things: locations that are close to where the workloads are that are going to be accessing them, whether it's a server accessing those files or your end users, so put it in a region that's close to them; and also consider your compliance requirements. If you need to make sure that you're keeping stuff in country, you're obviously going to want to choose one of the regions in your country, if that's available. Yeah, absolutely, and with this one, it's not going to be directly accessed, we're not going to have clients accessing it, but it is going to be replicating with servers. So it doesn't make any sense, if the file servers we're replicating with are sitting in Wagga Wagga, to have it located in the United States. But let's pretend that we're all Americans and that we're actually going to do it in America. Now, for performance, there's no need, as I said, for us to go to premium; we don't need to change it from general-purpose v2, and read-access geo-redundant storage is fine. I'm just accepting the defaults in this particular instance.
So I click review and create; it reviews and says, yep Oren, I've got no problem with that, I've done the validation, I'm going to go and create the account. It submits the deployment, and through the magic of television that deployment occurs almost instantaneously, so we have a storage account. So let's go into the storage account and create a file share. We add a file share in the storage account, we get the new file share dialog box, and we're calling it CAD file shares. Now, here I've set the quota for the entire file share to one gigabyte. I could go much larger; this is only for the purposes of this demo, and obviously most of you need a lot more than a gig sitting there on the back end, and you'd make this as big as is reasonably possible. But the amount you allocate depends on the type of storage you're using: if you're using premium storage and you allocate a lot, you're going to be billed on the allocation, not on how much of that allocation you're actually using. So here, this is just an example, one gigabyte, and it goes and creates the file share. Great. Now, once we've done that, I'm going back to create a new resource. I type in Azure File Sync and select it, and it allows me to create an Azure File Sync service. So, creating a new service in Azure, I'm going to put it in the same resource group, and I give it a sync service name; I'm calling it the CAD sync service. I'm putting it in the same location, so it's in the same resource group and the same location as that file share. I go off and create that, and that creates the service that's going to interact with all of this. Through the magic of television, that's all done. So the next thing we're doing is, we've got that sync service and we're going to create a sync group. Now, this sync group is going to be replicating one particular share amongst a bunch of servers, so we go through and give the sync group a name, we then ask for which storage
account we want to associate it with, bang, that one; and then it queries that storage account and says, well, which file share on that account do you want to use? Bang, the one we created. We click create and it goes and creates the sync group. So now what we've got is: here's your storage account, we've got a file share, we've got a sync service, and we've got a sync group. So the next thing we have to do is the second part of this demo, which is unit 8, Deploy Azure File Sync, part two. So surprise, surprise, here we are in part two, and what we're doing here is that we're now connected to a file server. All we're doing is installing the agent, which we can download from the Azure console. When we're installing the agent, it asks us, okay, can this file server talk directly to Azure, or does it require a proxy? Why is it a good idea to use a proxy for a file server? A proxy is a good way of isolating that server from any other nasties on the internet. It means that the server doesn't have any public-facing way of accessing the internet without going through this intermediary, and we find that quite often with on-premises environments where you've got servers that you want to keep isolated to internal workloads only, especially depending on the kind of data that you've got. So this way, those kinds of servers can still access Azure Files and take advantage of Azure File Sync even if they don't have access to the rest of the internet unless they're going through this proxy server mechanism. Yep, so as Sonia said, one of the best ways of securing your servers is to make sure that they cannot directly talk to the internet; they've always got to go through a proxy. This means that if an attacker gets persistence on one, they're going to find it a little more challenging to exfiltrate data. Okay, so the next thing it asks is how you want to update this agent, and you can use Microsoft Update, and how often you want to check for
updates; we can say, okay, look at it once a week. We click install and it installs the agent. Once the agent is installed, we get the wizard to configure it. Do I want to allow this app to make changes? Yep. It goes and checks for updates and says, okay, you're up to date; what I need you to do now is connect this to that thing you created in Azure. So here I'm signing into the Azure public cloud; I sign in with one of my accounts, in this case PrimeAdmin at tarwintraders.net. It will then query for the storage sync service: does this exist? Okay, within this subscription and this resource group, here's a storage sync service you can use. So I register that server with the storage sync service, and that means that if I click on CAD files, I've got the option to add a server endpoint. That Azure file share is my cloud endpoint, and now that the server is registered with the sync service, I'm saying go and add the following server, and I can use this dropdown to do it, which is TWTML FS1. So I select that, and then it says, okay, for this particular share that you want to replicate, what's the local path on that server that you want to sync up to this Azure file share you've created? In this case, let's pretend I've got the CAD folder on volume E, and the CAD folder is absolutely full of data and has been functioning as the on-prem file share. So I scroll down, click create, and it goes and creates that endpoint. I wait for the endpoint to finish creating, and then I can configure the properties, so I can see if there are any files that will not sync, and I can enable cloud tiering. I've also got the option of offline data transfer: let's say I had a zettabyte share; I don't necessarily want to transfer a zettabyte over my internet connection, so I'd go and get the special Azure Data Box and shift it that way, and then
I could import the data there into this Azure file share. But in this case I'm not worried about that, so I can turn on cloud tiering, and here we've got the option of how much space we want to preserve on the volume, and we'll go with the default of 20 percent. Then we can set the date policy: what we'll say is that if a file has not been accessed in 30 days, tier it up to the cloud; that is, only keep a placeholder locally and keep the full copy on the Azure file share. Whichever one of these policies gets hit first, the tiering will happen. So it could be that someone dumps a really, really, really big file there and we suddenly have less than 20% free space left, and even though there's no file that's 30 days old, every file might be at most 10 days old, it will pick the oldest files and tier those up. Or it might be that we have 50% free, but a file gets to 30 days old, and at 30 days it's automatically tiered up to the cloud. If someone then tries to access that file, it'll be synced back down; that recall will occur, and the access counts as an access too, so it's not going to ping-pong back and forth like, I want to bring that file down, no, it's old, I'm tiering you back up, come back down again. So we click save and that's now set up. What that means is, if I wanted to add a new server, I could add that new server to this particular sync group and replicate E:\CAD to a volume on that server, and exactly the same files would be there. But what's important to understand is that the caching, the cloud tiering, is per individual endpoint. So it might be that files A, B, C and D are cached on endpoint one, but files A, B, C and F are on file server two, because each endpoint manages its tiering by itself. Okay, so back to the content. Okay, so the next thing we wanted to talk about, well, we've
talked about cloud tiering; now, migrating from DFSR to Azure File Sync. In general, what DFSR did is replicate on-prem endpoints to one another. So you might have a set of files and folders on one share, and DFSR was the process of replicating them to another file server or to a group of file servers. Now, if you are using Azure File Sync, you no longer need that replication engine. You've got DFSR; what's the other part of DFS, Sonia? Sorry, you lost me, what's that one? You know, the other part; you've got the replication engine for DFS, do you remember what the other part of DFS is? The namespaces. The namespaces, exactly. So DFS Namespaces is the part where you access a single share name, you just go slash-slash share name, and it redirects you to the appropriate endpoint. So with Azure File Sync, you can keep using a DFS namespace to point you at your closest endpoint, but you use Azure File Sync as your replication engine. Okay, so we're going to end this module with one of our favorite things, which is ye olde knowledge check. We're going to ask a question, Sonia and I are going to discuss it, and maybe you'll answer it in the chat, or if you're watching this on the replay, maybe you'll pretend to answer it in the chat. So, Sonia: David at Contoso wants to set up Azure Files. He knows he must set up a storage account first. What sort of storage should he use if he is setting up Azure Files?
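While you're thinking about that one, the DFS Namespaces redirection from a moment ago can be modeled in a few lines of Python. The namespace, site, and server names here are made up for illustration; the point is that every client uses the same path while the referral hands back a nearby server endpoint:

```python
# Toy model of DFS Namespaces in front of Azure File Sync: clients all
# use one \\namespace\share path, and the namespace referral maps it to
# the nearest server endpoint for their site. Azure File Sync keeps the
# endpoints' contents identical. All names here are illustrative.

NAMESPACE = r"\\tarwintraders.net\CAD"

# one sync group, several server endpoints kept in sync by Azure File Sync
ENDPOINTS_BY_SITE = {
    "WaggaWagga": r"\\TWTML-FS1\CAD",
    "Melbourne": r"\\TWTML-FS2\CAD",
}

def resolve(unc_path, client_site):
    """Return the endpoint path a client in `client_site` is referred to."""
    if not unc_path.startswith(NAMESPACE):
        raise ValueError(f"{unc_path} is not in the namespace")
    # everything after the namespace root maps straight onto the endpoint
    return ENDPOINTS_BY_SITE[client_site] + unc_path[len(NAMESPACE):]

print(resolve(NAMESPACE + r"\designs\part01.dwg", "WaggaWagga"))
# -> \\TWTML-FS1\CAD\designs\part01.dwg
```

Same logical path for everyone, different physical server per site; that's why you keep the namespace and only swap the replication engine underneath it.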
I actually love that this is one of our more obvious questions. I think we did a great job with the naming here, because with Azure Files, we talked about how queues are used for things like messaging, blobs are basically just a big area where we can stick anything we like, but for Azure Files we actually have this particular storage type called files. Our file storage is optimized for files. Like, good job, marketing department, with the naming on that one. Thank you. So we're going to go with that. Well, I mean, if it's obviously named, it might not be the marketing department, because they would come up with something like, they wouldn't call it a file share, they'd call it file synergies or something like that. So we've got our files; that's correct. Okay, let's go to the next one: when considering Azure file share permissions, what permissions does the Storage File Data SMB Share Elevated Contributor role have? See, this doesn't sound like it was named by an engineer. Actually, maybe it does sound like it was named by an engineer. Maybe it was. Sonia, which one of these do you like? Look, if you're participating in the chat, I want to know what you think the answer is for this one. What does our elevated contributor have that a normal contributor doesn't? Because when you think about it, let's look at the answers: I wouldn't expect anything with contributor in the name to only have read access, because that just doesn't make logical sense, knowing what a contributor can do in Azure across anything. So it's really between A and B, and we can see that B pretty much has what A has, plus a little bit extra. And we did talk a little bit about controlling permissions on the files themselves, rather than having control over the storage account or anything at an Azure resource level. So let's go with B: they can control permissions on the Azure file share itself. And we go to the scoreboard, and the scoreboard says that Sonia is absolutely correct, and she wins a prize. So, final question: when implementing Azure File Sync, after
registering a Windows Server with the storage sync service, what does an administrator need to do next? Yeah, look, when I first saw this in the Learn module, I was trying to wrap my head around the logical progression of what we're doing here. We've installed the Azure File Sync agent already; so what have we done? We've gone and registered the server already, so let's go with: create a sync group. You would be correct, Sonia. So, what we've done today: we have talked about Azure file services, we've configured Azure Files, we've configured connectivity to Azure Files, we've described Azure File Sync, we've implemented and deployed Azure File Sync, we've managed cloud tiering, and we've migrated from DFSR to Azure File Sync. If you want to find out more, you can go to the appropriate module. But Sonia, do you want to give a bit of a plug to the next session in this series? So the next module being covered on Learn Live is Manage Azure Updates, actually using Azure Update Management to manage updates to your Azure VMs. Whether you've got a background using WSUS, Windows Server Update Services, on-prem, this is going to be the session for you, or whether you just want to learn how you can use Azure to keep those operating systems patched, especially with the latest security updates. So there's the QR code: go and check out that session and register to catch all the information about what it does, how it works, and how you can set it up. And remember that we should be announcing the beta for the AZ800 and AZ801 exams early next week, so if you're interested in taking those exams, absolutely keep an eye out for those announcements. Otherwise, thank you very much, Sonia, for working with me on this. My pleasure, and thank you very much for your attention. We'll see you next time. Thanks everyone.