Thank you for joining our session on securing S3 backups against ransomware. So as we get into this, just a bit of an introduction. I'm Michael Cade, a senior technologist here at Kasten by Veeam, and I'm joined by Tom. Hi, everyone. My name is Tom Manville. I'm director of engineering here at Kasten by Veeam, and I was on the founding team at Kasten. Awesome. Tom's going to be giving the good news later on. Before that, I have to put in some of the doom and gloom around, well, the reality of why we need to protect our data against things like ransomware. So I'll get into the bad bits first and then Tom can bring it home later on. The first thing we want to touch on is, well, what is the need for immutable backups? And as much as we're talking specifically about Kubernetes clusters and Kubernetes data, the premise here applies to any other platform. Whether it's virtualization, cloud-based IaaS workloads, or on-premises physical machines, all of these are reasons why we need and should be considering immutability within our backup chains and off-site copies as well. So just to quickly run through what these look like. Accidental deletion, and I'm going to put some of these into specific use cases later on. We know people make mistakes, and if you look at the analysts who are constantly speaking to you and studying failure scenarios, accidental deletion is always in the top three causes of data loss. Then you've got policy gaps: "I thought you were backing this up. I thought we were backing that up." There's a confusion gap around what is actually being protected and what is actually being backed up to a different location, so that you can recover from those failure scenarios.
And then, definitely over the last 12 months, there's security. That's both internal security threats, where there's a growing amount of insider malicious activity going on, and I'm going to touch on this a little bit later as well, and the everyday newswire of external security threats, namely ransomware. At a very 101 level, a ransomware attack is going to try to get into your system and encrypt your data. But there's so much more to it than that. Ransomware is getting much more advanced in terms of being able to understand a lot more about your environment. Like I said before, you've probably got different platforms within your infrastructure, and ransomware can wreak havoc once it understands what that looks like; it's getting even more intelligent that way. So prevention is one thing, but the fact that ransomware and cyber attacks are getting more advanced in how they attack their victims really reinforces that we need to think about storing that backup data in an immutable fashion, so that it cannot be modified from an internal point of view (your backup admins shouldn't be able to touch that backup file), and obviously we don't want external threats doing that either. And then there's legal and compliance: certain sectors and verticals have requirements that this data cannot be touched, cannot be modified. And whilst the other four drivers I've mentioned are really important, legal and compliance in some verticals can be business-threatening as well. So to really hit home, let's touch on some targeted Kubernetes use cases. I first mentioned external security threats, and this one in particular, Hildegard, appeared at the very beginning of this year.
So we're talking January 2021, very new, very topical. The premise is that it targets cloud and container infrastructure, so your Kubernetes or cloud-native environment, and it wants to inject itself in there and start mining for cryptocurrency. It's going to leverage the underlying hardware that you're paying money for, especially in the public cloud, to mine that cryptocurrency. It does that by deploying malicious code that targets exposed Docker daemon APIs. It's been active since early 2021, as I mentioned, and it has the potential to exfiltrate sensitive data. Years ago, ransomware was all about encrypting your data and holding you to ransom, and the ransom was a monetary value that you'd probably pay in some sort of Bitcoin. That's not always the case anymore; the data itself could be even more valuable if attackers are able to exfiltrate it out of your environment. Hopefully that's the worst it gets from a ransomware point of view. But then we move on to the insider threat, which is maybe a little bit more scary, in that all of us work for companies that have adopted cloud native or at least some sort of infrastructure platform over the years, and we probably have some responsibility when it comes to that. Just to touch on the ransomware side of that: ransomware is quite a hard process, in that it's all well and good being able to write the ransomware or the threat, but then you have to find an entry point into the business or vertical you're looking to attack. You've then got to find some way of compromising user accounts, finding misconfigurations or vulnerabilities, to gain more of a foothold within that environment.
This is where it gets really quite scary: over the last 12 months, maybe due to the pandemic or not, it's become possible to buy network access over the Internet, for anywhere between $300 and $10,000. That gives attackers a pick-and-mix option: "I want to target this particular vertical, and I want this particular access, because this is where my ransomware could really work." You might have also heard the term ransomware-as-a-service; this falls into that same bucket. And this is the scary bit: the disgruntled employee, or an employee who is leaving. This really plays into that internal security threat, and it means we should be thinking about a least-privilege model, making sure that only the people who absolutely need access have access, leveraging things like role-based access control and policy-based access as well. All key things we should be considering. From a Kubernetes point of view, that's very much front and center of mind, but it still has to be implemented; it doesn't just happen. And then we get back to accidental deletion. People do make mistakes. It doesn't matter what platform you're on, human errors are bound to happen. You can do a lot to prevent them, and Kubernetes is largely driven by code, which takes away some of that human error, but there's still the chance that an accidental deletion could take hold. In particular, I found this on Stack Overflow: "Accidentally deleted my Kubernetes namespace." Funnily enough, there aren't many answers in there, because there's not much you can do if there's no backup or no ability to bring that back. And something that I, and many others, have coined is this "code 18."
And again, this is applicable to any platform in any environment: a "code 18" is an error occurring 18 inches from the screen, in other words, user error. So just a couple of points before I pass it over to Tom to bring the good news of how immutability can help in these instances, and some of the preventative tasks we can put in place. Make sure that your operating systems and software are all up to date, using the correct patches, et cetera, that are shipped out regularly, regardless of what platform you're on. Also routinely audit your access lists, making sure that the people who need access still have access. And a broader point: I'd expect IT, or at least the operators and developers, to have a good understanding of what ransomware is and what phishing attacks look like. But HR, finance, other business sectors, maybe even sales, if you're not in a tech-oriented company, may need to be educated, and continually educated, because ransomware has changed dramatically over the last three years. It's become more mainstream, more vicious, and much more intelligent in how it attacks. And then finally, the backup. Backup is super important, and that's really what leads us into what Tom is about to speak about: making sure we've got a copy. The final point I'll raise is around mastering the 3-2-1 rule. Again, this methodology works regardless of platform. It means three copies of your data, on two different media types, with one of those off-site, air-gapped and immutable, away from any malicious activity, whether internal or external. And with that, I'll hand it over to Tom. Thank you, Michael.
Given the number of entry points for attacks, the possibility of accidental data loss, and the complexity of governance and compliance requirements, immutable backups are an important part of any data protection strategy. Object storage has become a popular destination for backups in the cloud-native ecosystem, so we'd like to demonstrate how object storage can be used to create immutable backups. The scalability, simplicity, and robustness of object storage have made it the perfect target for backups in the cloud-native space. Objects, also called blobs, are accessed via simple requests, which have been wrapped by a number of tools and libraries. Many databases have object storage support built in, so their backups are automatically pushed to object stores. In addition, for data protection vendors, supporting object storage as a backup target has become table stakes. You can also find several open-source projects that make it easy to back up to object storage. Kopia is my personal favorite, since my team is actually contributing to it. Many projects treat blobs as immutable, never updating a blob once it's written. This has become a best practice to help deal with the consistency semantics of object storage and caching layers. In the simplest scheme, one blob maps to a single backup. However, more advanced projects will split a backup across multiple blobs, enabling better performance. Treating blobs as immutable is an important requirement when implementing immutable backups. However, it is not sufficient, since your code may not be the only one attempting to write to the bucket. The only way to ensure immutability is to use the primitives provided by the underlying storage. Let's take a look at what the S3 API provides. Although it's not the only API for object storage, the S3 API has become the most prolific, and it includes all the primitives needed for creating immutable backups with a feature called object locking.
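To make that write-once convention concrete, here is a minimal Python sketch of the blob layout just described, using an in-memory stand-in for the object store. The key naming scheme and class names are hypothetical illustrations, not taken from Kopia or any specific tool.

```python
# A minimal sketch of a write-once blob layout for backups, assuming a
# simple key/value view of object storage. All names here are hypothetical.

class WriteOnceStore:
    """In-memory stand-in for an object store where blobs are never updated."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        # Treat blobs as immutable: refuse to overwrite an existing key.
        if key in self._blobs:
            raise ValueError(f"blob {key!r} already exists; blobs are write-once")
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]


def backup_keys(backup_id, num_chunks):
    """Split one backup across several blobs, enabling parallel uploads."""
    return [f"backups/{backup_id}/chunk-{i:04d}" for i in range(num_chunks)]


store = WriteOnceStore()
for key in backup_keys("2021-06-01", 3):
    store.put(key, b"...chunk data...")
```

As the talk notes, this application-level discipline is necessary but not sufficient; nothing here stops another client from overwriting the bucket, which is where object locking comes in.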
In order to use object locking, you need to make sure that your bucket is configured with object versioning. Based on your requirements, you'll need to set object lock parameters in your API requests. For the most basic schemes, using S3 object locks will be similar to using any S3 bucket for backups. I'll go through the changes needed in the API requests to configure immutability. In order to use object locks, your bucket must be configured with versioning. Any request that modifies a blob in a versioned bucket will create a new version of that blob; the previous versions may still be accessed. Deletes will create a delete marker that indicates a delete was issued. The availability of previous versions of an object depends on the retention settings for the blob. The behavior of object locking is determined by the retention mode, and there are two retention modes. Governance mode is more permissive: to bypass the retention setting, users must have the s3:BypassGovernanceRetention permission, and in addition they must explicitly set a special request header. Compliance mode is more restrictive: once set, a retention period cannot be reduced. In AWS S3's case, the only way an object may get deleted within a compliance-mode retention period is by completely deleting the AWS account. There are two ways an object can be retained. The first is a mechanism called a legal hold. A legal hold may be set or released by any user with the s3:PutObjectLegalHold permission, and an object cannot be modified or deleted while a legal hold is set. The second mechanism is a retention period: an object cannot be deleted or modified until after the retention period expires. A retention period may be configured by default at the bucket level, set at blob creation time, or extended via a separate API call. Let's dig into some of the APIs.
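As a sketch, here is roughly what that bucket setup looks like expressed as boto3-style request parameters. The bucket name is made up, and the actual boto3 calls are left commented out since they need real AWS credentials; the helper functions just build the request bodies.

```python
# Sketch of the S3 requests needed to set up an object-locked bucket.
# The bucket name "my-immutable-backups" is a hypothetical example.

def create_bucket_params(bucket):
    # On AWS S3, Object Lock can only be enabled when the bucket is created;
    # enabling it also turns on versioning automatically.
    return {"Bucket": bucket, "ObjectLockEnabledForBucket": True}


def default_retention_params(bucket, days, mode="COMPLIANCE"):
    # Default retention applied to newly created objects. GOVERNANCE mode can
    # be bypassed with the s3:BypassGovernanceRetention permission plus a
    # special header; COMPLIANCE mode cannot be shortened once set.
    return {
        "Bucket": bucket,
        "ObjectLockConfiguration": {
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": mode, "Days": days}},
        },
    }


# import boto3
# s3 = boto3.client("s3")
# s3.create_bucket(**create_bucket_params("my-immutable-backups"))
# s3.put_object_lock_configuration(
#     **default_retention_params("my-immutable-backups", days=30))
```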
When creating a blob as part of a backup, you'll want to ensure that the retention period is set long enough that the backup cannot be deleted or modified for the amount of time determined by your retention requirements. Since the bucket requires versioning enabled, the result will include a version ID for the blob. It's important to save the version ID, since you'll need it to query for this specific version of the object. If you don't specify a version ID when querying a blob, the latest version of the blob will be returned. This works well for simpler backup schemes, but it's not sufficient for the immutable backup use cases: blobs may be overwritten by accident or by attackers, and therefore blindly using the latest version isn't safe. For this reason, you'll need to track all the versions of all the blobs for your backups. In addition to being cost-effective, deleting backups may be required for legal reasons. Deleting blobs associated with a backup in a versioned bucket requires deleting a specific version of the object. If the blob is currently object-locked, the delete will fail. For immutable backups, it's best to always specify the version when deleting blobs: if not specified, a delete marker becomes the latest version, but older versions of the blob will continue to exist. You may not know upfront how long you'd like to retain a blob, and a blob's original retention window may not cover the lifetime of the backup. This is an especially common occurrence with deduplicating products, where a blob may be referred to by multiple backups. To handle these situations, you'll need to extend the retention period of blobs, using the PutObjectRetention API to reset an object's retention settings. Make sure you pass the version ID you've been tracking, since the retention settings apply to a specific version of the object. Hopefully you now understand what immutable backups are and why you may need them.
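The per-object workflow above can be sketched the same way: upload with a retention date, record the version ID from the response, and later extend retention or delete a specific version. Again the helpers only build boto3-style request parameters; bucket and key names are hypothetical, and the live calls are shown commented out.

```python
from datetime import datetime, timedelta, timezone

def put_backup_blob_params(bucket, key, body, retain_days):
    # Upload a blob with a per-object retention period. The response's
    # "VersionId" field must be recorded so this exact version can be
    # fetched (or deleted) later.
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate":
            datetime.now(timezone.utc) + timedelta(days=retain_days),
    }


def extend_retention_params(bucket, key, version_id, retain_until):
    # Reset retention on a specific version. In compliance mode the new
    # date must be later than the current one: extend only, never shorten.
    return {
        "Bucket": bucket,
        "Key": key,
        "VersionId": version_id,
        "Retention": {"Mode": "COMPLIANCE", "RetainUntilDate": retain_until},
    }


def delete_version_params(bucket, key, version_id):
    # Always delete a specific version; a delete without VersionId only
    # adds a delete marker and leaves older versions in place.
    return {"Bucket": bucket, "Key": key, "VersionId": version_id}


# import boto3
# s3 = boto3.client("s3")
# resp = s3.put_object(**put_backup_blob_params(
#     "my-immutable-backups", "backups/b1/chunk-0000", b"data", retain_days=30))
# version_id = resp["VersionId"]  # track this alongside your backup metadata
# s3.put_object_retention(**extend_retention_params(
#     "my-immutable-backups", "backups/b1/chunk-0000", version_id,
#     datetime.now(timezone.utc) + timedelta(days=90)))
```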
The APIs provided by S3 make it easy to implement simple backup schemes and make it possible to build more advanced schemes as well. Whether you're interested in building immutable backups yourself or would like to take something off the shelf, we'd love to hear from you and understand how immutable backups can fit into your data protection strategy. With that, we'd love to take your questions. Thank you.