Great, thank you. Welcome, everybody. My name is John Gorski, and I'm a sales engineer with a company called Scality. I'm not sure if you've all heard of what Scality is, so I thought I'd give you a quick background on the company. I'm here to talk about the S3 open source server, a product that we released to the open source community in June, a few months ago. But to give you a background on what this S3 open source server is and how we got to this point, I want to give you a little bit of history on the company itself and what we do.

We're a software-defined object store technology that operates at petabyte scale. We are completely, 100% hardware agnostic. What that really means is that you can run on any standard x86 server. We have relationships with companies like Cisco, Hewlett Packard, and Dell, but there's no restriction on what type of hardware you can run the application on. We're purely an application. We run on top of Linux, and we support a variety of Linux operating systems, like CentOS, Ubuntu, and RHEL of course, and we run completely within user space. That means there are no kernel modifications; we don't interrupt how you apply kernel patches or anything to that effect. We're pretty agnostic in that respect. The platform scales from a few hundred terabytes up to hundreds of petabytes in a single namespace, and all these expansions, all these activities that we do within the server, are completely non-disruptive.
So there's no interruption of service; we're focused on 100% availability, reliability, and resiliency. Because we work with so many different hardware vendors, we do have reference architectures that we can work through with our customers, and typically where we get involved is the sizing of the systems. We don't really recommend specific hardware platforms, but we'll get involved with the sizing aspect, where we would recommend how much memory you might need, or how much SSD you might need for metadata activities, and so on. It really enables our customers to continue the relationships they have with their existing hardware vendors and leverage those relationships as they move forward.

The storage system itself is an object store. However, we do provide a wide variety of protocols to access the object store. These are what we refer to as connectors. The connectors are translation or presentation layers to the application, based on what the application requirements are. They fall into three categories. There are, of course, the object connectors: S3, sproxyd, which is our standard REST interface, and then a CDMI interface. The SOFS connectors are our scale-out file system; those are the POSIX interfaces, so NFS, FUSE, and SMB. We realized early on that there are a lot of applications that still require this; right now, not all applications are object-ready, but customers want to adopt this technology within their environment.
So to enable that adoption rate quicker within these environments We made these POSIX interfaces available natively within the product and then of course we have a series of open stack drivers So swift sender glance and manila So you can easily plug this into your open stack Environment if you're operating one today All of these connectors are available in the product natively the and they can all operate Uniquely independently or they can all operate it at the same time So we're in the six generous six release of the generation of software So it's a mature product. You see as the time went on from 2010 the product has evolved over time We continually added new features enhancements new capabilities and we've continued to develop and improve these capabilities as time went on so you see Stuff like erasure coding File services have been in the product for four or five years. So they're very mature and very reliable and very resilient So why S3 so? Why was it important for us to to refresh and enhance our S3 capability? So there's there's a large adoption rate going on right now We're sort of seeing a series of application vendors whether it be backup archive or or you know lots of application ISVs are making it a S3 API available for for customer use so it was really an easy way for us to kind of Get the technology into the customer and have them use it right away and essentially we're seeing this adoption rate Really ramp up in the past 18 months or so. So it's become almost become the de facto standard for object applications and We're seeing a growing demand, you know so and another thing that we're trying to do as well as we're trying to bridge the gap between or Between the object world and the politics world. 
So we're making these protocols available in a common namespace: have the S3 protocol share a namespace with a POSIX interface, so that you can ingest files, let's say, via NFS and access them over S3, or ingest them over S3 and access them over NFS. We're really bridging that gap between the object and file worlds, so the same data can be accessed by a variety of different applications at the same time.

So, our refreshed S3 connector for version six of our software. We've taken a new approach to the way we deploy it. We wanted to make it as easy as possible for our customers to deploy and configure within their environment, with a minimal amount of tuning. Essentially, we went to a Docker container deployment model: all the customers do is download these Docker containers and install them on the connector, and they now have an S3 API available to them. It makes the deployment a lot quicker and a lot simpler for the customer, and they can be up and running pretty quickly, with zero configuration from the customer's standpoint. We picked a lot of the best practices with regard to setting this thing up, so it'll work for the majority of use cases and the majority of file types.

There are three major components incorporated into the S3 connector. The S3 server itself is the actual S3-compatible API; this responds to all the HTTP requests. Standard S3 headers and response codes are all supported, multipart upload is supported, and most of the popular requests and calls from the S3 API are supported. Multi-connector scale-out was also very important to us, because we realized that the performance aspect of the system is also critical.
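To make the multipart upload support mentioned above concrete: clients splitting a large object into parts have to respect the standard S3 limits from the AWS documentation (at most 10,000 parts, minimum 5 MiB per part except the last). This is a sketch of how a client might pick a part size; an S3-compatible server is assumed to honor the same limits:

```python
import math

MIN_PART_SIZE = 5 * 1024 * 1024   # 5 MiB, standard S3 minimum part size
MAX_PARTS = 10_000                # standard S3 maximum part count
MIB = 1024 * 1024

def choose_part_size(object_size: int) -> int:
    """Smallest part size (rounded up to a whole MiB) that keeps the
    upload within MAX_PARTS parts."""
    part = max(MIN_PART_SIZE, math.ceil(object_size / MAX_PARTS))
    return math.ceil(part / MIB) * MIB

def part_count(object_size: int, part_size: int) -> int:
    """Number of parts the object splits into (last part may be short)."""
    return max(1, math.ceil(object_size / part_size))
```

For a 100 MiB object this yields the 5 MiB minimum (20 parts); for a 1 TiB object the part size grows so the count stays under the 10,000-part cap.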
So we wanted to enable a true scale-out capability with this connector. We've even incorporated an S3 metadata cache mechanism that is distributed and synchronized across multiple connectors. So now we can have multiple connectors talking to a common bucket from a single site or multiple sites. Here we have a situation where the ring, the namespace, can exist in multiple data centers, and you can have multiple connectors in multiple data centers accessing the ring in an active-active configuration. It's true scale-out performance. That was probably one of the more difficult parts of the development process: getting the connectors to talk to each other, sharing the cache, and synchronizing the cache across all these different connectors.

The S3 Vault portion is the user management and authentication aspect of the S3 API itself. We're 100% compatible with the typical Amazon AWS authentication model, and with the user, group, and policy management models. So essentially, if you have an existing environment that's talking to an Amazon S3 service, and you want to point your application, or start moving that data, to more of an on-prem type of service, we support all the standard AWS management tools. You don't have to change the way you operate or the way you do things; you basically take your data stream, start pointing it at our environment, and continue to manage that service in the same way you do today.

Another important feature that we're adding into the S3 API is an HSM data mover: a policy engine that will capture data based on your business policies, and once we identify a target, we can take that data and we can move it
out to another Amazon-type service like Glacier for long-term archive, a parking-lot type of deal. So if, for example, your policy says one year, and data hasn't been touched in a year, you can have that data automatically moved to another Amazon-enabled service.

The important thing to understand about the connectors, where this S3 API lives, is that the connector technology is decoupled from the storage system itself. We have the object store technology, which sits at the infrastructure layer; our software takes all these distributed servers, ties them together into a large global namespace, and provides all the data protection mechanisms, the scalability, the resiliency, and all the other features that are important to your environment. Then we layer the connector, the protocol or presentation layer, on top of that. It really gives you a lot of flexibility in how you deploy the system and what types of applications you can address as you keep moving forward. If you're not using S3 today but you want to, you can spin one up pretty quickly.

So with the S3 connector sitting on top of the object store, you can address all these different application needs. They can be in different data centers; you can have a namespace that spans data center boundaries. We can provide data center resiliency: we create what we refer to as failure domains, and a failure domain can span a series of servers, a rack of servers, or a complete data center, so you can sustain a data center outage and continue to operate throughout it. And again, the S3 connector is deployed above that storage layer. Those can be S3 connectors.
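Coming back to the HSM data mover for a moment, the age-based decision it makes can be sketched as a simple filter: anything whose last access falls outside the policy window becomes a candidate to move to a colder tier. The function and parameter names here are illustrative, not Scality's actual API:

```python
from datetime import datetime, timedelta, timezone

def tier_candidates(objects, max_age_days=365, now=None):
    """Yield keys whose last access time is older than max_age_days.

    `objects` maps object keys to last-access datetimes; candidates
    would then be handed to the data mover for transfer to e.g. Glacier.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    for key, last_access in objects.items():
        if last_access < cutoff:
            yield key
```

With a one-year policy, an object last touched two years ago is selected, while one touched last month is left where it is.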
They can be NFS connectors; it really doesn't matter what type of connectors they are. They all support the same deployment model, plus the scale-out capability. We've incorporated that metadata cache synchronization in all our connector technology, so now you have a true scale-out performance model, where we're scaling the system in two dimensions: not only in the capacity aspect, but in the performance aspect as well.

So now we get to the open source S3 server. Why did we do this? Why was it important for us to do this? Not everybody has two petabytes worth of data, right? People want to learn this technology, adopt this technology, test an S3 API. They might have a test/dev environment or something similar. They may have a smaller requirement; their resiliency requirements might not be as stringent or as critical, so going to a full scale-out, full-resiliency ring may not be necessary. So we made the S3 server available, again as a Docker container, which is downloadable. It's a single instance running in a Docker container; it essentially runs on a single server, with the same S3 interface, the same S3 API, that we're using on the large scale-out system, but without, of course, the same levels of resiliency and scale-out features. It's more of a local operating environment.

So again, S3 has become kind of the de facto standard, and we created the S3 server really to enable development and testing of applications that can be deployed later at web scale. You can start testing your applications in dev today, then take that same application and deploy it at larger web scale; you can easily move it to the new environment. It gives you a way to test the compatibility of your application with the S3 API. It can be deployed very quickly, in under five minutes.
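Because the open source server exposes the same AWS-compatible authentication model described earlier, clients sign requests the standard way: by deriving a Signature Version 4 signing key through the chain of HMAC-SHA256 steps defined in the AWS documentation. A minimal sketch of that derivation:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key per the AWS specification.

    `date` is the request date as YYYYMMDD; each step HMACs the next
    scope component with the previous step's digest as the key.
    """
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()
```

Because the key is scoped to date, region, and service, the same secret produces a different signing key for each scope, which is part of what lets standard AWS tooling point at a compatible on-prem endpoint unchanged.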
You basically just download the Docker container and install it on your laptop. Developers find this very useful if they're traveling a lot, sitting on a plane, or sitting somewhere, and they want to write some code and test it against an S3 API. You can download this directly onto your laptop, run a little environment, and there you have a little S3 server sitting on your laptop. It makes it very convenient, very easy, to code on the run, I guess, if you want that type of thing. Or, if you want a test/dev environment, you can put up a single server that your whole development team has access to, and just write your code and test against that.

Or, if you really want to use it in production, there's nothing stopping you. You can get a server that has some level of raw capacity; there are servers out there with four or five hundred terabytes of raw capacity in a single box. You can throw that thing in a rack, install the S3 server onto that device, and now you have an S3-enabled backup solution within your environment in a matter of minutes. The only thing is, in that scenario, the data protection mechanisms come from the server hardware level. We're not doing it in the software, so the self-healing capability that we provide in the large scale-out system would not be available here; you'd have to depend on the RAID groups and those types of data protection mechanisms in the server itself.

So how do you install it? It's pretty simple.
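Once the container is running, a quick smoke test confirms the endpoint answers before you point real tooling at it. The image name and port in the comment below are assumptions based on the project's published Docker quickstart, not verified here; the check itself only needs the Python standard library:

```python
# Assumed quickstart command (image name and port from the project's
# Docker Hub page; adjust to whatever your download instructions say):
#
#   docker run -d --name s3server -p 8000:8000 scality/s3server
#
import http.client

def s3_endpoint_alive(host: str = "localhost", port: int = 8000,
                      timeout: float = 2.0) -> bool:
    """Return True if anything answers HTTP on host:port.

    Any HTTP response counts as alive (an unauthenticated GET to an
    S3 endpoint typically returns an error status, which is fine);
    connection failures return False.
    """
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", "/")
        conn.getresponse()
        return True
    except OSError:
        return False
    finally:
        conn.close()
```

After that, any S3 client (Cyberduck, s3cmd, the AWS CLI) pointed at the same host and port should behave as it would against a normal S3 endpoint.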
Yeah, so we're seeing real interest in this open source server right now. Since June, we've had 4,300 downloads, so we're seeing a lot of people going to the website. There's a lot of interest in adopting this. They're saying, well, let me try it out, so they're downloading it via the Docker container, putting it in their environment, and then using simple tools like Cyberduck, s3cmd, or something similar just to run their tests. It operates exactly like it would with a normal S3 environment, and that's kind of how they're doing it. So if anybody is interested, if you're on Docker, you can just do a search for the Scality S3 open source server; you'll find it there, and you can download it and install it within a matter of minutes. That's it: the message is, download the Scality S3 server and try it out. Again, thank you for your time.