is one-to-one cloud, and obviously there are a lot of these new clouds coming out, the kind of cheap-and-cheerful clouds that you can put data on. You can imagine having multiple copies of your data in multiple places, guaranteeing that if one of them fails economically, you still have your data somewhere else. I think it's a very interesting model, and not as expensive as you might think. People have been talking about the fact (I don't have a slide for this) that taking data out of a cloud is very expensive. The numbers we get over a three-year period are that if you replicate your data just once to a second cloud, it adds less than five percent of the total cost. So it isn't that expensive if you're just making a copy of your data to another place; an interesting economic theorem.

So what are the challenges of doing this? How do I get my data into different places? There are questions of bandwidth: we don't want to be tromboning data in and out. How do I manage very, very large volumes of data? How do I reduce the storage cost, which we talked about a little bit? And then: where in the world is my data? This new object model has advantages, but a flat namespace can hold billions of objects, so trying to figure out which of those billions of objects are in which clouds can be very challenging. And how do my applications know where my data is? Does every one of my applications have to be multi-cloud aware? Obviously these are pretty complex problems.

We were thinking about these problems and we came up with a technology; it's open source, and we call it Zenko. There are what we call our four pillars, four ideas that we believe answer a lot of those questions. One is to have a common API across all of the clouds. We've chosen the S3 API as the standard there, so you can use the same S3 API to push data to any of many clouds. We've also included multiple clouds as back ends, but we push the data
to each cloud in the format that cloud is used to storing, so that you can use that cloud's tools on your data, and if you decide some day that you don't want Zenko to be part of your workflow, you still have access to your data in its native cloud format. The third pillar is a metadata search capability that lets you search, from a central location, all of the objects you've pushed to whichever cloud. And finally there is data management, policy management, that allows you to do lifecycle, to replicate, to age data off the system. We believe those four things are the key elements necessary to manage pools of data across multiple clouds.

Now obviously we're just doing data, and orchestration of your entire cloud environment is a larger problem; we're not solving the whole problem. Our perspective is that there are a lot of tools out there to do that, and we're working together with some of those people to mesh these things together nicely. But a petabyte of data is much harder to move from one place to another than a workload is: your EC2 or your Nova instances are much easier to move back and forth between clouds than a petabyte of data.

The system today is available as open source technology; it's on the website, zenko.io, and you're free to try it. I would encourage you to do so. You may have seen the magician juggling chainsaws while running who says "don't try this at home"; this is not that kind of a demo, I would encourage you to try it at home. The code is available on GitHub; this is all open source technology. We do have a commercial edition with some additional functionality and, especially, support for the technology. But I'm going to drink my own Kool-Aid here and try to do a demo for you; we'll see how this goes. Here we have the Zenko website, zenko.io, and it's got this tempting button up here: Try Zenko.
I cheated a little bit and have already put a platform together. What you see here is what we call Zenko Orbit, a SaaS-based management interface. This particular instance is actually running in a public cloud, but you can install an instance on a Kubernetes cluster on your premises and then manage that platform through Orbit with exactly the same interface. What we have here is a dashboard of what's going on in the system: a certain number of statistics (I haven't created any new buckets recently or done much on the platform of late) and information about the different locations where the data is being stored. I've configured this system with a number of locations already: an Azure location, a Google Cloud Platform location, and then this us-east location, which is simply local storage on the platform. Then we have replication policies that we can create; I've got a bucket called taego and I'm replicating that data to Azure. We also have lifecycle management, something I talked about, that you can configure. If we look at the storage locations, we can add new ones: you see we've got our own technology for storage, you've got platforms like Wasabi and DigitalOcean, and you can just add new locations to your system.

Question: you're basically showing various locations, Google and Azure, right? They don't support S3, so how does that work? So that's one of the key abilities of the technology: it abstracts the storage protocol, so you can use the same S3 interface even though there are differences in versioning and in the data models. You get the same look and feel across all of the platforms. If you go and use the data directly on their platform, then obviously you need to understand their protocols, but you don't need to understand their protocols to put your data in.
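The protocol abstraction described in that answer can be pictured with a toy sketch: each location type implements one small interface, and the S3-facing front end speaks only that interface, never the native Azure or Google protocol. All class and method names here are invented for illustration.

```python
from typing import Dict, Protocol


class Backend(Protocol):
    """Minimal interface every storage location driver must provide."""
    def put(self, bucket: str, key: str, data: bytes) -> None: ...
    def get(self, bucket: str, key: str) -> bytes: ...


class InMemoryBackend:
    """Stand-in for a real driver (Azure Blob, GCS, local disk, ...)."""
    def __init__(self) -> None:
        self._objects: Dict[tuple, bytes] = {}

    def put(self, bucket: str, key: str, data: bytes) -> None:
        self._objects[(bucket, key)] = data

    def get(self, bucket: str, key: str) -> bytes:
        return self._objects[(bucket, key)]


class Gateway:
    """S3-facing front end that routes each bucket to its location."""
    def __init__(self, locations: Dict[str, InMemoryBackend]) -> None:
        self.locations = locations
        self.bucket_location: Dict[str, str] = {}

    def create_bucket(self, bucket: str, location: str) -> None:
        self.bucket_location[bucket] = location

    def put_object(self, bucket: str, key: str, data: bytes) -> None:
        self.locations[self.bucket_location[bucket]].put(bucket, key, data)

    def get_object(self, bucket: str, key: str) -> bytes:
        return self.locations[self.bucket_location[bucket]].get(bucket, key)


# The caller always talks S3-style verbs; the location is a routing detail.
gw = Gateway({"azure": InMemoryBackend(), "gcp": InMemoryBackend()})
gw.create_bucket("taego", "azure")
gw.put_object("taego", "hello.txt", b"hi")
```

A real driver would translate `put`/`get` into Azure Blob or GCS calls, which is exactly the translation the caller never has to see.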
If we have a look at the browser here, we've got some of the different airports: this new famous Brandenburg airport that's only on Google Cloud, because it's sort of an abstraction for today, and then here in Schönefeld I've put a bunch of data. If I try a little search, here are some examples... nice, the demo gods are not on my side... ah, sorry, wrong bucket; Schönefeld, there we go: all of the files that are one megabyte or larger. Add another zero: all the files that are 10 megabytes or larger; and there are no 100-megabyte files on the system. We can search for tags that are colored blue. Tagging in the S3 model is very interesting because it doesn't modify the object, so you can add a lot of tags to an object. If I search for the tag pink, I get an image that's been tagged pink. Now this is a very interesting idea: you can use the same S3 protocol interface to do SQL-like queries on metadata or on names of files. This functionality doesn't actually exist in S3; what we've done is add a small extension to the S3 protocol, so you're using the same type of query. Here's one: key LIKE "report"... key LIKE... no, anyway, you get the idea. And there's a file that includes an uppercase letter, so it's even case-sensitive in the way it does searches.

So how about the storage location idea? If I look among my buckets here, I'd set up a bucket called taego that has a couple of different locations. You can also access the bucket with traditional S3 tools: here I've got my taego bucket, I'll have a look in there, and I've got a screenshot in there right now. How about if I add another file? I'll add... donald newt, why not. I added a GIF of donald newt to that bucket, and it was uploaded to the system. In theory it should now show up in the Azure browser: you can see that it created a taego bucket on there, and sure enough, there's donald newt.
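The metadata search demoed above can be pictured as a small extension riding on a normal bucket request: a WHERE-style clause travels in the query string, and the server evaluates it against object metadata and tags. The parameter name and clause grammar below are assumptions for illustration, not the documented wire format.

```python
from urllib.parse import quote, urlencode


def build_search_url(endpoint: str, bucket: str, where: str) -> str:
    # Hypothetical sketch: metadata search as an extra query parameter
    # on a standard bucket GET; 'search' and the clause syntax are assumed.
    return f"{endpoint}/{quote(bucket)}?{urlencode({'search': where})}"


# Queries like the ones demoed: size thresholds, tag values, key patterns.
for clause in (
    "`content-length` >= 1048576",    # one megabyte or larger
    '`tags.color` = "pink"',
    "`key` LIKE 'report%'",
):
    print(build_search_url("https://zenko.example.com", "schoenefeld", clause))
```

The point of the design is that the request still travels over the same signed S3 channel, so existing clients only need to learn one extra parameter, not a new protocol.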
It has been replicated to that cloud. If I go back over here and look at the same bucket on Google's public cloud, I had that Brandenburg bucket up there, which is the only place that data has been stored, and there's the image I put in there. And if I go back and look at my taego bucket, there are my two images that have been replicated to the platform. So that's a very simple illustration of the way the platform works: you push data in using the standard S3 interface, and it can be stored locally, stored only on a remote site, or stored on one or more additional sites across a whole variety of flavors of clouds. Over time those will include platforms like Swift and Ceph; any platform that's S3-compatible should work out of the box today. That's the basic idea of the technology, and then you have a metadata search that's available across all of the platforms using the simple SQL-like extension to the S3 query interface.

Question: what happens if the connection between the Zenko instance and the Google cloud, the Azure cloud, or the S3 cloud is dropped, so it's not available, and the replication starts on one side and then fails on the other? I won't try to demo that for you today, because I'm happy the network is working, but this uses a whole set of tools, including Kafka, to make sure that the jobs replicating data to other platforms are kept in a queue until the data ends up being copied to the additional platforms. That's one of the key challenges of doing this kind of thing. If you push data to multiple clouds, HTTP is, in a fashion, famous for requiring you to manage errors, and managing those kinds of errors and making sure all your data is replicated over time is a very important piece of functionality. So we handle errors, and we also handle a lot of the subtleties having to do with versioning, making sure that all the replicas match.
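The queueing behavior just described, replication jobs held (in the real system, in Kafka) until each copy succeeds, can be sketched with a plain in-process queue. `copy_fn` stands in for the actual cross-cloud transfer; everything here is a simplified illustration, not Zenko's replication code.

```python
import collections


def replicate_all(jobs, copy_fn, max_rounds=10):
    """Keep failed replication jobs queued and retry them round by round,
    like consuming a replication topic, until every copy has landed."""
    pending = collections.deque(jobs)
    completed = []
    rounds = 0
    while pending and rounds < max_rounds:
        rounds += 1
        for _ in range(len(pending)):
            job = pending.popleft()
            if copy_fn(job):
                completed.append(job)
            else:
                pending.append(job)   # back of the queue; retried next round
    return completed, list(pending)


attempts = {}

def flaky_copy(job):
    # Fails the first try for each job, like a dropped network link.
    attempts[job] = attempts.get(job, 0) + 1
    return attempts[job] >= 2

done, still_pending = replicate_all(["to-azure", "to-gcp"], flaky_copy)
```

The queue is what turns an unreliable network into eventual consistency: a failed transfer is never dropped, only deferred.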
And what Orbit gives you is that nice dashboard, so you have a very good view of your throughput, your failures, and your retries. We also have a whole collection of account-management tools. I created an account here; I can create a new user, say, called "openstack". I generate a new key, and the user gets credentials in the classic AWS style, where you get an access key and a secret key, and that key is only shown once. You can create new accounts, and one little tweak we put on the system is the ability to automatically generate a Cyberduck profile; you can see here that's what I used to log on to the platform, with an endpoint on my system where it's located. Obviously this is running in a public cloud, but the technology is designed to run in any Kubernetes environment, so you can run it locally on your premises and push data out, you can run it on your favorite cloud and have it push data elsewhere, or you can have different instances in different places. We have three minutes left, so we can talk a little more about how to get started with Zenko, or we can take questions.

Question: what's the architecture, hub-and-spoke or peer-to-peer? The architecture is indeed hub-and-spoke. In the current version we refer to data as being in-band: we only know about data that has come in through Zenko. Early next year we're going to add what we call out-of-band, which means the system can be notified about, and learn about, any data written directly in any one of these different clouds. That's not exactly peer-to-peer, but it does allow data, or just the metadata, to be included without all the data having to be pushed through Zenko. There is also a concept of Zenkos of Zenkos: you can chain two or more Zenko instances and they can push to each other.
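The "key is only shown once" behavior follows the classic AWS pattern: the secret is returned a single time at creation, and only a digest is stored server-side, so it can be verified later but never redisplayed. This sketch illustrates that pattern; it is not Zenko's actual account code, and all names are invented.

```python
import hashlib
import secrets


class AccountStore:
    """Toy credential issuer: the secret key is returned once, then only
    its hash is kept, so it can be checked but never shown again."""
    def __init__(self) -> None:
        self._keys = {}

    def create_key(self, user: str):
        access = "AK" + secrets.token_hex(9).upper()
        secret = secrets.token_urlsafe(30)
        digest = hashlib.sha256(secret.encode()).hexdigest()
        self._keys[access] = (user, digest)   # plaintext secret is not stored
        return access, secret                 # the only time the secret is visible

    def verify(self, access: str, secret: str) -> bool:
        rec = self._keys.get(access)
        if rec is None:
            return False
        return rec[1] == hashlib.sha256(secret.encode()).hexdigest()


store = AccountStore()
access, secret = store.create_key("openstack")
```

Losing the secret after this point means generating a new key pair, which is exactly the behavior S3 clients such as Cyberduck expect.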
From a practical perspective, what one Zenko instance sees of another is just an S3 API endpoint, so it looks like just another cloud. We're also seeing some customers starting to use the same technology by embedding it in their solutions, so that through a single standardized interface their solutions become multi-cloud. We also have a guy who has deployed this on the edge: a very small Zenko instance, with everything unessential stripped out, that he uses to replicate data to a public cloud, letting the system handle the replication issues caused by the frequent unavailability of his private network. So it really is a toolkit of technologies that we hope will get more and more usage. We're planning features for the future that will include Lambda-like functionality, or actually calling out to Lambda, to do analysis in different workflows.

To get started, the easiest thing is to go to zenko.io/admin, click on the Google button to sign in, and create a new account on the software-as-a-service platform that Brad was showing; you can launch a Zenko mini-cluster in a sandbox and play with it. The other option is to check out the code from GitHub and run the Zenko Helm install on any Kubernetes cluster you already have running, public cloud or private, then take the ID from the output and push it into zenko.io/admin to get your instance configured. And with that, I've got stickers and I'm happy to hand them out. Thank you.