Hey, just an FYI, the GitHub page still has the old Zoom section on it. Okay, thanks, I'm going to open an issue for that right now. All right, that's all fixed. Thanks, man. No problem.

Hey, OpenEBS team, are you out there? Is it Jeffrey? Yes, I'm here. Can you hear me? Yeah, I hear you. So I think I had you slotted for the next session to talk about OpenEBS. I haven't gotten a ping from the MinIO guys quite yet, so I'm not quite sure where they are. Were you at all ready to present today, or not ready at all? Well, actually, not at all, because Kiran, who would mostly be on this call, is out on vacation for three days. Yep. So, unfortunately, no. Okay, no problem at all. We'll give it a few more minutes here while I try to ping these guys on the back end, and we'll see where they're at. Thank you. Okay. But I'm definitely assuming that for the 31st, or the 28th, you'd be all ready to go? Absolutely. Absolutely. Yes.

Good morning, folks. We have MinIO, who is going to be presenting to us today. I haven't gotten a ping from them quite yet, so I'm checking to see what's going on, but we'll hang out here for another couple of minutes and go from there. Thank you.

Hey Bassam, you out there? All right, maybe I'll put you on the spot here while I wait to get a ping from the MinIO guys. I wanted to open it up and ask you about the experience so far, anything you can share with us.
Well, I mean, it's mostly business as usual. There were the events leading up to the announcement, and a lot of things that had to happen to get all the source code repos transferred over, plus all the administrative stuff around that, so we spent quite a bit of time on it, and it's not even done yet. But otherwise, you know, pretty much business as usual.

In terms of collaborators and contributors, has anything changed in the past couple of weeks, in terms of people being interested, or talking about different platforms that may be relevant to Rook? Yeah, I think there is definitely more interest. There was more interest leading up to KubeCon, and that continues to be the case. Right now, as part of getting into the CNCF, we're looking into how to update our governance and, you know, help enable more folks to join the party. Also, the Red Hat team has kind of stepped up to help, and there are more folks that have expressed interest in supporting it commercially, or as part of their portfolio of products. So yeah, overall we're very happy with the progress so far. Excellent to hear.

All right. While we wait, does anybody out there have anything they wanted to chat about, any questions, any conversation you want to have? Okay, in terms of the TOC: I did a presentation to you guys on REX-Ray, and we are planning to submit that to the TOC, or at least have a discussion at the TOC meeting next week. So I'll be presenting it there, and then we'll request at that point that we get an invite to do a proposal. Does anybody have anything they want to chat about regarding REX-Ray, any questions or concerns? No concerns, but was that recorded? I missed the last meeting. Yeah, it was, actually. I think the actual GitHub page is a little behind.
I don't think it's been updated in terms of the recordings, but these have all been recorded. So I will ask Chris Aniszczyk to go to the archives and make sure the recordings get published on the website, and then you can check the GitHub page and you'll be able to pull them up and listen.

Yep, okay. So we did have MinIO on the agenda. Unfortunately they're a no-show so far, and I'm not able to get hold of them. Unless we have anything else to chat about, I think we can call this one early, and then hopefully we can do two sessions two weeks from now on the 28th: we'll get MinIO to present, and we'll also get OpenEBS to present.

Hey, this is Ugur. I don't know if my connection is good, but I'm on. I'm working remote, so I'm not with the team in the office, but if you want, I can do the presentation. Did you get any contact from Garima? No, I haven't gotten any contact. You're a little bit choppy in terms of your voice. Okay. Okay, thank you. For anybody just joining, we're waiting a minute for a dial-in to get a better connection so we can hear. Still pretty choppy, I think. Is it choppy for everybody? Yeah. Okay. All right, this should be a bit better. Hopefully, yes, that's tremendous. All right, perfect, sorry.

All right, so it looks like the guys didn't join from the office, right? Yep, not yet. All right. So, Clint, just expectation-wise: this is a repeat of what we were trying to do last week with AB, right? That's correct, a high-level review. Okay, so let me get the documents, and I'll just go over what we were planning to present to you guys. Do you have the document shared, or should I just go ahead and share my screen? Share your screen; it's open for you to do that.
Okay, so let me set it up real quick here. I had a chat with the MinIO team about a month ago, and we were talking about cloud native, about what's going on with MinIO, and I think they have a good perspective on what's important in these environments and how storage can fit in as these cloud native environments emerge. So I asked them to open up a bit about their perspective on what market MinIO is going to help fill, and also on what MinIO is and where some of their successes have been. So that's some of the context for you. Sure. So I'm just going to bring up the presentation and get started; give me just 20 seconds. Thank you.

All right, do you guys see the full screen with the opening page? Yes. All right, perfect. Okay, I'll get started. I'm just going to give you a general overview of who MinIO is and what we do. It's a general introduction; if we need to go into details and I don't know the answers to particular questions, we can always get the right people as a follow-up. But I wanted to talk to you about how we designed it, the minimalist principles behind MinIO, which market we're playing in, and so on. It will be very high level and general; you can stop me any time you like and we can go into details, whichever direction you want to take it.

So MinIO, as you can see from the name, is based on a minimalist philosophy. In most cases, when we go and explain this to enterprise clients, it's hard for us to explain how a cloud native type of storage solution plays into the future of things, especially in the storage world. But now we are talking to people who are natively cloud native in their mentality and philosophy.
It's so much easier to talk to a crowd like you guys. So, essentially, MinIO is a very simple, high-performance object storage system, designed with cloud native architecture and design principles in mind.

The second page just goes over, as most of you have probably heard, the waves of change happening in the storage world, from the disk to the appliance to the cloud. MinIO was established as a private cloud alternative to an Amazon S3 type of storage solution. MinIO is 100% S3 compatible, and we take S3 compliance to heart: we always make sure S3 compliance comes first and foremost, and then sometimes we implement things that are a little bit different, or improve on areas where we think S3 didn't go far enough. But S3 compliance is always at the center of everything we do at MinIO.

This picture just shows how storage is growing, especially with IoT and other things that are generating a lot of data, video in particular. And the shift is that everybody is moving toward the cloud, whether public, private, multi-cloud, hybrid cloud, and all the other buzzwords you see in the industry. Everybody's numbers are different, but this is just a directional view of where the trends are going.

Slide number three is where I wanted to focus. Sorry, I'm just going to turn off my notifications so they don't disturb you guys.
All right. So, essentially, with MinIO cloud storage we always aim to be the Amazon S3 alternative for all of the things you see at the bottom. MinIO has a very vibrant community, and just recently we hit what are, in our opinion, very key milestones: 10,000 stars on GitHub, and we just passed 50 million downloads on Docker Hub. We are very happy and very proud of our community, the work they've done, and the support they've shown MinIO over the last couple of years. And we are just in the early days of our commercialization for enterprise customers.

Essentially, we segment this into two, public cloud and private cloud, and we have done a lot of integration work up to this point. We've done work with Microsoft Azure, for example; we have integrated with their managed service offerings, so you can essentially run the MinIO software on Microsoft Azure as a managed application. The way it works is, say you wrote some code against S3 in the past, and then, for various reasons (I call this the Walmart syndrome),
you were forced to change cloud vendors from Amazon to Azure. Your software is written against the S3 command set and S3 semantics, and now you would need to really dig into Blob storage and make sure your data in Blob is enabled and usable for your use case. What we do instead is essentially sit on top of Microsoft Blob storage and make it S3 compatible, make it look like an S3 storage system. That's one of the neat things, and it's very interesting to a certain customer segment: their data immediately becomes S3 accessible, whichever cloud they go to. So that's a neat integration we've done. Same with the other cloud vendors, as you can see; those are similar integrations.

And sometimes, just because MinIO is very simple and the binary is tiny, people take it into simple environments, just single tenant. Whether you want distributed mode or single-tenant mode, you can run it very simply. Some people even run it on Amazon AWS, on top of EBS, and we generally advise them not to do that, because once you have S3 within AWS there's no real need to run MinIO; but because of MinIO's simplicity, certain users, especially in the community, prefer to do that.

So on the other side, go ahead. Quick question, if I may. Yeah. How exactly is it deployed as an adapter in these different clouds, Azure and Google Cloud? So, in terms of Microsoft Azure, we did a full-blown, customized implementation. I'm not sure if you have looked at the way Microsoft did their managed services offering: they basically made it so ISVs can do a SaaS type of offering, where you pick your VM, your VM configuration, how your load balancer is set up, and so on, and then you create that templated, very closed environment that's just for your software.
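The gateway idea described above (serving existing blob data through an S3-style interface while leaving the stored bytes untouched) can be sketched as a toy. This is an illustration of the pattern only, not MinIO's actual gateway code; every class and method name here is invented for the sketch:

```python
# Toy illustration of the "gateway" pattern described above: an S3-style
# front end that passes objects through to an existing blob back end
# unmodified, so the same data stays readable via the native interface.
# All names are invented for this sketch; this is not MinIO code.

class BlobStore:
    """Stands in for a native blob service (e.g. Azure Blob)."""
    def __init__(self):
        self._blobs = {}

    def write_blob(self, name, data):
        self._blobs[name] = data

    def read_blob(self, name):
        return self._blobs[name]


class S3StyleGateway:
    """Exposes S3-flavored put/get calls, backed by the blob store.

    Only the API surface is translated; object bytes are stored as-is,
    which is the property the speaker emphasizes.
    """
    def __init__(self, backend):
        self._backend = backend

    def put_object(self, bucket, key, body):
        self._backend.write_blob(f"{bucket}/{key}", body)

    def get_object(self, bucket, key):
        return self._backend.read_blob(f"{bucket}/{key}")


blobs = BlobStore()
gw = S3StyleGateway(blobs)
gw.put_object("photos", "cat.jpg", b"\xff\xd8jpeg-bytes")

# Readable through the S3-style front end...
assert gw.get_object("photos", "cat.jpg") == b"\xff\xd8jpeg-bytes"
# ...and still readable, unmodified, through the native blob interface.
assert blobs.read_blob("photos/cat.jpg") == b"\xff\xd8jpeg-bytes"
```

The same pass-through property is what lets the transcript claim that an Isilon file or an Azure blob stays accessible via its native protocol even after being served as an S3 object.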
So that's their newer model; there are a couple of different variations in the Microsoft marketplace, but that's how they do it. Our implementation is a specialized, customized packaging for Azure, and it only works on Azure, because they dictate the conditions. For Google Cloud, we are on their launcher, the section Google has for a much lighter version of this managed hosting environment. They basically do a bring-your-own-license model: you can just launch the application, or put the application on top; it's much lighter. Microsoft's is the deeper one, so there we went full managed services offering, and we had to do some customized work for that.

So when the application launches, or at least the VM templates you specify for MinIO in those clouds, is it using Kubernetes, or is it just running MinIO? That's on the right side; I hadn't gotten to that one. We are very flexible: we do storage, and we leave the orchestration to Kubernetes. In most cases, when we suggest an implementation, it's on the right side, under private cloud, where we have the Kubernetes integration and the cloud native pieces. It's what I'd call a lightweight integration: Helm, or basic file generation and then launching MinIO with whatever configuration you want. On the minio.io website you can just go in, put in your configuration, and it creates the YAML file, and then you use that to launch. We leave the orchestration, and all the work we don't do well, to an outside orchestration engine; Kubernetes is what we go with and what we like. Same with Docker Swarm: you just create a swarm file, use it to launch, and modify it as you wish. Our binary is about 20 megabytes, and we just lightly integrated it into these systems
so that we keep flexibility across different platforms. Same with Cloud Foundry and Pivotal, and same with Mesosphere: we did all of these as what I call light integrations, though they still require a lot of testing and integration work. The users of those systems handle the orchestration, and we just do what we do best, which is storage: durable storage, erasure coding, done in a lightweight way with no separate metadata service, and in a very high-performance way. Those are the things where we shine, the things we do well and know how to do well. So we don't claim to do the work of Kubernetes in terms of launching, orchestration, multi-tenancy, and so on; we just do all the storage necessities that are required, and that's what we focus on. I hope that answers the question. For the other clouds, again, Microsoft Azure was a special case where we had to do custom integration, because Microsoft's managed services dictate certain conditions.

So on the private cloud side, if you deploy on something like a Kubernetes cluster, how are you backing the store? Is it like a host path? Are you using local storage? So, we can do it a couple of ways. One is where we are responsible for the durability of the storage: we do erasure coding, which is what we do best, going across multiple disks or multiple servers, and we provide the durability of the storage that way. On the back end we can use XFS, say, as the local file system. MinIO also has what we internally call gateway mode, where it can sit on top of other storage systems. In that private cloud range you can see EMC Isilon, for example; it can sit on EMC Isilon. And the same with Blob: we sit on top natively and don't modify any of the content, whether it's on Isilon or on Microsoft Azure Blob.
We leave the contents, file system or not, unmodified. So the good thing with MinIO is that we don't write in any proprietary form or format to the back end. Therefore you can access the same Isilon file using the file access protocols, or in Microsoft Azure you can use the native Blob protocols, and you can still access the same file; but on the front end you serve it as S3-compatible storage, and PUT, GET, and all the other S3 operations you want to do will still work.

And is the back end pluggable? Meaning, if I deploy on Kubernetes on top of, for example, Google Cloud, could I use GCS as the back end? The back end of MinIO, you mean, right? Yeah. So, on the back end, you basically give MinIO block or disk storage and we distribute across it. Think of it like Linux XFS local file systems: you have six of them, twelve of them, many of them, and MinIO writes the erasure coding across all of those disks. Same thing in Google Cloud: you just provide Google Cloud persistent disks, use them as the equivalent of physical disks, and MinIO will do the erasure coding across all of them and put them into a pool.

So my question was slightly different. I understand that you could deploy Kubernetes on Google Cloud and then use PDs to back MinIO; the question was, could you also point it at the Google blob storage, GCS, instead of PDs? So, the answer is yes to start with, but I have to be careful, because Google also provides their own S3-style interface, so that's why I'm a bit confused about why you would want to do that. But if you're saying you don't want to use Google's S3 interface and you want MinIO's S3 backed by Google storage, I've got to double-check, but I'm pretty sure the answer is yes. Okay, thanks. Yeah, it's a similar thing.
The reason I never thought about it is that Google, in my opinion, already has a poor implementation of the S3 interface, so I was always focused on using Google persistent disks. So I understand your question, but I'll double-check on that to be sure. Sounds good.

Yeah, so, just to continue: all the other flavors of private cloud are basically either orchestrators like Kubernetes or Docker Swarm, or Intel JBOF. We are also trying to educate the user base and change behavior toward using more solid state disks, combined with software we have that takes advantage of certain performance features in the chipset, especially in Skylake. With Intel JBOF, or any SSD JBOF (just a bunch of flash, instead of just a bunch of disks), you can use any of these technologies to get a supercharged object store. Normally in the marketplace, people look at object storage as tertiary
We're trying to explain to the market or change the Tinking around block storage in a way that SSD Enabled with very fast razor-coding and very fast fast throughput nowadays people are going to multiple 25 gigs or 100 gigs we believe that the Way people especially in the cloud native world especially in the newer generation of databases from couch DB to MongoDB to all sorts of other databases that are S3 compliant in the back end people are going to change their behaviors and The enterprise will come soon as well that they are going to use databases more of snapshot targets with a high performance High performance object when it's provided whether it's in private cloud or public cloud and that changes early days in my opinion but we see that as a As a trend in the market slowly happening in some of the forward-thinking areas and the other I so long I mentioned VMware as well as a recent kind of Compatible stories you can run it on top of these and as well so the other slides are about which I mentioned the mean your popularity in the development community with the slack member the number of stars that we had as well as the Pulse docker pools is a nice number to track However, it repeats as you can imagine, but still it's a very good number compared to some of the other like some of the in open source storage world clearly we are Getting some traction and compared to other projects. We're still we believe in a good place and the trajectory is very Very good in terms of where it's going. Do you guys have any large-scale deployments that you can share? In terms of the enterprise we just in the early days of our kind of commercial enterprise deployments We have a couple of POC's that's in the works, but they are not in production yet So and plus we don't have the approval from those large clients financial and others that They're shy ones. I used to be working at a financial firm and they never want to talk about it. So But we don't we are just in the early days. 
In the community, though, we have a lot of references and users; on the commercial side, we're just starting that journey, to be fair. Sorry. Sounds good; I don't have any specifics I can point to at this point.

All right. So, basically, I think I've covered some of these features: distributed mode, the erasure coding we do, bitrot protection. These are all things most of the classical commercial object storage vendors already do, and we have to do them as well; they're baseline, in our opinion. So you already know MinIO is S3 compatible; we provide erasure coding and bitrot protection. We are also switching to HighwayHash in an upcoming release, a different algorithm that performs much better for the way the bitrot checking is done over the erasure-coded data. Bitrot is a concept in the object storage world where the disk, through the mechanical aspects of the disk going bad, silently flips bits on you; protecting against that is one of the things you have to do to be a very durable, strong object store.

And distributed mode, which I mentioned: most of the enterprise clients we work with, and most of the larger deployments, use distributed mode. We relaxed some of the restrictions in the latest release, but historically we did it as N/2: essentially you can have 16 disks, and up to 8 of them can be lost, and you'll still have
That's the type of series and conservative way we were doing it with some of the S3 changes as well as we wanted to relax that and introduce More relaxed kind of a policy where you can change the durability of the instead of requiring to have 8 discs To be to be the you can go less durable to have for disc or a treat this type of a scenario And we are kind of working it with the user base and the enterprise implementations to relax that the reason We feel more comfortable relaxing that kind of a requirement instead of n by 2 meaning if you have 16 discs 8 discs will be required for your parity Because SSD is the use of SSD is making it People are getting more comfortable compared to spinning rust. I call it the HDD disc People are much more comfortable in their ability to withstand and they're more durable There is couple of years of durable to difference between the HDD and SSD and There's not really good scientific data on it But a few that's been done shows that SSD is proven to be more reliable. So we feel more comfortable with these J-bofs the just bunch of flash or SSD discs that are in the service We can relax some of the stringent durability requirements that we had in the earlier days of Minio Other things that we have as we clearly have the encryption for object-based encryption That's also S3 Compliant feature that S3 has client client side as well as server side and we also working on the pieces There's a couple of we have full S3 compliance on the encryption side We're just working on two feature set that That's still in the works. 
Those are still in the works. The first is multipart uploads: if you have large objects, you chop them into smaller pieces and upload each with a PUT, and we're working on encryption for those. The second is ranged GETs, where, when getting an object, you provide the range of bytes you want to pull, which makes certain implementations much faster; you can increase parallelism as well as performance.

And also Lambda-style compute. We worked on event notifications early on, with the same philosophy I mentioned for orchestration: we are the storage system, we do storage well, and we leave the other management tasks, in this case monitoring or acting on events, to other systems. So we built integrations for Lambda-style computing: when you have, for example, multiple objects being uploaded into the system, you can trigger events to add metadata. If you're uploading checks, or different images that need to be classified or modified, then instead of making that part of the storage system, you trigger events and do the processing afterwards.

So, I covered some of this during the initial part of the presentation, so I'm going to go quickly. We worked on private cloud in different segments: NAS, with the Isilon example I gave you (Isilon has its own NAS file interface, but we can run on top of Isilon, leave the original content untouched, and serve it as objects with S3 compatibility on the front end); JBOF, the just-a-bunch-of-flash story we talked about; and the Kubernetes and Cloud Foundry integrations as well. Going into the details of how the architecture works,
I'm going to go very fast on these slides; please stop me if you're interested in any particular segment or area. This one describes exactly what I mentioned: applications can access the back end directly, or go through MinIO when they need S3-compatible storage.

This JBOF example I went through very fast, but we believe that with recent advances, especially Skylake, there's vector math, AVX-512 in the Skylake chipset, that's very critical for some of the instructions we can take advantage of. It improves performance on the order of 10x in our testing, especially for the hash algorithms and the streaming hash of the erasure coding we do on the back end. We really like this technology and we keep improving on it. One of the folks at MinIO is very specialized in this; he's done a lot of work, contributed back to the Go community around the way we use it, and written a few articles, and if anybody is interested I can provide the details. Combine that with 100-gig NICs (Mellanox has a 100-gig NIC, and some implementations use 25-gig NICs): once you open up the pipe, with the 10x erasure coding and core storage performance, and then combine that with fast storage at rest, Intel 3D NAND or Samsung (there's a long list of 3D NAND providers in the market nowadays), we believe object storage becomes more mainstream, compared to the last generation, the past 10 years, of object storage. And this slide basically tries to explain that with just a bunch of disks, you can
Sorry using the nend and 3d you can get very high density Very high performance if you have the right core storage Performance like the things that we have done with the fast hash I would abx 512 so this is the um What what I mentioned about the lambda functions lambda functions that um, we we We implemented in terms of the hooks into different environments whether it's elastic search or red is that when you're uploading downloading or Modifying certain objects using the minio storage. You can create the similar to an audit log or a batch processing We also included the lambda functions into that mix This is um, just a slide of how we do the Vendor month read someone hashing and as I said, we are Um about to change to a different algorithm all of whatever we do We always make sure that it's backward compatible with highway has changed We're gonna make sure that everything else is also compatible. This is exactly showing I think somebody was asking me if we could use the persistent disk with the google um google storage This is the depiction of exactly whoever stores to this whether it's xfs or Blob you essentially use them in a very similar way in terms of the underlying Erasure code we call it excel. Uh, it's just the codename what we use for the Way we do the erasure coding. So if you have um two disk I mean for disk it will be two data and two parity And on the right side, it's just the the config file or the json file that we have for all of the Details what the algorithm we use the data you see is two parities to the block sizes If you have a large pile the block size is like that. Then you put it into Multiple parts. So if you look at the bottom part the number Main part one, that's what I was talking about in terms of the Multi-part uploads or downloads All right, so and the rest is basically the integration into kubernetes and cloud foundry I'll Try to focus on the kubernetes. I might say that I'm a new New be on kubernetes. So I'm not the expert. 
So you guys probably know all the details better than I do; I'm an expert in other areas, but not on Kubernetes, just as a disclosure. So this is how we envision it, the high-level introduction to how we do the Kubernetes integration or enablement. Essentially, if you go to minio.io, we have a really nice, simple feature that translates between what you need for S3, the access key and secret key, and the mode, which is whether you use the standalone version of MinIO or distributed. The examples I've shown you are all the distributed version of MinIO; most of the enterprise or larger deployments use distributed mode. All you do is put in your access key and secret key, pick distributed or standalone (say distributed), put in the number of nodes, say four, and the size of the storage, and it generates a file for you. Then you do kubectl create -f against that minio-deployment.yaml file, and that's the lightweight integration we do. Similarly, we have helm install stable/minio. That's pretty much how we integrate, at a very high level.

But we are very open to different integrations; if the community or enterprise users want a specific way of doing this, we can work with that. I'm just covering the baseline: how we do it and how we present it to the users and the enterprise community we have on Kubernetes, how they should be doing it. It's simple: you use persistent volume mappings, and we translate between how MinIO works and how you would be using your persistent volumes.

The second one is basically work we did very similar to the Azure managed services work.
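The generate-and-apply flow described above can be sketched as a manifest. Every value here is a placeholder standing in for what the generator on minio.io would produce; this is an illustration of the shape, not the generator's literal output:

```yaml
# Sketch of the generated manifest the speaker describes: a 4-node
# distributed MinIO StatefulSet with access/secret keys and per-node
# storage. All values are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 4                      # "number of nodes" chosen in the generator
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          args:
            - server
            # each server lists all peers for distributed mode
            - http://minio-{0...3}.minio.default.svc.cluster.local/data
          env:
            - name: MINIO_ACCESS_KEY
              value: "CHANGE_ME_ACCESS"   # placeholder
            - name: MINIO_SECRET_KEY
              value: "CHANGE_ME_SECRET"   # placeholder
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi           # "size of the storage" from the generator
# Launch with either of the commands mentioned in the talk:
#   kubectl create -f minio-deployment.yaml
#   helm install stable/minio
```

Note the division of labor the speaker keeps emphasizing: the manifest's volume claims and replica count belong to Kubernetes; MinIO itself only consumes the mounted paths and handles the erasure-coded storage across them.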
We also integrated natively into Pivotal. This is more of the same kind of work, but not a lighter integration, a deeper one, because Pivotal, similar to Azure, is a very closed ecosystem, as most of you may know, so we had to do certain things to integrate with it. This is just showing their dashboard and how MinIO integrates into Pivotal: a developer can go to the marketplace within Pivotal and pick a MinIO instance, and then, just as we do with the YAML file on Kubernetes, you put in your access key and secret key. It's a closed ecosystem, but it's fully integrated; that's essentially what it is. And this is just a CLI example of the same thing. The last page here is the high-level architecture of how you could use it, with a use case on Cloud Foundry.

Most of this I've covered, so I'm just going to show some screens. This is basically what we have on the managed services deployment side. If you're not familiar with Azure, this is how you deploy: the number of VMs you need, your resource groups; essentially you have a load balancer and multiple VMs. In our case we recommend two or three for our managed hosting implementation. And this is just a high-level view of how applications interact within the Azure system, whatever they are, DNS or otherwise, and how they would be using the virtual machines. We have autoscaling enabled in Azure, so if somebody is doing processing that requires more power, it just automatically scales: fully hands-off managed services, wherever you need S3-compatible, high-performance storage against your Blob storage.

Hey, Ugur. Yep? How about the service broker? What is that enabling with MinIO? I assume it's really Cloud Foundry today, but what does that look like from a user experience? Service broker in which setup or implementation?
I think you referred to it in the Cloud Foundry setup. Right now I think we're talking about doing one for Kubernetes in the future, but what's the user experience like for that, if you have it? Say I'm under Cloud Foundry and someone who's a consumer wants to go spin up an app to use MinIO storage. What's the experience like?

So, I don't know if you're familiar with Pivotal; I tried this myself, and the user experience is very much within the ecosystem of Pivotal. Essentially, this is the user experience: this is how you go into Pivotal to launch a service, and in this case you pick the MinIO object storage service there. Then you configure your instance. Once it launches, that instance becomes a server endpoint, so you can do a command-line endpoint configuration from an app or a user CLI, or you can launch the UI against that instance within Pivotal. So it's a bit closed within Pivotal.

If you go to Azure, that's probably a better example of the user experience. Essentially, once you launch this Azure managed service (I don't know if I have a good picture of it), it becomes a managed service running within Azure, backed by all the things you would get from a full-blown cloud: a load balancer in front of it and a couple of VMs running the MinIO software, enabling you to access your blob storage that already exists within Azure, with an S3 front end. Just to simplify: the user uses whatever tools or code they have (a CLI, say) and just introduces it as an S3 endpoint. This managed app launches, it has an IP; you take the fully qualified domain name or the IP and introduce it as a CLI endpoint, or you can use the UI we made as part of this deep integration.
We made this very simple for the user: we have a domain name for them where they put their storage account name. As part of this setup process we ask them for authorization, since we need authorization to access their blob storage. As part of that authorization we check the storage accounts, whatever storage accounts they already have in Azure, and we can enable them to reach that blob storage using a UI that MinIO has, a very light, simple UI, with their storage account and full access to their blob storage. They can upload and download using that, or just use the S3 command set, tools, or code they already have against this endpoint.

So I think there are two distinct consumers here, or two different roles that I'm thinking about. One is the provider: that's the person who's going to go to the Azure console or the Pivotal console and launch a MinIO instance to make it available. And then you've got your developers and your consumers, who are just expecting to get storage from somewhere.

That's right.

And I think there's a manual approach to it right now, which is: hey, here's your S3 endpoint, just plug it into your app, and it's all fine and great. But I think the service broker side of it reflects that a user or a developer in this space should be able to work with a standard API and use that to broker, to create buckets, to enable the things that are going to help them store their data. Am I misunderstanding that? Does MinIO do that piece of it? How is the bucket creation actually handled, or is it just an instance?
Yeah, yeah, no, I get your question. So we have multiple SDKs that we support, and you can use the MinIO SDKs, or the language SDKs, to do that natively. On top of that, we have something called MinIO Client, or `mc` for short. `mc` has multiple commands useful from an administration perspective, or for creation of buckets; for example, we have an `mc mb` (make bucket) command, and we have `mc` upload, download, compare, mirror, and copy: all the basic Linux-style commands you can envision in the new world of S3. We have put a lot of time and effort into developing this client-side command-line tool; even our competitors use it for moving data, for object-to-file or file-to-object type scenarios. So we have that rich tool set, which is very popular in the community, and personally I really like it too, because it makes life so much easier, from configuration to movement of data to management of data.

That can be used, but there are also the APIs: we have SDKs, and on top of that, if you want to do any kind of automation, like Pivotal needed (it wasn't the create-bucket example, but say, on top of the YAML file we have for Kubernetes, you needed a deeper integration where you set rules that create buckets), you can call `mc`, or we can do a deeper, more programmatic integration. Both are possible.
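[Editor's sketch] The `mc` workflow described here looks roughly like the following. The alias, endpoint, credentials, and bucket names are all hypothetical, and the `run` wrapper just prints each command so the sketch reads as a dry run without a live endpoint; drop the `echo` to execute for real.

```shell
#!/bin/sh
# Dry-run wrapper: print each mc command instead of executing it, since
# running for real needs a reachable MinIO/S3 endpoint and credentials.
# Alias, endpoint, keys, and bucket names below are all hypothetical.
run() { echo "+ $*"; }

# Register an endpoint (an Azure gateway, a Kubernetes service, any S3 target):
run mc config host add myminio https://minio.example.com:9000 ACCESSKEY SECRETKEY

# Basic bucket and object management:
run mc mb myminio/backups              # make a bucket
run mc cp report.csv myminio/backups/  # upload an object
run mc mirror ./data myminio/backups   # mirror a local tree into the bucket
run mc ls myminio                      # list buckets on the endpoint
```

This is the "rich tool set" route; the same operations are available programmatically through the MinIO SDKs when an integration needs to create buckets or set rules from code.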
We just don't know yet what's needed to do that work or that integration, but it's very simple for us to do, because we are naturally built that way and can simply integrate. Even without any official integration work, we can use the `mc` command set to do all of the basic configuration and management of objects and buckets. You can just do `mc mb` to create all the buckets you need, and `mc config` to add all of those endpoints we were talking about, whether it's Azure or Isilon or Kubernetes.

All right, we've got two minutes left until we hit the top of the hour. Is there anything you wanted to finish on real quick? And we have time for one question.

I think I'd like to take the questions, because the rest of it is very high level and we already covered most of it; it's the same thing with Azure, Microsoft's cloud, or Google Cloud and Amazon S3, so no need to go into details there.

Okay, so questions, if I may. One is: what is the competitive landscape here? I'm not familiar with this area. And two: what are your plans
with respect to the CNCF?

So, the first question. We have kind of a different perspective on the landscape of object storage, but if you take the name "object storage" and the classical players in that area: in open source, we clearly have a lot of traction. But if you mix in the commercial implementations of object storage, there are multiple players, from CleverSafe (now acquired by IBM) to Cloudian to Scality to Ceph, in a different way. Most of them have a commercial mixture of file and object storage but nonetheless present themselves as object storage, and they have been out in the market for many years, ten-plus in some cases.

In our case, we just focus on the lightweight, cloud-native side. Cloud native meaning the architecture and the philosophy of how we designed it: multi-tenancy, where you can just instantiate or spin off an instance of MinIO for each tenant, each department, each area. That lightness, and the strong, durable storage core, the erasure-coding implementation we did. We focus on that and try to present it in these integrations, all the ones we talked about, within Kubernetes or other areas, so that orchestration is done for storage in a much simpler way. That's what we focus on, rather than comparing apples to oranges, because all of those other companies started in a different era and different segments, so it's kind of hard, or unfair to them or to us, to compare them one on one.

But that's the market landscape: if you look at object storage, if you look around and see who does object storage, most of them have software or appliances, hardware boxes in some cases; that's how they play it. Whereas we focus on integration into the modern era, the more cloud-native type of environment, in a very lightweight fashion, as I described, because it's just in our culture and nature. So that's the answer to your first question.

Second question: we really like to see cloud native,
I mean CNCF. I believe you guys are focusing on block and file already, but there's no driver for object to be the de facto, you know, piece there. Maybe it's hidden within CNCF, but OpenStack had all those things: Cinder started first, then Manila wasn't there and Manila was created, and object was natively there. Whereas in CNCF I see the opposite: object is only kind of there; I've been reading up and bringing it up myself, with REX-Ray and other things, and also with Rook and other things happening there. But in our belief, with the changes in the market, object should be another abstraction layer or another driver for integration in CNCF, and we would like to help and contribute, listen to you guys on how you see it, and help in that direction. We want object to be on the radar for the next generation of CNCF. Does that answer your question?

Yeah, thank you very much.

Excellent. Okay. All right, we're past nine o'clock. Anybody else have anything real quick? Nope. All right. Thank you for presenting to us this morning. I'll have it posted... oh, go ahead.

Great, sorry, it was a bit rushed; I was not prepared. I hope I was able to answer your questions and give you an idea of who we are and what we do.

Yep. And if you guys have any other questions, feel free to email the group, and we'll make sure we get you in contact with the right folks to get an answer. Looking forward to you guys participating and helping us build community around the cloud-native ecosystem, and especially around this object space. Very exciting stuff.

All right, no problem. Thank you, guys.

Oh, hey, one quick thing before we drop: are you still online? Yep. Could you send out the slides that you have? Sure, I think we shared them with Clint, and he was going to do whatever he does. Yeah, exactly.
But if not, we can definitely re-send. Okay, so that should get to you. Okay. Yeah, thank you very much. Cool. No problem.