Okay, yep. Hi, hello, welcome to this presentation on the Infinidat storage solution. My name is Gregory Touretsky, product manager at Infinidat, responsible for the file and cloud offerings that we have.

If you are coming from the IT side and you're about to choose a storage solution, you probably have to pick two of the three. Right? That's the standard situation: looking for a reliable, affordable, or fast solution, you will probably have to focus on two of those, and you can find plenty of offerings. But should it be this way, and can we come up with something that meets all aspects of that triangle? So let's look into the solution that we provide at Infinidat, the product that we call InfiniBox. You can find more information about it right here at the booth.

First of all, talking about the reliability of the solution: we provide storage that is very reliable. We're talking about three controller nodes within the storage frame that we offer, interconnected through InfiniBand. They provide a triple-active configuration, and I'll skip the RAM and the cache for a second. We use JBODs connected to those storage controllers, so we can go up to 480 physical drives within the storage frame, and every single one of those servers within the frame can see all 480 drives at the same time.

Now, the data itself sits on the hard drives. However, we also have large-scale RAM within those controller nodes that provides a write cache. So when data is written into InfiniBox, it actually lands in RAM, it is copied through the internal InfiniBand connection to a peer server, and only then does the acknowledgment go back to the client. So the write latency here is basically RAM latency; it's very, very fast. For read access, for read optimization, we have a massive SSD cache across those three controller nodes.
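The write acknowledgment flow just described (write into RAM on one controller, mirror to a peer over the interconnect, only then acknowledge the client) can be sketched as a toy model. The class and method names below are purely illustrative; this is not Infinidat's actual implementation.

```python
# Toy model of the write path described above. A write lands in the
# primary controller's RAM cache, is mirrored to a peer controller
# (standing in for the internal InfiniBand copy), and only then is
# the write acknowledged to the client.

class Controller:
    def __init__(self, name):
        self.name = name
        self.ram_cache = {}  # write cache held in RAM

class ToyInfiniBox:
    def __init__(self):
        # triple-active configuration: three controller nodes
        self.nodes = [Controller(f"node-{i}") for i in range(1, 4)]

    def write(self, key, data):
        primary, peer = self.nodes[0], self.nodes[1]
        primary.ram_cache[key] = data  # 1. write into RAM on the primary
        peer.ram_cache[key] = data     # 2. copy to a peer over the interconnect
        return "ack"                   # 3. only now acknowledge the client

box = ToyInfiniBox()
print(box.write("blk-0", b"payload"))  # prints "ack"
```

The point of the ordering is that the client sees RAM-level write latency, while the data already survives the loss of one controller.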
We're going up to 200 terabytes, actually over 200 terabytes, of SSD cache within the system, so most of your random reads will be served from the SSD cache level. We'll talk about performance in the following slides, but we provide a highly reliable system with very high performance, and a lot of focus goes into the reliability and availability of the solution.

Talking about functionality, what do you get with this? We offer a unified storage solution: you can do NFS, you can do Fibre Channel, you can do iSCSI, you can actually do FICON for the mainframe, all to the same box. We offer very scalable snapshots, both read-only and writable, that you can use with our systems. We provide compression within the box, efficient replication for disaster recovery purposes, and the data is encrypted.

We also put a lot of focus on the manageability of the solution. Everything you do with the system can be done through the RESTful API, which is fully documented and available to developers, and we provide a Python SDK that can also be used to manage the system. The command-line interface and the graphical user interface that we provide are built on top of that RESTful API. So again, everything you want to do with the system can be done through the RESTful API or the Python SDK, and then we can build things on top of it. We have a Cinder driver that leverages our Python SDK, we have Ansible playbooks built on top of the Python SDK that can easily be used to manage and provision capacity on InfiniBox, and so on.

Talking about scalability: what you see here are a few snippets of commands running on InfiniBox. This is an example of the number of volumes that we have with Cinder: we support up to a hundred thousand volumes on InfiniBox, which is a pretty big number. We also do NFS, so we can do up to four thousand NFS file systems per InfiniBox, and every single file system can be as big as the entire InfiniBox capacity. So if we're talking about 2.8 petabytes usable before compression within the InfiniBox, you can have a single file system as large as the box. You can see an example here of a 2.4-petabyte NFS file system.

Talking about the number of files within a file system: we have a very efficient implementation of the file system, which we built on our own and use internally within InfiniBox, and we have basically no limitation on the number of files we can create. What you can see here is the output of the df -i command: billions of files within a single file system, without any performance degradation in dealing with this. We can have a file of a petabyte in size; we can actually have files bigger than a petabyte. Again, I haven't yet seen a customer use that in production, but we support it, and anything below that, obviously. So the scale is definitely there.

Talking about performance: we get better results in the lab, but what we're showing here is what we get with customers, whether in production or in testing that the customer runs in their own environment. You can see some examples of production workloads where customers are getting over four to four and a half gigabytes per second of write throughput, or four gigabytes per second of reads, all at sub-one-millisecond latency on a hybrid system. And this is what we're getting in some test scenarios that our customers run: over 900,000 IOPS with small blocks, or over 12 gigabytes per second with larger blocks. So the performance is definitely there.

How does all this apply to the OpenStack environment? I showed this reference to the Python SDK and the Cinder integration. Basically, you can get everything from a single box, right?
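Going back to the manageability point for a second: the REST-first model, where the CLI, GUI, Python SDK, Cinder driver, and Ansible playbooks all sit on top of one documented RESTful API, can be sketched roughly as below. The host, credentials, and endpoint path are placeholders, not confirmed InfiniBox resource names; consult the product's API documentation for the real ones. The sketch only prepares the request, so it runs without a live system.

```python
# Sketch of driving a storage system through a documented REST API,
# as the talk describes. Host, credentials, and endpoint path are
# placeholders. We prepare the request instead of sending it so the
# example runs offline.
import requests

session = requests.Session()
session.auth = ("admin", "password")  # placeholder credentials

req = requests.Request(
    "GET",
    "https://infinibox.example.com/api/rest/volumes",  # assumed endpoint path
    params={"page_size": 50},
)
prepared = session.prepare_request(req)
print(prepared.method, prepared.url)
```

Higher-level tooling (an SDK, a Cinder driver, an Ansible module) is then just a wrapper around calls like this one, which is why everything the GUI can do is also scriptable.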
We have the unified storage solution, so you can take our block storage over Fibre Channel or iSCSI and expose it and manage it through Cinder. That's the obvious part; we can show it here at the booth. All of these volumes can be created, attached to Nova instances, and so on. In addition to that, you can use our NFS offering and use NFS file systems as shared storage for Nova instances. So if you have multiple Nova hypervisor hosts, instead of using local storage for the instances you can use shared capacity, and then you get live migration of VMs between multiple hypervisors. The same thing applies to Glance: you can take an NFS export from InfiniBox, mount it on your Glance server, and store all the images there on the shared file system.

We also see examples where people put our block storage behind object gateways. There is a common practice of deploying things like Swift and other object solutions using local drives across multiple servers. Some people prefer instead to go this route: allocating capacity from our block storage, leveraging the protection that we offer, and reducing the waste of disk space caused by the replication of Swift or other object storage solutions.
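The capacity argument can be made concrete with a little back-of-the-envelope arithmetic. The 30% array-side protection overhead below is an illustrative assumption, not a published Infinidat figure; the 3x factor is the default Swift replica count.

```python
# Back-of-the-envelope capacity math: object storage kept as 3-way
# replicas on raw local drives versus object nodes backed by a block
# array that already protects the data. The 30% protection overhead
# is an illustrative assumption, not a vendor-quoted number.

raw_tb = 1000.0

# Swift-style 3x replication on raw local drives:
usable_with_replication = raw_tb / 3           # one usable copy out of three

# Array-side protection (parity/spare overhead assumed at 30%):
usable_with_array = raw_tb * (1 - 0.30)

print(round(usable_with_replication, 1), round(usable_with_array, 1))
# prints "333.3 700.0"
```

Under these assumptions the same raw capacity yields roughly twice the usable space, which is the trade-off being described.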
So they can bring up multiple object storage nodes using our block storage behind them, and leverage our protection instead of spending capacity on replication or erasure coding at the Swift level. So again, you can get all of those offerings, and that can serve an entire OpenStack cluster or other environments. I have many customers doing things that are not OpenStack, and OpenStack is definitely a good one too.

To summarize: we are providing a highly scalable solution that is highly functional and manageable. You can leverage our offerings with Cinder or Ansible or anything else, or you can integrate our solution into whatever management system you may have, through the REST API that we provide. You get high performance, you get high reliability, and you can get it at a really affordable price. I didn't talk about the price, but you can definitely come to our booth, which is right behind this stage, and talk about what we do and how we can do it for you. I think I covered that in 10 minutes instead of 20. So, any questions? Yes.

Thank you, Eric. So, you may do it at the software-defined level, but you can probably do it in two cases. Either you are a very small company that can afford doing things on its own, but that would be a very small scale, and one person may handle it if he has the time and likes it. Or you are a very large company that can afford to hire many people to support your software-defined infrastructure and develop and maintain it. If you are somewhere in between, and we see many, many enterprise customers that are somewhere in between, they usually prefer to get a fully supported solution coming from somebody that can provide that global support.
That's one point. The other thing, if you talk about software only: we are actually a software-only solution. I was talking about this hardware that we provide, the servers and disks and so on; in reality, all this hardware is basically commodity servers. We use servers from one of the leading server providers, and we can go with others; in our lab we actually constantly test various offerings from different vendors. You can think of Supermicro, Dell, Intel, Quanta, and so on, and we are basically trying to find the best fit for our needs.

If you come to our booth, you'll see that we're talking about very high availability, something like seven nines of availability for InfiniBox. That's way above everything we've heard here, for example in other presentations where people talk about four nines, five nines, and so on. And we're talking about very high performance. To be able to provide and guarantee this kind of performance and reliability, we have to ensure very high compatibility between our software and the hardware. So we run multiple regressions for every single system that is sent to a customer; before we ship a single system, it runs through several days of regression tests, and we constantly find all kinds of issues with different vendors. Once we standardize on one vendor where we see the lowest number of problems, or maybe no problems after some time, we try to go with that vendor; but again, there are cases where we may go with other hardware as well. So our secret sauce is in the software, but we also very carefully select specific hardware that allows us to guarantee high performance and availability.

Other questions? So, you can stop by the booth, B19, right behind you. When you go back, you can get all kinds of white papers that we have there, and you can also scan your badge and you may win a very nice headset. Thank you.
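As a footnote on the availability numbers quoted above: "N nines" translates into expected downtime per year, which is what makes the jump from four or five nines to seven nines dramatic. The arithmetic is generic, not tied to any vendor's measurement methodology.

```python
# Convert "N nines" of availability into expected downtime per year.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_per_year(nines: int) -> float:
    """Expected unavailable seconds per year at the given number of nines."""
    availability = 1 - 10 ** -nines
    return (1 - availability) * SECONDS_PER_YEAR

print(round(downtime_per_year(4)))  # four nines: ~3156 s, roughly 53 minutes
print(round(downtime_per_year(7)))  # seven nines: ~3 s
```

So four nines allows close to an hour of downtime a year, while seven nines allows only a few seconds.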