Good afternoon, everyone. My name is Kiran Srinivas Murthy. I'm going to present a topic on hyperconvergence and also talk about how Maxta has integrated with OpenStack. In general, you might have heard quite a bit about hyperconvergence, whether here, at trade shows, or elsewhere; there's a huge amount of infrastructure being built upon hyperconvergence. So what is hyperconvergence? If you really look at it, the idea of bringing all these different components together and building them into a data center with just industry-standard servers is the key to a hyperconverged model. But what does it provide us? One of the key aspects is that it provides very seamless scale-up and scale-out capabilities. It provides shared storage capabilities. It gives us the ability to do load balancing across nodes. And it supports physical as well as virtual environments. So the next question is, why would I want to deploy hyperconvergence? What benefits does it bring me? First and foremost, it really simplifies the whole of IT administration: how you deploy resources, how you procure resources. It makes the end-to-end process really simple. That's one key aspect. The second aspect is that it gives you the ability to build a Lego-like environment. You need more compute resources? You add more compute resources. You need more storage resources? You add more storage resources. You don't have to plan anything upfront. Today, one of the biggest aspects of data center management is envisioning what I need to procure in the next year, or two years, or three years. You don't have to go through that process anymore. It gives you very seamless, small building blocks that you can use to scale as you need to scale. The next aspect, which is pretty important, is the ability to run on any standard servers.
We all know that server technology is making it really worthwhile to deploy hundreds of servers in a data center. It gives you the performance that you need. It gives you the cost efficiencies that you need. And today, especially with Intel servers, you have 18 cores on a single socket. So you buy a dual socket, and you have 36 cores. You have enough CPU resources. So using those to deliver all the capabilities, you get a much more cost-efficient and cost-effective solution. That's one of the key aspects of being able to build a hyperconverged solution. So why now? What makes hyperconvergence relevant today? Why not five years back? Why not 10 years back? Why not two years from now? And if you look at how storage has evolved, understanding these concepts makes it clear why hyperconvergence makes sense today. If you look at the evolution, why are customers trying out different things? One of the key aspects is to simplify management. How can I easily deploy resources? How can I easily plan my resources? If you look at traditional enterprise storage today, whether it's EMC, NetApp, HP, or Dell, it doesn't matter: it takes a long time to configure. You need your server resources, so you coordinate with the server group. You need your storage resources, so you work with your storage group. You need your network resources, so you work with your networking group. Bringing all these together takes quite a good amount of time, sometimes months. So companies evolved and said, hey, what if I can make this simple? What if I give you the entire rack? It might be from different vendors, but I package everything in a single rack and give the rack as a solution to the customer. Companies like VCE came into existence with Vblock, NetApp with FlexPod, and others with SmartStack. All these converged solutions make it simpler, but it's not good enough.
It still takes weeks before you can configure it. So hyperconvergence evolved from there, to say: I'll make it really simple. It is one building block that you deploy and install. Installation takes minutes, maybe even seconds. So it makes it really simple for you to deploy. The other aspect, if you think of why now and not a few years back, is, as I mentioned, the availability of CPU resources. I have so many resources today that I can use them to deliver all the storage capabilities. Previously it was hard; there were maybe one or two sockets with four cores in them, and allocating enough for your compute and then allocating additional resources for storage on top was tough. So it was harder to combine all of them together. But now you have an abundance of resources to really package all of these things together and deliver the capabilities that you need. So if you sit back and think: are we going in circles? Twenty years back, everybody used direct-attached storage. Servers had a bunch of storage and you deployed your applications on top of it. And then end users said, ah, it's not working out. I need a SAN infrastructure. It gives me better availability. It gives me better resiliency. I need to move away from direct-attached storage. But now, if you look back, the approach is going back in circles, to say, hey, direct-attached is the right way to go. So what is different now? Why now? If you look at it, there are a few aspects that are really different. One big aspect is virtualization. It brings a whole different paradigm to your infrastructure. It makes it simple to deploy. It makes it easy. It gives me the ability to be agile. I can move my VMs anywhere I need. I can move them to any server I need. That makes it really different. Before, you could not do that with your classic applications. You could not move them from server one to server two.
They did not even provide you with availability, right? If something failed on server one, I could not use my application; I had to wait till my server came back up. Today, with virtualization, some other server will pick it up. You can move your VMs over. Things change. The other aspect that changed was the global namespace. It's one global namespace across all the servers that I have. So it doesn't matter whether your VM is running on server one or server 100; you get access to your application. Some of these things really made it different compared to what it was a few years back. So it's something more than just direct-attached storage; it's day and night. They are completely different from each other. The other important aspect is that the ability to scale out eliminates the need for storage islands, if I may say. Previously, if you really looked at it, I had storage for, say, my mail server, and I had storage for my database servers. They were all individually customized to those needs. Today, that's not true anymore. I have a global pool of storage, and the software is intelligent enough to handle what it needs. It's intelligent enough to understand: hey, this is a high-performance virtual machine or a high-performance application, I need to store it here; or, this is more of an archival-based application, I'm going to move this data to a different tier. The software is intelligent enough to deal with these things. That's what makes it unique, and that's what makes it easier to use direct-attached storage versus having to buy an enterprise-class storage array. The other question that comes to mind is: can I use it across all use cases? Is there a specific use case that I need to worry about, or is it pretty much across the board? If you really look at it, storage in general is pretty horizontal, right?
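As a very simplified illustration of that tier-placement intelligence, here is a sketch in Python. Everything in it, the node names, the capacity fields, and the profile strings, is invented for illustration; it is not Maxta's actual logic.

```python
# Toy model of tier-aware placement: hot workloads go to flash, archival
# workloads to spinning disks, and within a tier the node with the most
# free capacity wins. All names and numbers are invented for illustration.

NODES = {
    "node1": {"ssd_free_gb": 400, "hdd_free_gb": 2000},
    "node2": {"ssd_free_gb": 900, "hdd_free_gb": 1500},
    "node3": {"ssd_free_gb": 100, "hdd_free_gb": 8000},
}

def choose_tier(profile):
    """Map a VM's workload profile to a storage tier."""
    return "ssd" if profile == "high-performance" else "hdd"

def place_vdisk(profile, nodes=NODES):
    """Return the node with the most free capacity in the chosen tier."""
    key = choose_tier(profile) + "_free_gb"
    return max(nodes, key=lambda name: nodes[name][key])

# A database VM lands on the node with the most free flash (node2 here),
# while an archival VM lands on the biggest HDD pool (node3 here).
db_node = place_vdisk("high-performance")
archive_node = place_vdisk("archival")
```

The point of the sketch is only that placement is a software decision driven by the workload profile, rather than something an administrator carves out per application.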
It serves multiple use cases across different industries and different verticals. From a hyperconvergence standpoint, we have seen it deployed as primary storage, which essentially means running your business-critical applications. It could be your Exchange servers, your web servers, your database servers; essentially, it provides the ability to run your mission-critical applications. The other aspect is remote or branch office deployment. A lot of times, customers open newer offices and so on. In those scenarios, would you want to deploy a traditional storage array? Probably not, because it needs more management. You need a person who manages the storage. If you have 100 locations, you need 100 people managing storage in 100 different locations. It becomes very hard to manage storage arrays that way. Versus saying, hey, it runs on standard servers and you manage a VM. You need to manage VMs anyway, so there are no additional resources involved. That makes it a more logical use case for hyperconvergence. And a few others, like virtual desktops: the moment you hear about hyperconvergence, you think of virtual desktops. So what does that mean, and why is it more prevalent in a virtual desktop environment? When customers deploy newer technologies, they want to use them for a specific use case or a new project that they are going to embark on. They don't want to disrupt what they have. And virtual desktops are a pretty interesting case in that sense. I'm going to virtualize my desktop environment. I don't want to use what I have; I want to use something new. I want to try a new paradigm of deploying my storage in the data center. So it gives you the ability to try out hyperconverged models.
So it's more about looking at it from the perspective of what makes sense to start as a new project versus trying to replace what you have. In some other cases, such as primary storage, you would replace what you have. So it depends on where customers would like to deploy. The other very interesting use case is the service provider model. If you look at some of the web-scale companies today, the Googles, Amazons, and Facebooks of the world, they are in this model. They use standard servers, and they provide you with all the capabilities. So enterprises are looking at these models and saying: in a traditional enterprise, IT is a cost center, not a revenue-generating unit; if a managed service provider can deploy this model, be cost-effective, and make money out of it, why can't an enterprise do the same? With that in mind, a lot of these enterprises are moving into this model, saying, hey, let me deploy this on industry-standard servers. And for a managed service provider, it makes a lot of sense, because that's the way for them to reduce their cost. Everybody wants a lower-cost alternative for deploying their entire solution. So it makes a lot of sense for a service provider to deploy on standard servers and, at the same time, not compromise on services. You would not go to a service provider that doesn't have reliable service. You still need all the services, but at a much lower cost. So running it on standard servers, where we know the price of servers goes down every year, makes a lot of sense versus trying to buy a standard enterprise storage array. Now, knowing a little bit of background about hyperconvergence, I just want to give you a brief overview of what Maxta is and what Maxta does.
So Maxta is a hyperconvergence company, and essentially what we deliver is this: we maximize the promise of hyperconvergence. What does that mean? If you really look at it, one of the key aspects of a hyperconverged model is that it should provide choice, whether that choice is in terms of the hypervisor that I use, the servers that I use, or the drives that I use. We provide the ability to have a choice in all these different aspects. So we maximize choice, and we maximize simplicity. The way we do it is that we have abstracted away the concept of managing storage; there is no more storage management. You manage a VM. It's pretty similar to managing CPU and memory resources. You don't manage CPU, you allocate CPU resources. You don't manage memory, you allocate memory resources. Storage is similar: you allocate storage resources. So what happens when I need to add more memory? It's a service action: you add memory. What happens if a drive fails? It's a service action: you replace the drive. The idea of managing a resource is very similar to managing your VM resources. So we maximize the simplicity of managing your entire infrastructure. The third aspect is scalability. I need to be able to scale when I need to, with whatever resources I need. I don't need to buy a large enterprise storage array today expecting that I would have to grow to that stage. I invest in what I need today, knowing that I can add resources when I want. And that's essentially what we provide. You can scale up storage. You can scale up compute. You can scale all the individual components in your data center as and when you like. If you want to, you can replace your older, lower-capacity drives with newer, higher-capacity drives. You can add drives to your empty drive slots, or you can add a new server. So you have multiple ways to scale your storage. And the last is cost: being able to provide the most cost-effective solution.
It runs on industry-standard servers, all the way from branded servers to white-box servers. You can choose the server technology that you want and that you like, and you get all these storage features and capabilities delivered on top of it. So, a quick view of our architecture. These are all just standard servers. As I mentioned, we aggregate the storage resources across all these servers in the data center. We leverage SSDs, whether SATA-based, PCIe-based, or NVMe-based, to provide the performance that applications need, and we use spinning disks for capacity. At the same time, we deliver all the storage services, all the way from snapshots to clones to compression, thin provisioning, and data deduplication; you get all the enterprise-class features. And we support multiple hypervisors and multiple management products, including OpenStack, which gives you the ability to deliver all of this on top of OpenStack. So how does Maxta integrate with OpenStack, and what have we done for OpenStack in particular? Maxta has developed two main integration points. One is through Cinder driver support, and the other is through Nova support, which enables customers to manage not just the storage aspect but the VM itself. Even within an OpenStack environment, I can take a snapshot of an instance. Taking a snapshot of an instance is, for us, a storage metadata operation. We can do it in seconds or milliseconds, with zero capacity overhead, unlike a traditional snapshot in an OpenStack environment, which is almost a full copy of your entire VM. So it gives you all the capabilities at the storage layer: being able to use your standard OpenStack environment and being able to provide a very cost-effective and efficient storage platform. The other aspect is HA: delivering an end-to-end HA product, including support for live migration.
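To illustrate why a metadata-level snapshot is near-instant and takes no extra capacity, here is a toy copy-on-write model in Python. The class and its methods are invented for illustration and are not Maxta's API; the contrast is with a naive snapshot that would copy every data block.

```python
# Toy copy-on-write volume: a snapshot captures only the block map
# (metadata), never the data blocks themselves, so it completes in
# O(metadata) time with zero extra data capacity at snapshot time.
# This class is illustrative only, not Maxta's actual implementation.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # block number -> data
        self.snapshots = []

    def snapshot(self):
        """Metadata-only snapshot: freeze the current block map."""
        snap = dict(self.blocks)        # copies references, not data
        self.snapshots.append(snap)
        return snap

    def write(self, block, data):
        """New writes update the live map; frozen snapshots are untouched."""
        self.blocks[block] = data

vol = Volume({0: "boot", 1: "db-v1"})
snap = vol.snapshot()                   # instant, no data copied
vol.write(1, "db-v2")                   # live volume diverges
# snap still sees "db-v1" at block 1; the live volume sees "db-v2"
```

The same idea is why an instance snapshot can finish in milliseconds: only the map of blocks is recorded, and data is shared until a block is overwritten.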
I can take a VM running on node one, move it to node 50, and you still have access to the VM and its data. You don't have to worry about where your data is, what to do, or where to move your VM. You have your entire pool of storage that you can use, and your VMs run leveraging that storage pool. So, to summarize what we discussed in the last 20 minutes, the key value that Maxta brings to customers is, number one, choice. It gives customers the ability to pick their server, pick their hypervisor, and pick their storage devices, so they can use any of the devices that they are currently using, or they can move to the latest technology on day zero, when the hardware vendors release it. It maximizes choice. Second, it maximizes simplicity. It provides you with the ability to manage a VM and not have to manage a particular storage entity. Everything you manage is a virtual machine. And third, it is a very cost-effective solution. It lets customers not only maximize their CAPEX savings but also helps them maximize their OPEX savings, without compromising any of the storage-level data services. That's the key value Maxta can deliver to end customers. And that's essentially what I wanted to cover in the last 20 minutes. Thanks for taking the time to listen.
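As a closing illustration of the live-migration point, here is a toy model of why migration on a global storage pool only moves compute state. The function names, node names, and replica count are invented; this is not Maxta's or OpenStack's actual code.

```python
# Toy model: virtual disks are replicated across the pool, so every node
# can already reach the data. "Migrating" a VM therefore rebinds only the
# compute host; no storage is copied. All names are illustrative only.

def replicate(vdisk, nodes, copies=2):
    """Place replicas of a vdisk on the first `copies` distinct nodes."""
    return nodes[:copies]

def live_migrate(vm, target_host):
    """Move a VM's compute binding; its data stays in the shared pool."""
    moved = dict(vm)
    moved["host"] = target_host         # the only thing that changes
    return moved

replicas = replicate("web01-disk", ["node1", "node2", "node50"])
vm = live_migrate({"name": "web01", "host": "node1"}, "node50")
# vm now runs on node50 and reads the same replicas as before
```

Because the data is reachable from any node, moving a VM from node one to node 50 never requires copying its virtual disks.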