Amy: Hi, I'm Amy Collier, Senior Cloud Advocate at Microsoft, here to talk about Lightbits storage with Felix Melligan, Principal Solutions Engineer. Thank you for joining me, Felix.

Felix: Thanks for inviting me, Amy. I'm really excited to be talking to you today.

Amy: So what is this Lightbits storage I'm hearing about all of a sudden?

Felix: Effectively, Lightbits is software-defined storage that lives in Azure on top of L-series VMs, so that could be Lsv3 or Lasv3 instances. We take those instances and the underlying NVMe storage, create a cluster, and then carve out volumes to present to clients. What this means for customers is that they end up with something like a SAN in the cloud: very high-performance, low-latency storage they can present to their applications in Azure, whether that's on Azure VMs, AKS, or even Azure VMware Solution, where we're fully certified today.

Amy: Oh, great. You've already touched on how it's deployed in Azure. Is there more to it? I gather it runs on the L-series VMs.

Felix: Yes, and deployment is actually really easy. We have plenty of documentation if you want to follow along, and some deployment videos if you want to see it in action. We're in the Azure Marketplace today, so you go to the Marketplace, subscribe to the offering, and then we deploy a Lightbits cluster based on some really simple inputs: you name the cluster and you decide on a capacity, which determines the size of the L-series instances. Because the storage in an L-series instance is coupled to the instance size, that choice sets both the capacity of the cluster and its performance. We then deploy it inside a managed resource group with some automation in the background, so Lightbits as a company can manage the cluster for you.
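That coupling between instance size and cluster capacity can be sketched in a few lines. The per-node NVMe figures below are illustrative approximations for Lsv3 sizes, not official Azure specs, and the function name is mine, not part of any Lightbits tooling:

```python
# Rough sketch: the raw capacity of a Lightbits cluster is driven by the
# L-series instance size, because local NVMe is coupled to the VM size.
# Figures are approximate and illustrative, not official Azure specs.
NVME_TB_PER_NODE = {
    "L8s_v3": 1.92,    # ~1 x 1.92 TB local NVMe
    "L16s_v3": 3.84,   # ~2 x 1.92 TB
    "L32s_v3": 7.68,   # ~4 x 1.92 TB
    "L64s_v3": 15.36,  # ~8 x 1.92 TB
}

def raw_cluster_capacity_tb(instance_size: str, node_count: int) -> float:
    """Raw NVMe capacity across the cluster, before replication overhead."""
    return NVME_TB_PER_NODE[instance_size] * node_count

# A three-node cluster of L32s_v3 instances:
print(raw_cluster_capacity_tb("L32s_v3", 3))  # roughly 23 TB raw
```

Picking a larger instance size scales capacity and performance together, which is the trade-off Felix describes when you choose the capacity input at deployment time.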
Felix: So you don't have to worry about the maintenance of nodes, replacing failed nodes, reboots, or anything like that; we do that for you. It's a really hands-off platform. You just carve out volumes and present them to applications, all at very high performance and low latency, as I mentioned before. That's all you have to be concerned with after deployment.

Amy: It sounds great that you manage it, but could I manage it myself if I wanted to tweak anything?

Felix: You definitely can. We have two options, and they both get deployed the same way: from the Marketplace, living in your subscription, and optionally within your VNet as well. We can either create a new VNet, do VNet peering, or deploy into your existing VNets, which puts the storage very close to the applications; that's how we achieve the very high performance. On one end, as a managed application, we have access to the instances, and we troubleshoot, maintain, and upgrade the cluster for you. That's the really hands-off model: "Hey, I just want the storage; you manage the infrastructure." On the other end: "I know Lightbits, I've been using Lightbits on-prem or in Azure already, I know how it works and how it's deployed, and I have automation in place that already handles things like node replacement and updates, integrated into my current workflows." We have customers that do that, so you can certainly manage it yourself.

Amy: It's great to have both options. Honestly, I'd go auto-managed and worry about other stuff.

Felix: Yeah, we wanted the deployment model to be flexible.
Felix: Some of the larger customers are really into their tweaks and tunes, and they understand the platform well enough to tune it, so sometimes they deploy directly onto the VMs themselves. They don't want all the automation, management, and Azure Functions that we run in the background to manage the cluster. So it's pretty flexible, depending on how deep you are into the technology and how much control you want over the cluster.

Amy: Right, that's great. You can geek out or let it be. Sounds good to me.

Felix: I'm a geek; I'd do a bit of both worlds: manually deploy and then build my own automation for things we can already do as a cluster. But the devs work really, really hard, and we have a really nice platform with most of the automation you'd need today, things like auto-healing and auto-scaling, all implemented in the product. So you don't have to worry about any of that.

Amy: That's great. You mentioned VNets. So how does it communicate with Azure VMware Solution?

Felix: Really great question. We work with plain Azure VMs running Linux, and soon Windows too; we loved that announcement at Ignite this year. We also work with AKS. But the AVS deployment is kind of unique. If you're familiar with Azure VMware Solution, when you deploy the SDDC, the software-defined data center, you end up with two networks: a back-end network and a front-end network. The back-end network is used for things like vMotion, and you can connect an ExpressRoute into it, then connect the other side of that ExpressRoute to a VNet. That could be a shared VNet, or a VNet peered to other places. That's how we communicate with AVS: we use this back-end network, not the VM segment network that the clients use to communicate.

Amy: Got it, a back-end network.

Felix: Exactly. That's a key thing.

Amy: Yeah, otherwise that would be bad.
Felix: Yeah, you don't want to take away any performance the clients need, whether between applications or out to end users. So we use that back-end network, which is a really cool thing we're able to do after partnering with Azure for the last couple of years. We're really excited about that. The other thing to mention is that when you deploy Lightbits today and go through the deployment screens, there's a full screen that says, hey, check this box and Lightbits will automatically create that ExpressRoute connectivity for you. We'll register the connection and grab your authorization key, so when you deploy the Lightbits managed application you're already connected to AVS. You can start communicating with AVS directly from Lightbits, do some slight configuration, and you're good to go to deploy datastores.

Amy: That's amazing; I didn't know that about the ExpressRoute. Let's talk about the availability of the datastores. Customers need that resiliency; we talk about it all the time. How does Lightbits ensure availability across the cluster?

Felix: Great question. The way Lightbits presents datastores to AVS and ESXi is that when we create a volume on a Lightbits cluster, it becomes visible as a device on ESXi; if you're deep into VMware, you'll know what I mean. That device, a volume from the Lightbits perspective, is synchronously replicated across two or three nodes in the Lightbits cluster, so when you create a volume or datastore in AVS, you can be assured there are either two or three replicas, depending on what you configure, spread across the cluster. And that cluster can be single-zone or multi-zone. So you can deploy a VMware metro cluster, and you can also have a Lightbits cluster split between three zones, which means we can mirror the deployment you have with AVS.

Amy: That's amazing. That's great.
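The replica accounting Felix describes can be illustrated with a small sketch. The function names are mine for illustration, not a Lightbits API, and the capacity figure is just an example:

```python
# Sketch of the availability math: each Lightbits volume is kept as 2 or 3
# synchronous replicas on distinct nodes, so usable capacity is raw capacity
# divided by the replica count, and a volume stays available as long as at
# least one replica survives. Names are illustrative, not a real API.
def usable_capacity_tb(raw_tb: float, replicas: int) -> float:
    if replicas not in (2, 3):
        raise ValueError("Lightbits volumes use 2 or 3 replicas")
    return raw_tb / replicas

def volume_survives(replicas: int, failed_nodes: int) -> bool:
    """True while at least one replica of the volume remains online."""
    return replicas - failed_nodes >= 1

print(usable_capacity_tb(23.04, 3))  # ~7.68 TB usable from 23.04 TB raw
print(volume_survives(3, 2))         # three replicas tolerate two node losses
```

The same trade-off applies per volume: two replicas give more usable capacity, three replicas give more failure tolerance, and in a multi-zone cluster those replicas can land in different availability zones.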
Amy: Okay, and what about snapshots, the usual stuff?

Felix: Yes. Aside from being extremely high-performance block storage that we present to AVS, we're also the only NVMe/TCP-certified storage on Azure today, certified with VMware and Azure together. We can also do things like create snapshots of a datastore, clone those snapshots and present them as a new datastore to AVS, and we have QoS policies you can apply at the datastore level as well as at the VMDK level. So there are plenty of features in there beyond just highly available, high-performance storage.

Amy: Okay. And how about performance with the Lightbits cluster, latency-wise? What's expected?

Felix: With a Lightbits datastore, it varies with the size of the L-series instance, but you're talking about hundreds of thousands of IOPS to a single datastore, and sub-millisecond tail latency as well. We'd expect the average latency of a volume to be well below half a millisecond, and even the tail, the 99th-percentile latency, to be below a millisecond. Across a whole Lightbits cluster, that's hundreds of thousands to millions of IOPS, all at sub-millisecond tail latency. If you need more performance, you can just add nodes to the cluster. So you can start off with a relatively small cluster, maybe a three-node cluster of L32s instances: "I don't need that much capacity; I just want to add some to my current ESXi." Then you can scale that cluster up to 16 nodes, and we can even go to the Lasv3 instances, so you can have 16 Lasv3 instances deployed on the Lightbits side, augmenting the storage you get from the SDDC.

Amy: Oh, wow. That's huge.

Felix: Yeah, it's really big.
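As a quick illustration of the difference between the average and the 99th-percentile ("tail") latency figures Felix quotes, here is a sketch with synthetic numbers rather than real measurements:

```python
import statistics

# Synthetic latency samples in milliseconds, illustrating the difference
# between average latency and the 99th-percentile "tail" latency Felix
# quotes (sub-0.5 ms average, sub-1 ms p99). Not real Lightbits data.
samples = [0.25] * 980 + [0.9] * 20  # most I/Os fast, a few slower

avg = statistics.mean(samples)
# quantiles(n=100) returns the 1st..99th percentile cut points;
# index 98 is the 99th percentile.
p99 = statistics.quantiles(samples, n=100)[98]

print(f"average = {avg:.3f} ms")  # well below 0.5 ms here
print(f"p99     = {p99:.3f} ms")  # below 1 ms here
```

The point of quoting the p99 rather than only the average is that a storage platform can have a fast mean while occasional slow I/Os still stall applications; a sub-millisecond tail bounds even those outliers.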
Felix: And obviously, when you start adding nodes, the latency doesn't change. You add more IOPS with every node, but the latency stays exactly the same: we still expect that sub-millisecond tail latency and the sub-half-millisecond average latency.

Amy: All right, this all sounds great, but it has to come with a price tag.

Felix: It does. The Lightbits pricing is publicly available in the Marketplace; you can see what we charge in a pay-as-you-go model. The other side of the cost is the L-series instances themselves. Because everything is deployed in your own subscription, you can see all the pricing, and any discounts you get from Azure will be applied to the Lightbits cluster too: if you get a per-core discount on L-series, it applies to the Lightbits cluster. But the key message here is that Lightbits is 40% lower cost than adding the equivalent vSAN capacity by adding a new ESXi host. Lightbits is cheaper as a solution than adding more nodes to the AVS cluster, because ultimately, if you need more storage but don't need the compute, you don't want to pay for storage and compute when you only need storage. So we fit into that pricing model: cheaper than adding a new node, but also very high performance, just like vSAN.

Amy: That's great. So yeah, why add a host if you just need storage?

Felix: That's exactly it.

Amy: So can you use reserved instances for Lightbits, then?

Felix: Yeah, you certainly can. You can do pay-as-you-go, one-year reserved, or three-year reserved instances, and that, again, will bring the price of the Lightbits cluster down, because in the end they're just L-series instances living in your subscription. You can treat them like any other compute instances that happen to have Lightbits running on top of them as a storage platform.
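The cost comparison Felix makes can be put into a small worked example. The prices below are made-up placeholders to show the shape of the arithmetic, not real Azure, AVS, or Lightbits rates; check the Azure Marketplace and pricing calculator for actual figures:

```python
# Hypothetical cost comparison: expanding AVS storage by adding a full
# ESXi host (compute plus vSAN capacity you may not need) versus adding a
# storage-only Lightbits node. All prices are made-up placeholders.
AVS_HOST_MONTHLY = 10_000.0        # full host: compute + vSAN capacity
LIGHTBITS_NODE_MONTHLY = 6_000.0   # L-series VM + Lightbits software

def savings_pct(baseline: float, alternative: float) -> float:
    """Percentage saved by choosing `alternative` over `baseline`."""
    return 100.0 * (baseline - alternative) / baseline

print(savings_pct(AVS_HOST_MONTHLY, LIGHTBITS_NODE_MONTHLY))  # 40.0

# Reserved instances discount the L-series side further; for example,
# a hypothetical 30% discount on a three-year reservation:
print(LIGHTBITS_NODE_MONTHLY * (1 - 0.30))
```

The structural point survives whatever the real numbers are: when capacity is the bottleneck, a storage-only node avoids paying for compute you will not use.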
Amy: Okay, so it's a managed service: a cluster in a resource group with running VMs and the software. And I think you briefly touched on this, but what other use cases or features can customers take advantage of with Lightbits?

Felix: In the end, because we work across TCP/IP networking, which is what VNets are based on, a Lightbits cluster can be shared, and not just between multiple AVS SDDCs, although we can do that: a single Lightbits cluster can present datastores to multiple AVS SDDCs. You can also use the same cluster, or a different one, to present storage to AKS for your containerized workloads, or to Azure VMs. We're actually one of the recommended solutions for running Oracle on Azure infrastructure as well; as the highest-performance option, we're recommended in the Azure documentation for running Oracle. So if you have other high-performance, maybe transactional workloads running in Azure and you need the kind of extreme performance and low latency you can get with Lightbits, up to millions of IOPS at that sub-millisecond tail, we can be your block storage for those workloads as well.

Amy: That's great. Yeah, Oracle, definitely.

Felix: Yeah, Oracle's a big one. Obviously, we now have Exadata in Azure as well, which is an awesome solution, but a lot of customers want to self-manage their Oracle. They've been tuning and tweaking it for years and years on-prem, and they want to continue that in the cloud, so they like to use Azure infrastructure and the VMs Azure provides to run Oracle. The E-series VMs are incredibly powerful machines, perfectly capable of running really, really performant Oracle databases. And then we come in and provide the high-performance block storage, just like you would have had on-prem.

Amy: Nice. Well, I'm really disappointed you didn't have white papers and tons of slides to go through.
Amy: And people want that. Where should they go? I know you have a great demo series out on YouTube, and we'll link to that below, but where can people find more information about Lightbits and AVS?

Felix: You can go to the Lightbits website, www.lightbitslabs.com. We have a whole page on Azure, and a whole page on AVS coming extremely soon. If you scroll to the resources section at the bottom of the Lightbits page, you'll see Azure as a use case, Oracle as a use case, and Azure VMware Solution as a use case, with white papers and blogs. Thanks for pitching the demo series I recently produced; you can go watch that, and you can also access our publicly available documentation, linked from the Lightbits website, at documentation.lightbitslabs.com. There we give you all the information you need about how to deploy a Lightbits cluster and how to integrate it with AVS. It's all there and freely available: if you're a reader, you can read, and if you're a watcher, you can watch the videos.

Amy: Yeah, they're great videos; I really enjoyed them. And I like that they're broken up into chapters, so you don't have to watch a whole hour's worth. You can just start and pick back up where you left off.

Felix: It saves my voice as well, because then I don't have to talk for the whole 30 minutes; I can do them in little chunks and produce them that way. Otherwise I'd have no voice.

Amy: That's perfect. Well, thank you so much for joining me, Felix. This was great. It's great to have another option for storage, especially for Azure VMware Solution, since I work on that, but I know other users will love it for plain Azure VMs, Oracle, or AKS. I'm thankful you took the time to talk to me today.

Felix: It was great talking to you, Amy. I really appreciate you inviting me in today.
Felix: And thank you so much for all the great questions. I look forward to working with you a lot more on the Lightbits and AVS integration.

Amy: Yeah, let's do it. Thanks!

Felix: Awesome, thank you.