Good afternoon, everyone. Okay, we're going to start. My name is Eli Karpilovski. I work for Mellanox as the product manager for the cloud market, and today we're going to talk about Mellanox interconnect.

Mellanox is a leading provider of server and storage interconnect. We've been around for 13 years now, based in California and in Yokneam, Israel, with about 1,200 employees worldwide. Our revenue last year was $500 million and growing. Before I start with our integration solution for OpenStack, I would like to give you a quick introduction to our portfolio and what we're doing in this space. Mellanox builds its own silicon, adapter cards, gateways, switches, and cables for the interconnect, software to accelerate applications, and software to manage the interconnect. We support both Ethernet and InfiniBand technologies with a concept we call VPI, Virtual Protocol Interconnect. This concept is really unique: it allows us to communicate either over Ethernet or over InfiniBand on the same fabric, with no change of cables, hardware, or software drivers. So again, a very unique capability with a very fast interconnect: we support up to 40 Gigabit Ethernet today and up to 56 gigabits per second InfiniBand, or as we call it FDR, fourteen data rate. These are the products that support our fast interconnect.

This slide shows you a little bit about our customers. Mellanox is becoming, I would say, the secret sauce of many, many public cloud providers out there. You can see some of the names, some well known, some less known, but the message here is that if you want to create a public cloud service and offer your users a more efficient infrastructure as a service, Mellanox has the ingredients to make your infrastructure less complicated and more efficient, and at the end of the day to provide your end users with better performance and lower cost.

So what is our added value? We provide a variety of advantages, from the application to storage access, a simplified and integrated solution with OpenStack or with other cloud management platforms, and higher infrastructure efficiency. Just by using Mellanox interconnect, whether a 40 GbE Ethernet adapter or 56 Gb/s InfiniBand, you can support more virtual machines per physical server, offload the hypervisor CPU, and create a practically unlimited, scalable infrastructure that costs less when you design your architecture.

A few examples of how we accelerate applications. We use a very unique technology called RDMA, remote direct memory access. This technology is really fascinating: it started in the InfiniBand world with high-performance computing and has now expanded to other areas such as cloud. RDMA allows you to bypass the TCP overhead and copy memory from one application to another, and by that reduce latency, increase performance and bandwidth, and reduce the CPU overhead. Three examples, starting with storage: we are able to provide much faster storage access, over 6x performance improvement, when running iSCSI over RDMA, for example. We can also provide much faster live migration: if you compare it with a 10 GbE solution, we're talking about almost 4x faster migration time.
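For context, a minimal sketch of how one might sanity-check that kind of storage-access difference on a host: time sequential reads from two block devices, one attached over plain iSCSI/TCP and one attached over iSCSI with RDMA (iSER). The device paths, read sizes, and the Python approach are illustrative assumptions, not part of the talk.

```python
import os
import time

CHUNK = 4 * 1024 * 1024            # read in 4 MiB chunks
TOTAL = 1 * 1024 * 1024 * 1024     # read 1 GiB from each device

def read_throughput(path):
    """Sequential-read throughput in Gbit/s (ignores page-cache effects)."""
    start = time.time()
    done = 0
    with open(path, "rb", buffering=0) as dev:
        while done < TOTAL:
            buf = dev.read(CHUNK)
            if not buf:
                break
            done += len(buf)
    secs = time.time() - start
    return (done * 8) / secs / 1e9

# Hypothetical device paths: one LUN attached over iSCSI/TCP,
# one attached over iSCSI with RDMA (iSER). Substitute your own.
for name, dev in [("iSCSI/TCP", "/dev/sdb"), ("iSCSI/RDMA", "/dev/sdc")]:
    print(f"{name}: {read_throughput(dev):.2f} Gbit/s")
```

In practice one would likely use a tool such as fio and bypass the page cache, but the shape of the comparison is the same.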
As a matter of fact, we have public cloud providers that use this as one of their advantages: when you migrate or when you create a new VM, the downtime is zero. That is very unique, and it is only possible because of the fast interconnect. And third, there is VM-to-VM latency. This is a very unique technology that we now support on multiple hypervisors: VM-to-VM latency using Mellanox interconnect is now two microseconds. Just to compare that with other solutions out there, we're talking about roughly 40 microseconds of latency in paravirtualized mode; using Mellanox SR-IOV technology you get 20x faster communication from one virtual machine to another, and it works on both KVM and ESX.

Now I'm going to talk a little bit about the integration we have with OpenStack, and that's really some of the uniqueness we are showing at our booth today. So if you're interested in learning more, please visit our booth and see the demo we have on this. We believe there are two ways to integrate with OpenStack, whether Folsom or Grizzly; we support both options.

The first is that we created our own Quantum plugin, and it talks to our adapter card, so the adapter card knows how to communicate, giving seamless integration with the OpenStack dashboard. We can do this directly through a technology we call the embedded switch, which is a new, revolutionary way to implement the virtual switching layer not in software but actually in hardware. All the capabilities I'm going to show you are done by hardware offloads on the adapter card, which allows us to create more efficient, better-performing communication and to set rules and provisioning on the adapter card itself. So one approach is direct communication with the embedded switch, a technology we now support; along with SR-IOV it gives you hardware-based security configuration and quality of service. We can introduce InfiniBand as the underlying technology while the user still experiences Ethernet tools and provisioning capabilities.

The other option is using an SDN controller. In this case we're using the Floodlight Big Switch controller to communicate between OpenStack Folsom and the Mellanox card, so we implemented an OpenFlow agent that runs on top of our NIC. We believe this to be very unique to Mellanox, and we're very proud of the collaboration we have with Big Switch on this one.
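To make the "seamless integration" point concrete, here is a minimal sketch of the tenant-facing workflow under either integration path, using the standard Quantum v2 client API (python-quantumclient in the Folsom/Grizzly timeframe; the same calls live on in python-neutronclient). The endpoint, credentials, and names are placeholder assumptions, and the fact that the resulting port ends up provisioned on the NIC's embedded switch is the behavior described in the talk, not something this snippet does by itself.

```python
# Standard Quantum/Neutron v2 API calls -- nothing vendor-specific in the
# tenant workflow. With the Mellanox plugin configured as the core plugin,
# the port created below is provisioned on the adapter's embedded switch
# (eSwitch) rather than in a software switch.
from neutronclient.v2_0 import client  # 'quantumclient.v2_0' at the time of this talk

neutron = client.Client(
    username="demo",            # placeholder credentials and endpoint
    password="secret",
    tenant_name="demo",
    auth_url="http://controller:5000/v2.0/",
)

net = neutron.create_network({"network": {"name": "cloud-net"}})
net_id = net["network"]["id"]

neutron.create_subnet({"subnet": {
    "network_id": net_id,
    "ip_version": 4,
    "cidr": "192.168.10.0/24",
}})

port = neutron.create_port({"port": {"network_id": net_id, "name": "vm1-port"}})
print("port created:", port["port"]["id"])
```

The point is that nothing changes for the operator: the plugin underneath decides whether a port lands in a software switch or on the eSwitch hardware.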
This integration benefits the end user with a lot of new, exciting capabilities. We have automated NIC provisioning; we can do counters and statistics, ACL configuration, MAC anti-spoofing, and many other features that are now done on the actual NIC. This gives us a much more scalable infrastructure. Think about VLAN configuration: when you do VLAN configuration in software, it's limited. When you do it on Mellanox interconnect, on our adapter card with the eSwitch technology and the OpenFlow agent, you get practically unlimited resources for VLAN configuration, and all of that is done seamlessly through our embedded switch technology. In other words, the IT manager will configure things through the OpenStack dashboard or CLI the traditional way; underneath the hood we will provision it through the hardware, making it more scalable and with better performance. That's one of the unique aspects of the solution.

Here's a quick look at how it works from the inside: we have an OpenFlow agent that talks with our embedded switch, and we expose RDMA to the VM, either through SR-IOV or paravirtualization, providing much faster, simpler SDN connectivity.

Another exciting way to leverage your existing OpenStack storage access is by using Mellanox interconnect with projects such as Cinder. We announced it just last week, and we have a lot of, I will say, partners working with us on this solution, where we show that by using Mellanox adapter cards, and that doesn't necessarily mean InfiniBand, even using a Mellanox 10 GbE adapter card, and running iSCSI over RDMA, we were able to get a 5x performance improvement versus standard iSCSI over TCP. So again, just by applying one specific patch to the Cinder project, to the iSCSI driver, you now benefit from a 5x performance improvement. We find this to be very beneficial for our customers, or for anyone using Cinder with OpenStack.

So, to summarize the benefits of what we provide our customers in an OpenStack deployment. First, best performance: again, with Cinder, about a 5x performance improvement in storage access. SR-IOV allows us to give you full bare-metal performance on a virtualized NIC, and when we at Mellanox say bare-metal performance we mean 40 Gigabit Ethernet with PCIe Gen 3, so really taking full advantage of what SR-IOV technology can give us. Second, we can provide you with the most cost-effective solution. We have ways to consolidate the network onto one fabric, with migration, storage, and management traffic together: one adapter and one cable can do it all, and not just that, it can accelerate the performance of each of those aspects. With 40 GbE you also get more virtual machines per server without affecting the SLA. One illustrative story: let's assume you have 80 virtual machines running on one physical server, and let's assume that at peak time half of your VMs are each pushing about 500 megabits per second. With one 10 GbE adapter card you're going to affect your SLA; 40 GbE connectivity ensures that you meet the SLA even at peak time (a quick back-of-the-envelope calculation follows below). That's what we offer our customers. Third, it's simpler to manage through standard APIs. I just introduced the project we're doing with Cinder, the project we're doing with the Quantum API, and the SDN approach, where we show how we can communicate with the adapter card using an OpenFlow agent talking to the Big Switch controller. We're going to add more SDN controllers moving forward.
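Here is the back-of-the-envelope calculation behind that SLA example, using only the numbers quoted above (80 VMs, half of them busy, roughly 500 Mb/s each at peak):

```python
# Peak-traffic arithmetic from the example above.
vms_per_server = 80
busy_fraction = 0.5          # half of the VMs are busy at peak
per_vm_mbps = 500            # each busy VM pushes ~500 Mb/s

peak_gbps = vms_per_server * busy_fraction * per_vm_mbps / 1000.0
print(f"aggregate peak demand: {peak_gbps:.0f} Gb/s")   # 20 Gb/s

for nic_gbps in (10, 40):
    verdict = "meets" if nic_gbps >= peak_gbps else "breaks"
    print(f"{nic_gbps} GbE uplink {verdict} the SLA at peak")
# A 10 GbE uplink breaks the SLA (20 Gb/s needed); 40 GbE meets it with headroom.
```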
All of this is available and running today. And by doing all of this with the eSwitch technology on our adapter card, we can truly assure our customers of hardware isolation and filtering that are much more efficient and scale better. Thank you very much. If you have any questions, please come and visit us at our booth; we would love to show you our demo of the OpenStack SDN integration with Mellanox interconnect. Thank you.