Okay, we'll get started. Hi, everyone. My name is Kamesh Pemmaraju. I'm a senior product manager at Dell, focused mainly on OpenStack; we've been building end-to-end solutions with OpenStack for over three years. With me is Alok Prakash, who is also a product manager, with Intel's Cloud Platforms Group specifically.

So today we're going to cover a couple of different things. We've been working with Red Hat over the last year or so, introducing the Dell Red Hat Cloud Solutions, co-engineered with Red Hat, building reference architectures and focusing on certain use cases. I'm going to talk about that, then go into a little more detail about what the solution offers are today with the Dell Red Hat reference architectures, and then Alok is going to talk about the Service Assurance Administrator, the SAA technology we're working on very closely with Intel right now to bring to market. So that's the overview.

Without further ado, let me get started with what we're trying to do with Red Hat. The focus for us is really to address the gaps. I'm sure you all know there are some pretty clear, obvious gaps with OpenStack: things like networking, things like upgrades, just making it enterprise-ready for the mainstream enterprise. You can't just pull the open source bits from trunk and use them in production; clearly that's a recipe for disaster.

Just to give you a quick background: how many of you are familiar with the Red Hat distribution? Anyone? Okay. What Red Hat is effectively doing is taking the upstream OpenStack trunk and creating a distro, just like they did with Linux and the RHEL operating system. Think of it as a similar thing for OpenStack: it's called RHEL OSP, the Red Hat Enterprise Linux OpenStack Platform. It's certified, hardened, and tested.
It's supported by Red Hat, with all the usual things you'd expect from an enterprise-grade open source distro. That's what Red Hat is doing, and what we're doing with Red Hat is effectively bringing an end-to-end solution that includes the hardware: server platforms, networking switches, reference architectures, end-to-end validation, and so on. It's really all about giving the end user an experience like buying commercial software: you just buy it and it works. That's our goal, and we're focusing on specific enterprise use cases, which I'll get to in a second.

Let me give you a little idea of what we're working on together. You heard in the keynotes this morning that identity integration across multiple data centers is an open issue in OpenStack today. So we're working with Red Hat on core OpenStack co-engineering and improvements, and they include things like Keystone identity integration with Active Directory and LDAP; that's one area we're focusing on.

Cinder block storage has come a long way in the last two or three releases. What we specifically enabled for this release is what we're calling multi-backend support. As you probably know, you can attach multiple backends to Cinder.
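To give a concrete picture of the multi-backend support described above: attaching two backends to Cinder comes down to a few lines in `cinder.conf`. The section names, addresses, and credentials below are placeholders; the driver paths shown are the Icehouse-era EqualLogic and Ceph RBD drivers, so treat this as an illustrative sketch rather than the solution's exact configuration.

```ini
[DEFAULT]
# Comma-separated list of backend section names Cinder should start
enabled_backends = equallogic-1,ceph-1

[equallogic-1]
# Dell EqualLogic iSCSI SAN backend
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = EQLX
san_ip = 10.0.0.10           ; placeholder array address
san_login = admin            ; placeholder credentials
san_password = secret

[ceph-1]
# Ceph RBD scale-out backend
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = CEPH
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
```

Each backend is then exposed to tenants as a volume type (for example, `cinder type-create ceph` followed by `cinder type-key ceph set volume_backend_name=CEPH`), so a single cloud can serve both SAN-backed and Ceph-backed volumes, which is the tiering story described in this talk.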
In this particular case we're enabling some of the storage technologies from Dell, like EqualLogic, which are traditional SAN technologies, but we've also been working with Inktank Ceph (Inktank is now acquired by Red Hat), which is a scale-out software block storage solution. You can have all of those simultaneously with the solution. If you want to use a traditional EqualLogic SAN array for certain use cases and Ceph for some other use cases, you can have both of them supported at the same time. So you can mix multiple backend technologies depending on your quality-of-service requirements, your SLAs, your tiering requirements, and so on.

We're spending a lot of time making sure that whatever is deployed at a customer site is highly reliable, something we can stand behind, using Tempest testing. We have an automated suite of tests that we run when we deliver the solution to a customer, and after it's deployed at the customer site we run it again, so we have a comparison baseline for how the system is doing.

And there's a whole set of work going on around deployment. As you probably know, this is still one of the biggest pain points with OpenStack; there are a lot of different ways of deploying it. With Red Hat specifically, we're working with the OpenStack Foreman installer, which has now morphed into what's called Staypuft, a layer on top of Foreman that gives you a good user interface and easy provisioning of your networks and things like that.
So we're doing a lot of work there. We also have engineering resources working on TripleO and Ironic for next-generation deployment; today what we have in the solution is Foreman. And we're looking at automated high-availability deployments, scalability, upgrades: all pain points we're all aware of today. It's not just focused on OpenStack itself: you have Ceph, you have the installer pieces, you have OSP, you have HA, and all of these pieces have to work together. They have to be deployed together, tested together, validated together, and that's really what we're working on for deployment tooling. Of course, we cover a lot of different hardware options: PowerEdge servers, Dell networking, and Dell storage technologies, as I mentioned. So there's a lot of co-engineering work going on with Red Hat, and these are some of the highlights.

On the reference architecture: like I said, we have a whole reference architecture we've worked on with Red Hat. It has validated server configurations and networking designs, all the different networks you need for storage, for provisioning, for management, for Nova networking, for the APIs; it's all defined in there. By the way, this document is available on the web as a free download. It gives you the entire framework for setting up your hardware: what your RAID configurations are, what your JBOD configurations are, what you need for your storage nodes and your compute nodes, how you connect all these things up, where you load your installer tools, and what kind of connectivity you need for all of that.
It includes the operational infrastructure, so it's a great starting point. If you're building out your OpenStack cloud, you start with the reference architecture, and I'll talk about a couple of quick-starter packages that we've already built and are bringing to market. All you have to do is buy it and turn on the switch, and it should work. That's a starting point, and then you can take it from there: sizing options, scalability options, expansion options, which I'll talk about in a second. It's a great way to get started with OpenStack without getting caught up in the complexity and the lack of visibility.

Ceph is a similar story. How many of you have heard of Ceph, Inktank Ceph? It's a software-defined storage solution. We've been working with the Inktank guys for over two years now, and we have it completely integrated into the solution, again co-engineered and validated. We have specific optimized configurations for storage nodes, and it's all certified and deployed automatically using our tools.

If you think just of OpenStack, you're missing the point, because when you have to build out the whole cloud infrastructure, you're talking about storage, networking, the different kinds of backend storage systems you have to integrate, and the other pieces you need for the cloud, and all of these things just don't come out of the OpenStack community. There's a lot of integration that has to happen, and that's what our engineering team does.

So quickly, I wanted to talk about the offers. The solution consists of a number of different components.
We obviously have the Dell reference architecture, which I just mentioned, driven by our server technologies and networking; we use the Force10 networking switches, which you'll see in a second. It's backed by Dell professional services and Red Hat professional services for deployment and consulting. For any engagement where you want to build something at very large scale, we can help you, with Red Hat, to work out the architecture: you can start with the RA as your starting point, but build out very large-scale clusters. And of course there's Dell ProSupport for all the gear you buy from us. It's driven by the Red Hat Enterprise Linux OpenStack Platform, which is the commercial distribution from Red Hat. As for the other value-added components, I already mentioned Ceph, and today you'll hear from Alok about the Intel SAA technology we're working on with them, which is a way for you to get enterprise-grade SLAs and things like that, which he'll talk about. So it's a full end-to-end solution that gives you all these pieces; otherwise you'd have to deal with all this on your own, and you'd need the expertise and technology to pull it off.

The three systems we have today are an easy way to get started: a proof-of-concept system, a pilot system, and a scale-out system. Three different packages, if you will: pre-packaged bundles, or pre-configured options you can buy, and I'll get into the details of this. Effectively, if you're just starting off with OpenStack and you want to kick the tires and learn what it is, the POC is the best way to get started. It's very simple, low-cost, and quick to get going; it's great for concept testing, prototyping, deploying some workloads, and kicking the tires. It's cheap.
It gets you started very quickly. Once you're ready and you feel like OpenStack is right for you, we have the pilot system. It's great for mid-scale production; we've done a lot of different things in here. We have lots of sizing options: you can go up to three racks, up to 2,000 VMs, and multi-petabyte storage systems, with 10-gig networking, high availability, and software-driven storage. So when you're ready, you move from the POC to the pilot. And once you're there and you want to really scale out to multiple data centers, with identity across all of those data centers and availability zones, then you're talking about scale-out. That's not a packaged option, as you can imagine; it requires a "let's sit down and talk about your requirements" kind of customized engagement with the customer, where we work out what that looks like. You can run production workloads, with a choice of PowerEdge R-Series and C-Series servers that are certified by Red Hat, high availability, and 10-gig networking. I'll get into a little detail on each of these in a bit.

This is the POC solution: just five servers and a single switch. It gets you going very quickly at very low cost; it's a rapid on-ramp. If you just want to get started with OpenStack, that's the way to do it. Like I said, it's PowerEdge R620 series servers, just five nodes: an OpenStack manager, an OpenStack controller, and three Nova compute nodes, with the storage on the controller node. So we don't have separate storage in this solution.
You can try it out; you can put LVM storage in it. You get up to 90 virtual machines and four terabytes of storage. It's great for bringing one of your application workloads over to try out OpenStack, so it's a great, low-cost starter kit. That's really what it's all about.

When you're ready for serious production work, that's when you get into the pilot. You can start with a single rack, three-quarters of a rack; the base configuration comes with 13 server nodes, and you can size it up to three full racks, which gives you something on the order of 2,000 VMs, depending on what you want to do. Is your use case compute-heavy or storage-heavy? If it's storage-heavy, you can populate the whole thing with storage nodes and easily get up to two petabytes, driven by Ceph. Or if you have a compute-focused infrastructure, you can populate the whole thing with compute servers and get up to 2,000 VMs, even more than 2,000. And it's a very robust configuration, because we have the 10-gig networking, the S4810 switches, redundantly connected to the server nodes. You have bonded networks and a number of other networks there, and more importantly, this has HA.
Your controller nodes are active-active, so if one of your controller nodes goes down you still have uptime; you can keep it running. Same thing with the database: we have Galera with MariaDB as our database, which gives you a clustered, active-active database capability.

And again, like I said earlier, you can run multiple storage backends. With this configuration you can get EqualLogic, which is currently certified (it's a SAN device from Dell), and you can have Ceph together in the same solution. Of course, if you have other things like NetApp, you can certainly take that, customize it, and integrate it in a custom solution. But this is a great production kind of environment where you can put some serious workloads.

Then of course there's the large scale. Oh, this is the sizing options slide. Like I said, you can scale up to three racks, and you can scale up in increments of one node: you can add a single compute node, or you can add a complete new rack and keep expanding your options that way. We have two options for compute, the R620 and the R720, and the R720XD for storage. You can get up to 2,000 virtual machines, and again, if it's just storage, you can go up to two petabytes. So there are nice sizing and expansion options you can order up front, to get a system that's effectively right-sized for your requirements and price/performance. And the deployment is all pre-tested for you, so it takes less than a week to get the whole thing up and running on site, with Dell and Red Hat working together to help you deploy. Scale-out is really where you get massive scale.
You're talking multiple data centers, multiple racks. It's obviously a consultative services engagement, and we have a whole choice of R-Series and C-Series servers. This is where we usually end up. Just to let you know, the pre-configured POC and pilot are great, but in my experience most customers don't even buy them out of the box. They'll say, "Great, we love it, but we have some other requirement: we want to take this and connect it to that, or I have Active Directory or LDAP or something else." So it usually turns into some type of consulting engagement. We do have customers who bought POCs and pilots out of the box; that's happening too, and there's a robust set of customers kicking the tires right now. But the point here is that we have a full range of solutions, from the smallest to the largest, all driven by enterprise-grade software from Red Hat and proven Dell servers, networking, and storage.

One other thing before I hand it over to Alok: as I said earlier, we're working very closely with Intel, because we believe the Service Assurance Administrator brings things to the table that make the solution even more suitable for large-scale deployments, SLAs, and so on. So with that, I'm going to hand it over to Alok. Thank you, and we'll take questions towards the end.

Hi, can you hear me? Okay. So let's talk about the problems people face as they try to run their workloads on a cloud. From the beginning of OpenStack there's been this discussion of workloads that are cattle versus workloads that are pets. A cattle workload is one where, even if a system breaks down, it's not that big an issue. For example, it's a web server behind a load balancer; it's one of ten web servers, and it goes down.
It's not a big issue, and in the beginning people said, "We want to run mostly cattle workloads out on the cloud": small websites and so on. But as OpenStack matures and grows, people want to run all of their enterprise workloads on the cloud as well, and that's what we focused on. We asked people what the big problems were that they saw with running their important, business-critical workloads, and we usually got answers around two things: trust and performance. You're running a multi-tenant system. Can I trust that there are no nosy neighbors, that the hypervisor or BIOS hasn't been compromised so somebody can look at the data? The other one is the noisy neighbor: you have one VM, maybe it's a video streaming application, it's streaming frames and thrashing the cache, and as it thrashes the cache, the performance of the next VM falls. So you've got the nosy neighbor and the noisy neighbor. Those are the two big classes of problems people wanted to solve.

We took that problem up, and we also looked at additional challenges people had running workloads. One example: most public clouds do not offer you predictable performance; they don't give you any SLA on compute performance. Your performance can change by as much as 40% depending on which system the cloud vendor landed your VM on; it may be a CPU from several generations ago, or the most current one. That's an example of unpredictable performance even though you're buying the same size machine, maybe m1.small or m1.medium. The other one is limited trust: your workload is running somewhere, and you don't know whether there are other workloads that can look at your data.
So that's the trust problem I mentioned. There's also the gray-machine problem, which OpenStack doesn't handle today. For example, say the fans have failed on a server and the server is getting hot; the default OpenStack scheduling algorithm is just going to put the VM on there, because it doesn't know about those things. But you want to make sure that if there's a noisy neighbor, or the system is unhealthy, you don't schedule more workloads on that system. Those are examples of getting into the enterprise class. And of course, workload placement should be more optimal: if you avoid all these noisy-neighbor problems, the performance of all the workloads is much better, so you get more work done with the same cluster. Those are the problems.

We came up with an intelligent workload placement algorithm that looks at health status, prioritizes all the systems, and is able to pick the right physical host system on which your VM should land. We've enhanced the Nova scheduler in OpenStack. We've developed a compute metric, so the user can specify how much compute they want and make sure they get it; "service compute units" is what we call it. And I mentioned monitoring for the noisy neighbor: finding out when cache contention is happening and flagging it to the user, so you can take remedial action, like moving either the noise-making VM or the noise-affected VM to some other node. And of course boot-time attestation: if it's an important workload, you want to make sure it does not land on a system that has been compromised, or that has not been attested yet. Those are the types of technologies we've built into our product. It basically has three components.
We have an agent on the node that does deep platform telemetry to detect those cache-contention types of problems. We've got a Nova scheduler plug-in that does the intelligent workload placement. And we have our own KVM virtual appliance with a REST API, so if you want to integrate with your existing IT systems, it fits in nicely. Deployment is as simple as putting in the plug-in, installing the appliance somewhere, and putting the agents on the nodes; Linux KVM is supported.

So we're basically solving three classes of problems. The first one is helping customers figure out what size VM they should get, based on their performance requirements. To that end we've developed service compute units, by which they can specify that for the VM. It's based on a formula that looks at the CPU: the frequency, the cache size, and other things you'd expect from that particular CPU, like instructions per cycle. We take all of that and create a metric that can be uniform across generations of CPUs. That's the first one.

Second: if a customer calls in and says, "Hey, my application is slow sometimes," how do you go about debugging it, especially if the problem happens to be something like cache contention? When a customer calls with a problem like that, you can bring up SAA and look at the console, and it will tell you if you're having those problems. We look at cache contention, system health, which I mentioned, and other resource metering, things like that.

And then there's the problem we mentioned: am I running on a trusted node?
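As a rough illustration of two of the ideas above (a compute metric that stays comparable across CPU generations, and health-aware placement), here is a small standalone sketch. Intel's actual SCU formula and the SAA Nova plug-in are not public, so everything here is hypothetical: the weighting, the `HostState` fields, and the filter logic are illustrative only, merely mirroring the shape of a Nova `host_passes`-style scheduler filter.

```python
from dataclasses import dataclass

@dataclass
class HostState:
    """Simplified stand-in for the per-host data a telemetry agent might report."""
    name: str
    ghz: float       # base clock frequency
    cores: int
    ipc: float       # rough instructions-per-cycle factor
    cache_mb: int
    healthy: bool = True            # e.g. fans OK, boot-time attestation passed
    cache_contention: bool = False  # noisy-neighbor flag from the agent

def service_compute_units(h: HostState) -> float:
    """Hypothetical normalized metric: weigh frequency, core count, IPC, and
    cache so the number is comparable across CPU generations. The weights
    are made up for illustration, not Intel's actual SCU formula."""
    return h.ghz * h.cores * h.ipc * (1.0 + h.cache_mb / 100.0)

class AssuranceFilter:
    """Sketch of a scheduler filter in the style of Nova's host filters:
    reject unhealthy or contended hosts, and hosts too small for the request."""
    def host_passes(self, host: HostState, requested_scu: float) -> bool:
        if not host.healthy or host.cache_contention:
            return False
        return service_compute_units(host) >= requested_scu

# Example: a gray machine (failed fans) is skipped even though it is big enough.
old = HostState("node1", ghz=2.0, cores=8, ipc=1.0, cache_mb=20, healthy=False)
new = HostState("node2", ghz=2.5, cores=8, ipc=1.4, cache_mb=25)
f = AssuranceFilter()
candidates = [h.name for h in (old, new) if f.host_passes(h, requested_scu=20.0)]
print(candidates)  # only the healthy node, 'node2', survives
```

In a real deployment this kind of logic would live inside Nova's filter scheduler, presumably wired in through the standard `scheduler_default_filters` mechanism, with the agent feeding the health and contention flags in real time.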
It's a multi-tenant system; has my host been compromised? Can somebody steal the data? Can somebody put up some other VM that can look at things? All those concerns can be mitigated. We whitelist the systems, and we check at every boot to make sure the system's integrity has not been compromised.

So, summarizing: the benefit Intel Service Assurance Administrator gives you is that you can run your workloads with confidence on this kind of OpenStack infrastructure, using all the features we've talked about. We automate the whole process by plugging into the Nova scheduler and enhancing it, so it can make those decisions on the fly; you don't need operators to go spend time doing that, and it's real-time, dynamic, and up to date all the time. So it's fully automated. And the third one is that we have REST APIs and also a web console, so if you have existing monitoring and management systems, you can use our REST APIs to plug the data into your own monitoring system. It's easy to configure. And we're working actively: we have Dell systems in our lab, we're working with the solution to integrate SAA into the Dell reference architecture, and we'll move forward together.

From a resources point of view (we can take questions after this), this deck should be available, so you can take a look at all the resources. There are lots of Dell cloud resources at dell.com/openstack; redhat.com has a whole bunch of resources on this; we have all kinds of YouTube videos out there on the solution; and there's the Intel SAA resource site. Feel free to contact us at openstack@dell.com with any questions about the Dell solution. We also have a live demo of the actual solution running over there, so you can take a look at that. We'll take questions now; we've got about five minutes left. Any questions for either of us?
Okay. What are you looking for in terms of a solution? I mean, in general, does this solve some of the problems you're looking at? We'd like some feedback from you too; if you don't have questions, we'd like to learn from you. What comments do you have on some of the big challenges and pain points you have around OpenStack that you'd like to see solved?

(Audience comment, inaudible.)

What kind of workloads are your customers trying to run? Thousands? So it's a scale issue you're talking about. Correct, yes: it's a cattle workload at very large scale, which they can get from Amazon. With internal clouds, from what I've seen, most people aren't really running at that scale yet. Most of the solutions are in that pilot kind of configuration I talked about: three racks, but still fairly large, 2,000 VMs. I don't know what your VM count is; tens of thousands, maybe. But for most common workloads within the data center, dev/test and the like, some storage-as-a-service kind of workloads, we find that the three-rack system is a good enough starting point, unless you're running something huge, in which case it's a whole different ballgame. Any other comments?
Yes? Yeah, there are a couple of ways of solving that. Red Hat now has a three-year extended support cycle, as we do with the hardware. So as a customer, unless you really want something in the next version of OpenStack, you can stay on the current one for up to three years. And what Red Hat will do, based on our feedback, if a customer says "I want XYZ Neutron feature from Juno," and if there are enough customers asking for it (this is what happens in the RHEL cycle too), is take that one feature and backport it into the previous version. So let's say you're running Icehouse and you want to stay on Icehouse because it's great, it's running, you don't want to touch it, and you've got a three-year support cycle from both Dell and Red Hat. But you say, "Look, Juno has XYZ features; I don't want to upgrade because I don't want to kill my system, it's just running, my users are happy." There's a possibility of bringing individual features backported into your current system. That's one way. The other way is to upgrade, and obviously upgrade is an unsolved problem today. It's in the process of being solved, but it's not a solved problem by any means. Over the next couple of releases we hope to have a much more seamless rolling-upgrade process, even with our solution; the community is trying to solve it, but it's not there yet.

So, compute: the biggest issue is in networking. Compute has been partially solved: you can do an upgrade of your Nova compute nodes pretty much without any downtime. But the networking is the tough part, because you've got all these things linked up, and Neutron is a bear; it's still not there yet. I was just hearing from one of my colleagues: once you restart your Neutron server, it effectively flushes all your traffic flows.
They're gone until the whole Neutron server is up and running again, so you've got downtime with Neutron, which is still not solved. Same thing with the data: if you want to move things from one node to another, you need to bring your data along with your VMs, and everything should be seamless. You can do that with things like Ceph and other technologies out there, but it needs to be a coordinated thing. If you want to fail things over from one geographical location to another, that's by no means a solved problem in OpenStack today; there's a lot of work to be done. Some simple things are being solved, like Nova upgrades; Neutron will come, data backups will come, but I think it'll probably be at least a couple more releases before we get there. That's my view: a couple more releases, one more year. It'll come in phases. I don't know what you're seeing.

One interesting thing we're seeing on workloads: typically we'd think a system like OpenStack or Amazon is great for web applications, which it is, but we're also seeing traditional workloads: Oracle databases, PeopleSoft, Microsoft SharePoint. People want to move them to OpenStack. But like he said, we need to work on the resiliency, because originally the idea is that if it doesn't work, you kill it and you start another one. That's not something you can do with an Oracle database VM; it needs to be really solid. So it's kind of interesting: we're seeing a lot of interest among enterprise users in bringing the legacy workloads over. Initially I used to say, "Don't even think about it, that's not the right workload for this," but it seems like they want to do it, for cost reasons. Other questions, any final thoughts? How many of you are actually here at the summit for the first time?
Oh, a lot of people are new to the summit. Okay. So the last message I'd leave you with: depending on where you are, most cities have an OpenStack meetup group; it's like a user group that meets frequently to talk about their experiences. Dell runs the Austin and Boston groups; those are two cities where we run an OpenStack meetup. So if you're in either of those areas, check out the meetup sites and come join us. We have monthly meetups with about 60-70 people every month: people deploying OpenStack, people building OpenStack. They come and share their experiences, lessons learned, and things like that. You can also check out your own local group; there's one in Paris, and I think there are 10 or 20 different meetups here in Europe. It's a great place to go and check things out. It's a growing community with lots of interest, though it'll certainly take some time before it's ready for the enterprise.

Oh, the largest one? The pilot system at tens of thousands of VMs: that depends on a couple of things. Is it geographically distributed? If you have multiple regions and multiple data centers, that complicates things; then you're talking about availability zones. Even within those, if some servers go down, do you want disaster recovery? These kinds of consultative things take about, I'd say, two to three weeks. There's a workshop that Dell and Red Hat do with their customers; it's about a week long. It could be shorter.
I mean, we can spend three days just working through it, and it's a free workshop. If a customer says, "We want to do it in three different time zones and five different data centers," then we sit down and do an architecture: we take our basic reference architecture and build that out. It could be anywhere from one month to six months; it really depends on the scale at which the customer wants to deploy. The pilot system, the pre-packaged one, is under a week: it's all ready to go, you get your 2,000 VMs deployed, up and running, and you can start putting your workload on it within the week. The larger ones all depend on your requirements: your storage, your networking, and so on. So it varies between one month and three months, up to six months in some cases. Red Hat has done some really large deployments, but those kind of start from scratch: okay, what servers? And sometimes customers have their own servers sitting in a basement somewhere that they want to use; they may not be any of the certified ones we have, so you're kind of building it as you go. It's not a pre-packaged, ready-to-go system, and that's why it takes time. If you just buy it the way it is, it's pretty quick, and the user experience is also that it works out of the box.
You put your workload in and your users are up and running.

And they emerge all the time; I can tell you that from experience. Just with three different systems, with an installer, with OpenStack, and with Ceph: every time something changes, whether it's the operating system, the OpenStack packages, or the Ceph version, something breaks. That's just the way it is right now, and like I said, these are all different moving parts, and not all of them are synchronized. What we're doing with our engineering is putting all of them together, making sure they're integrated, deployed as one piece, and supported as one piece. That's a lot of engineering effort, and that's what you're buying.

We have a hardware compatibility list of all the servers right now with Red Hat, and we'll do the same thing there. Yes, absolutely.

Okay, I think we're running out of time. Thank you very much for your time. If you have any questions, feel free to reach out at those email addresses. Thank you.