Good morning everyone, and welcome to this session. It's great to see you here, because I know lunch is probably opening now, so after this session you can go straight to the dining room. Let's get started; I think you'll find it worth your time. Today we will talk about how to use OpenStack Valence, a new project, to manage disaggregated resources. This is a joint effort between three people. My name is Shuquan Huang, from 99Cloud.

Hi, I'm Madhuri. I work at Intel, mainly on container-related projects like Zun and Magnum, and I've just started working on Ironic.

And next is Hubin.

My name is Hubin, and I come from Lenovo. I work on the Valence project, and inside the company I'm a team leader; my team focuses on bare-metal cloud, RSD, and OpenStack.

OK. OpenStack Valence is a new project that was announced at the last summit, in Barcelona, so we have been working on it for several cycles. This session is essentially a report-out on how we can use OpenStack Valence to manage disaggregated resources, in our case Intel RSD pooled resources.

This is today's agenda. First of all, I will talk about the challenges of the high-scale data center.
How can the current data center meet those challenges, and how can we resolve them with different technologies? Then we will introduce the RSD technology and the Redfish API: how we can use Redfish to manage RSD, and how Valence encapsulates the Redfish API to interact with RSD. Then we will deep-dive into the Valence project itself and introduce the architectural design, the common use cases, and how we can use it day to day to resolve the problems we meet when we evolve data center infrastructure. Last but not least, we will talk about the Valence features and the roadmap.

OK, the challenges of the high-scale data center. According to research, by 2020 there will be 50 billion connected devices around the world, and technologies such as 3G, wireless, and optical networking will further connect all those devices together. Data centers will need better network connectivity and the ability to handle all those connections; it will be a hyper-connected world around the globe. The data center will change, because we will need cloud technologies to help us process that huge amount of data, and the data will help us do more AI and handle our jobs more intelligently. So there is a large number of compute nodes to manage in the data center. We cannot use traditional methods to manage the whole data center, because the hardware is disaggregated and not fully pooled, so we need another mechanism to handle that hardware. As Mark mentioned in the keynote, the whole infrastructure will change to a composable infrastructure. What is composable? From the software stack, as we saw in yesterday's keynote, Mark mentioned that the OpenStack projects will serve as standalone projects, so people can grab the projects and stack them together according to their requirements. And from the hardware perspective,
we would like to make the hardware a composable infrastructure as well. There is a cutting-edge technology we would like to introduce to you: Intel RSD. It helps us compose all the hardware, including networking, compute, and storage resources, according to our requirements. We can also use the telemetry data from the composable infrastructure to improve data center efficiency and reduce its cost. Those are things a traditional data center infrastructure cannot give us. So now we will introduce what RSD is.

Yeah, so I'll try to explain what the rack scale architecture is, then what the Redfish API is, and then I'll briefly talk about the Valence project and what it is trying to do with this rack scale architecture. The rack scale architecture allows you to manage pools of resources: compute, storage, and networking. Let's say, for example, we have a rack of these resources, and we have use cases in different data centers and clouds where we need servers with different specifications. For example, we want some servers with more cores so that we can run high-performance programs on them, and maybe some servers with more storage so that we can run programs like databases. Using this rack scale design,
we can compose nodes on the fly with different specifications. With the rack scale architecture we also have the advantage of effective resource usage. For example, say we create a node with some specification, and after some time we don't need that many resources; we can release those resources back to the rack, and they can be used again by other servers. So we are utilizing resources effectively with this rack scale architecture. Today we don't think about individual servers; we think about racks of servers, where many servers are running in data centers and clouds, and we can compose these servers using the rack scale architecture.

There are various companies trying to make this architecture happen today. These architectures exist, but they are not yet fully fledged; people are working on them. To name a few: the Open Compute Platform, EMC, HPE Synergy, Intel Rack Scale Design, and DriveScale. You can see the details of all these architectures in their documentation; we have provided some of the links.

Now we'll talk about Intel Rack Scale Design. Intel RSD is likewise trying to enable this rack scale architecture, and it has different components for managing resources. For example, we have a rack, a rack can contain multiple chassis, and a chassis can contain multiple nodes. To manage these resources there are different components in RSD. The most important component is the pod manager, which manages racks, and different combinations are possible.
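The rack / chassis / node hierarchy and its per-level managers can be pictured as a toy data model. This is a sketch for intuition only; every class and field name here is invented for illustration and is not the actual RSD or Valence object model.

```python
from dataclasses import dataclass, field

# Toy model of the management hierarchy described above: a pod manager
# (PODM) manages racks, each rack has a rack management module (RMM), and
# the servers in each chassis are managed by a pooled system management
# engine (PSME). Names are illustrative, not RSD APIs.

@dataclass
class Chassis:
    psme: str                       # PSME endpoint for this chassis/drawer
    systems: list = field(default_factory=list)

@dataclass
class Rack:
    rmm: str                        # per-rack management module endpoint
    chassis: list = field(default_factory=list)

@dataclass
class PodManager:
    endpoint: str                   # e.g. the PODM's Redfish URL
    racks: list = field(default_factory=list)

podm = PodManager(
    "https://podm.example.com:8443",
    racks=[Rack("rmm-1", chassis=[Chassis("psme-1", systems=["blade-1", "blade-2"])])],
)
```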
For example, say we have three racks: one rack can be with OpenStack, another with AWS, and the third with Microsoft Azure. That kind of combination is possible using multiple pod managers; we can support multiple cloud operators by having separate pod managers running on individual racks. Then we have the RMM, the rack management module, one per rack; a rack can contain many systems. And we have the PSME module, the pooled system management engine, which manages the servers in an individual chassis.

The mode of communication with this kind of hardware is the Redfish API, so now I'll talk about how that communication works. There are many companies, and each has its own hardware, so we need a common way to interact with it. Many companies got together and defined a standard, called the Redfish API, for talking to this kind of hardware. Redfish is an API specification that you can use to talk to pooled infrastructure; you can think of Redfish as a replacement for IPMI. You can use it to talk to individual servers the same way we used to do with IPMI. Redfish specifies the schemas of requests and responses, and they apply to any kind of hardware; it does not depend on which hardware is running at the back, so you can use the Redfish API with any vendor.

In the Redfish API there are many resources. The root one is /redfish/v1; from it you can reach the other resources, such as Chassis, Systems, Managers, and Nodes, which are the individual servers in the RSD design. You can use these APIs to manage them, read and write some data, or compose a node.
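As a small illustration of how resources hang off the service root: the /redfish/v1 root and the Systems, Chassis, and Managers collections come from the Redfish specification, while Nodes is the RSD extension mentioned above; the host name and helper function are placeholders of my own.

```python
# Sketch of Redfish resource addressing. REDFISH_ROOT is the standard
# service root; the helper just joins path segments under it.
REDFISH_ROOT = "/redfish/v1"

def resource_url(host, *segments):
    """Build a URL for a Redfish resource under the service root."""
    return "https://" + host + "/".join((REDFISH_ROOT,) + segments)

print(resource_url("podm.example.com", "Systems"))
# -> https://podm.example.com/redfish/v1/Systems
print(resource_url("podm.example.com", "Nodes", "1"))
# -> https://podm.example.com/redfish/v1/Nodes/1
```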
So this is an example Redfish API call that you can use to compose a node from a rack. The Nodes resource manages the nodes, that is, the actual physical servers with CPUs, memory, and RAM attached. After a node is composed, you can use additional APIs to, say, power those nodes on and off through a RESTful API.

There are many companies involved in defining this standard; in total, 22 companies have written it down. You can go and look at their site, the DMTF, where they provide the specifications and standards for the Redfish API.

Now I would like to talk about the Valence project. Valence is an OpenStack project that is trying to enable this rack scale architecture in OpenStack; today it supports the Redfish API to talk to the hardware. You can say that Valence manages the life cycle of disaggregated systems. You can use Valence to compose a node on the fly. For example, say you want a node with a certain specification: 10 cores, some NICs, 10 GiB of RAM, and some hard disks. You use the Valence API with a flavor defined in Valence, and Valence uses the Redfish API to talk to the pod manager running on the hardware, which composes a node with your specification. After the node is composed, there are many APIs in Valence for other kinds of actions, like powering it on and off, attaching an additional hard disk, or removing some cores from it. So there are various APIs defined in Valence itself that you can use.
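As a rough illustration, a compose request of the kind just mentioned could carry a body like the following. The "Processors"/"Memory" field names are modelled loosely on the general shape of RSD node-allocation payloads and may differ between RSD versions, and the target URL is a placeholder; treat this as a sketch, not the exact contract.

```python
# Hypothetical compose-node request body; field names follow the general
# shape of RSD node-allocation payloads but are not guaranteed verbatim.
def compose_request(name, cores, ram_mib):
    return {
        "Name": name,
        "Processors": [{"TotalCores": cores}],
        "Memory": [{"CapacityMiB": ram_mib}],
    }

body = compose_request("demo-node", 10, 10 * 1024)
# A client would POST this to the pod manager's Nodes resource, e.g.
# POST https://podm.example.com/redfish/v1/Nodes/Actions/Allocate
```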
You can use those APIs to act on your composed node.

In Valence we also support multiple pod managers. By multiple pod managers we mean that you can have multiple racks, and, for example, one rack can be dedicated to OpenStack while another rack is dedicated to other cloud operators, for example AWS or Azure. So it is possible to abstract the cloud providers using this multiple-pod-manager concept. You just specify where your pod manager is running and the credentials to talk to it, and then Valence can manage different kinds of cloud operators on any kind of infrastructure.

The next feature is Valence flavors. A Valence flavor is the same idea as a Nova flavor: it specifies the specification for a node, such as how many cores, how much storage, or which NICs we want. You can provide the flavor in the compose-node API, and Valence will fetch that information and pass it to the Redfish API; the Redfish API talks to the pod manager, and that's how you get a node composed with your given specification.

The last feature is the integration with Ironic. Today Ironic has some Redfish support; earlier it supported IPMI to talk to the hardware, and today it also supports the Redfish API. So what does integrating Valence with Ironic mean?
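The flavor-to-specification translation just described could be sketched like this. The flavor layout is modelled on Nova flavors as the talk describes, and none of these field names are taken from the actual Valence schema; it is illustration only.

```python
# Hypothetical Valence flavor and its translation into the hardware
# specification handed to the Redfish layer; all names are illustrative.
flavor = {
    "name": "db-node",
    "properties": {"cores": 16, "ram_mib": 65536},
}

def flavor_to_spec(flavor):
    # Translate flavor properties into a compose-node resource request.
    p = flavor["properties"]
    return {
        "Processors": [{"TotalCores": p["cores"]}],
        "Memory": [{"CapacityMiB": p["ram_mib"]}],
    }
```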
I can give you an example. After a node is composed in Valence, you will want to boot an OS on it, so you can use Ironic, or any other provisioning tool, to install your OS. After a node is composed in Valence, you register it with Ironic, and then use the Ironic or Nova API to install an OS on the composed node. After the node is provisioned, you can run any kind of service on it.

This is the architecture of Valence. In Valence we have an API service and controllers for managing different kinds of resources: controllers for systems, nodes, flavors, storages, and managers. So, for example, if you say "compose a node for me", the API called is the v1/nodes compose call, and Valence will make a Redfish API call to the pod manager running on the actual hardware. Let's say this hardware is Intel Rack Scale hardware, so it has the pod manager running; you have to specify the pod manager credentials in Valence. Valence picks up the pod manager details, creates a connection to it, and finally the node is composed on the hardware. That is how the interaction goes: from Valence we make a Redfish API call, the call goes to the pod manager running on the hardware, and we get a node composed on the rack.

Now, this is just what I have explained: what are the steps to run an OS on this hardware? Initially we have a pool of hardware; what can we do to run an OS on it, and how can we use Valence to do that?
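Seen from the client side, the v1/nodes compose call described above might look roughly like this. The URL layout and body fields are assumptions for illustration, and no HTTP request is actually sent in this sketch.

```python
import json

# Sketch of a client call into the Valence API: a POST to /v1/nodes.
# URL and payload shape are illustrative assumptions.
def compose_via_valence(valence_url, name, flavor_id=None):
    body = {"name": name}
    if flavor_id:
        body["flavor_id"] = flavor_id
    # In a real deployment this would be an HTTP POST, for example:
    #   requests.post(valence_url + "/v1/nodes", json=body)
    return valence_url + "/v1/nodes", json.dumps(body)

url, payload = compose_via_valence("http://valence.example.com", "demo-node", "flavor-1")
```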
First, we compose a node from Valence; Valence talks to the pod manager. After the node is composed, we can enroll it in any kind of provisioning tool; today that is Ironic. So we enroll it in Ironic, Ironic sets up the boot device and powers the node on, and the provisioning is done by Ironic through PXE and a TFTP server. After that, we reset the boot device and power the system on again. Finally, we have an operating system installed on a node composed from a pool of disaggregated hardware. We also provide other APIs for CRUD operations on nodes: you can create, delete, and update the state of nodes using the Valence APIs.

Now I'll show you a short demo where I compose a node using the Valence API, register it with Ironic, and then show how we can power those nodes on and off using the Ironic CLI. This is possible because Ironic today supports the Redfish driver for managing this hardware.

Yeah, so you can see that there were no nodes; this is the API that manages nodes in Valence. To create a node in Valence you can provide a flavor, the specification for your node, but it also has a default configuration that it uses. I have not provided a specification here, because flavors were not supported at that time; you can create a flavor before creating your node and provide it in your nodes-compose call. After this we have a node composed; you can see the details of that node, and we now have one node created here. You can read the details of that node. Can you pass it here?
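The compose, enroll, and deploy steps just walked through can be condensed into a sketch. Every function here is an illustrative stand-in, not a real Valence or Ironic call; only the order of the steps comes from the talk.

```python
# The compose -> enroll -> deploy flow described above, as stand-in
# functions that track the node's state.

def compose_node(spec):
    # Valence asks the pod manager to allocate hardware matching `spec`.
    return {"uuid": "node-1", "spec": spec, "state": "composed"}

def enroll_in_ironic(node):
    # Register the composed node with Ironic (redfish driver).
    node["state"] = "enrolled"
    return node

def deploy_os(node):
    # Ironic sets the boot device to PXE, powers on, installs the image
    # over PXE/TFTP, then resets the boot device to disk and reboots.
    node["state"] = "active"
    return node

node = deploy_os(enroll_in_ironic(compose_node({"cores": 10, "ram_mib": 10240})))
```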
Yeah, so if you look at the details of the node, you can see, for example, the MAC address, and if you scroll up you can see how many cores are attached to it, the NICs attached to it, and how much RAM and how many drives this node has. There is a way to provide this kind of specification for your nodes as well. Now, we see that we have only three nodes in Ironic; we'll go ahead and register this newly created node in Ironic, and then try to power it on and off using the Ironic CLI. Now you can see that we have four nodes after registering our composed node, and we'll power that node on using the Ironic CLI, with something like `ironic node-set-power-state`. You can see we have provided all the details for our pod manager; these are the Redfish API details we need in order to perform the actual operations on the hardware. So now you can see that after we set the node's power to on, we were able to power the node on remotely using the Ironic API. We are not using the IPMI tool here; we are using the Redfish API to control our nodes, the hardware running on a rack of disaggregated hardware. Now Hubin will explain the use cases of RSD and Valence.

OK, in this part I will talk about the use cases of RSD and Valence: the Ericsson HDS 8000, the Dell EMC DSS 9000, and I will focus on introducing the Lenovo xClarity pod manager. Before I introduce the use cases, I will show the relationship between Valence, the pod managers, and the data centers. As you know, RSD covers rack-level hardware, so it lives in your data centers, and depending on your data center's size, you may need to split the data center across different pod managers. In other cases you have to do this because your data centers have different networks.
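Going back to the demo for a moment: enrolling the composed node in Ironic with the Redfish driver needs roughly this kind of node record. The `driver_info` keys follow Ironic's redfish hardware type as I understand it, and all values are placeholders; verify the key names against your Ironic release.

```python
# Hypothetical Ironic enrollment record for a composed node using the
# redfish driver; all values are placeholders.
node = {
    "driver": "redfish",
    "driver_info": {
        "redfish_address": "https://podm.example.com:8443",
        "redfish_system_id": "/redfish/v1/Systems/1",
        "redfish_username": "admin",
        "redfish_password": "secret",
    },
}
```

These are the "pod manager details" shown in the demo just before the power-on call.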
Maybe they are integrated with different user scopes, or you need your data center to support pod managers from different vendors. So you split your data center across different pod managers, and after you have several pod managers, you register them into Valence one by one. After you've done that, Valence provides monitoring of each pod manager and can tell you each pod manager's usage status and usage summary, so Valence can tell you what is happening in each pod manager.

After talking about the structure, I can show you the products and projects. The first one is the Ericsson HDS 8000. From the picture you can see that it is based on a hyperscale data center system; above the hyperscale hardware it runs several pod managers, and it has a cloud manager integrated with them, which also provides VMs. So it has already connected software management and hardware management, which is a good implementation. It also provides a command center which can be exposed to third-party integrators, so traditional network management systems can integrate with it, and if you want to manage physical composable resources for provisioning and applications, you can use that too. So Ericsson has a good implementation.

Next, let's talk about the Dell EMC DSS 9000. From the picture you can see that it already covers different workload requirements, from low to medium to high. It opens up the management layer, and it is ready for Redfish and RSD. From the picture it seems that it has already turned the hardware into pooled resources for its users.
So it is a product and a real implementation, and, as with Ericsson, it shows that it provides maximum flexibility by treating the hardware as pooled resources.

Finally, I will show you the Lenovo xClarity pod manager. From the picture you can see a process flow, for example, for composing a node. The user initiates the request, "I want some nodes", to Valence. We have many pod managers, so Valence needs to schedule one pod manager to serve the request. Say it chooses a particular pod manager; in that pod manager you can see racks, and in those racks you can see that it has not only the common RSD design of compute systems and chassis, it has also defined GPU chassis. That is the difference in the Lenovo flow: the Lenovo pod manager can provide a composed node with NVMe, what we call NVMe over Fabrics, and with a GPU, so this composed node is more than a traditional one. After you compose a logical node, you can use deployment tools like TripleO to turn it into a service node, and then deliver the service node to the end users. From the end users' side, what they use is effectively a physical server node. So this is the whole workflow of the pod manager, and this is what Lenovo is doing.

After talking about the use cases, I will talk about the roadmap and features of Valence. First, full support for integration with provisioning tools; Ironic is the first one, and then maybe we will find others. Second, multi-cloud support; this time it is not about hardware but about software management.
We have been doing hardware management, but we also want to integrate with different clouds, so Valence becomes the connector. The third item is enhanced telemetry support, which means a better user experience and more convenience for users. And the last slide is a summary: what Valence does is fill the gap and build a connection between hardware and software. If you bring Valence into your software management or your hardware management, you can improve both. That's the role Valence plays.

OK, that finishes the talk. Any questions?

Q: How would a hardware upgrade work in a rack scale design or similar, and how would the Valence project detect that there are flavors already using the particular node that gets upgraded? Is it plug-and-play style, or is there anything else that needs to be done?

A: OK, let me show you this part. In the pod manager you have the chassis, and there is a management module on each piece of hardware: the rack has an RMM, each chassis has a chassis manager, a GPU chassis has its own manager module, and your storage has a RAID controller, something like that. So every piece of hardware has a management module, and Valence just needs to connect with the management modules. It can compose a node from a compute system and connect it to storage and to a GPU; over the network it can connect the different components, and finally it gives you a composed node. I can give you another example: by calling the RSD 2.1 API we can compose NVMe storage with a physical node dynamically by using the PCIe switch; they control the switch and hook up the NVMe to the node you want to compose.

Any other questions?
OK.

Q: Like you mentioned, when a hardware vendor wants to integrate with Valence, they have to implement the Redfish API. What if they don't want to, or can't, implement the Redfish API? Is there any other way they can integrate with Valence?

A: Yeah, some hardware vendors may have implemented their own version, but as far as I know, a vendor can use an existing pod manager solution. Intel has a reference design, and there are commercial versions; hardware vendors can just take advantage of that. In addition, since Intel provides a reference, every vendor can make extensions on top of it.

Q: So in that case, would the plugin live in Valence upstream?

A: No. Right now it integrates with OpenStack through Ironic, but such a plugin is not implemented.

Q: Is there any future plan, like, will the community accept that kind of hardware-vendor-specific plugin upstream?

A: Actually, the Redfish spec is not a vendor-specific standard; it is an industry standard. So Ironic can use the Sushy Python Redfish library to talk to the RSD resources through the Redfish API. What Valence has done is work with the Ironic community to implement the Sushy Python Redfish driver, reviewed and committed in the Ironic community; the driver code is not in the Valence repository, it is in the Ironic repository.

Q: OK, because in my case we have two options: either implement the Redfish API, or put some plugin or driver on the Valence side. So, yeah, in that case, thanks.

A: Hi, I think, to your question about plug-and-play, the look-at-this-side, talk-to-that-side type of stuff: it's all about the life cycles of your storage, your compute, and so on, and they can change at different times. Suddenly, let's say, there's a new storage product available.
You can migrate the data and then plug in your new storage solution, something cheaper, denser, maybe faster, and still leverage it. That's one of the whole points of having this disaggregation. And to your question about extensions: as they said, we have the Redfish standard, and as long as you do things like power on, power off, pause, and so on using that standard, everything is fine. But once you compose a node, the composed node can carry metadata with it; it could be, say, a special "foo" kind of composed node, and this "foo" node has support for some extended calls, so you could have a driver specific to that "foo" type of node, built on top of the Redfish API. So that's a possibility, depending on what vendor support you have.

Exactly, and I think something that's really important is that when we first started this project, there were a lot of requests from our sales team to start thinking about it as a greenfield application: here you had racks, and I want you to be able to compose nodes and clouds out of them. When we thought of it from that approach, the cloud could have been a Kubernetes cloud or an OpenStack cloud, and some aspects were going to be common to all of them, like the composition of a node, the administration of a node, and seeing how much capacity is still available in your racks. That's how the Valence project was born. But a more lightweight way is a brownfield application, where you already have a cloud, and alongside that cloud you point it at your racks, and the racks have a pod manager. If you do that, then you're basically just adding and deleting nodes from the cloud, and that's the way the Ironic integration works: if it's brownfield, take that approach. If you're just adding it as a bare metal node, that works; otherwise you can create a bare metal node, deploy a compute host, like a hypervisor, on it, and register it with Nova. That works too.

Thank you, thank
you. So, if there is anything about Valence, or about RSD, or anything else, you can ask.

OK, let's finish here. Thank you, everyone. Thank you.