Welcome everyone. So this is the last session of the day, so I hope you've still got the energy to go through it with us. We have three speakers, so let's introduce ourselves. I'm Jim Golden. You can thank last night for that picture. I'm an IT specialist at NIST, the National Institute of Standards and Technology, an OpenStack operator, and I run GPUs on our cloud.

I'm Justin. I'm a performance engineer at Red Hat, and I enjoy running and using accelerators. I'm also a big fan of rockets.

And I'm Howard Huang. I'm a standards engineer from Huawei and the current caretaking PTL for the Cyborg project. This is a picture we took at the Atlanta PTG with other teammates.

This talk consists of four parts. I will first introduce a little of the background and go down the history lane. Then Jim will present the requirements gathered from the Scientific Working Group, especially around HPC. Then Justin will give a deep-dive introduction to the Cyborg project: what the current status is, what we are developing, and what our specs look like. And then we'll go over our future work.

I want to start with a quote from James Hamilton, VP and Distinguished Engineer at Amazon, from a post he recently wrote for his famous Perspectives blog. I think it represents most of the thinking of the Cyborg team: accelerators are no longer just icing on the cake. They will be a requirement rather than an option. As we all know, Amazon rolled out FPGA and GPU instances last year around re:Invent, and I think that is a very clear signal to the industry and the market that we are entering an era of accelerators.

We have actually been having conversations about accelerators for a considerably long time. The conversation first started in a standards organization, ETSI, which is a telecom standards body. When they drafted the standards for NFV, network function virtualization, operators came to the working groups and said: we need accelerators, for example for offloading or for IPsec acceleration. So we formed a group working on a standards document to describe the overall requirements for acceleration. At the start we didn't really know what to put there; we simply looked at what Nova could provide at the time. Gradually we found there was a general requirement bigger than what Nova could then offer. I think that was part of the original thinking behind an open source project dedicated to management of accelerators.

Then came the OPNFV DPACC project, within which we had a more specific set of requirements for acceleration management. But OPNFV is an open source integration platform: it does some coding, but it basically just does integration. It turned out we needed an upstream project for DPACC that would actually do the coding. That's when we started the project called Nomad, in which we tried to develop accelerator management for NFV, for the telco requirements. Then Michele Paolino from Virtual Open Systems and I held a BoF session back at the Austin Summit, because we wanted to be sure whether this was also a requirement from the OpenStack community; we often find that requirements from the OPNFV side do not necessarily match the OpenStack community's.
But then we found that the desire and the requirements within the OpenStack community were much larger than we thought, and that enabled the transition to the current Cyborg project. During the Barcelona Summit we held a design session to collect thoughts from interested parties, and we found requirements for acceleration management not only from NFV but also from HPC and from public cloud. That's when we decided to truly form a new OpenStack project to officially address this issue. We ran a poll and a vote on the naming; Jim actually came up with the name Cyborg, and it got a unanimous vote.

Some quick facts: we are a new project, an official project, and we are developing the code from zero. We are now reviewing the specs, and after the specs are all frozen we'll start the actual code development. We follow the Four Opens principles, so everything, discussion, design, review, happens in the open, and you can trace it all on the wiki. We have a weekly meeting on Wednesdays on the IRC channel, and if you have any questions you can just send an email with acceleration in the subject to the mailing list. You can find our meeting minutes on the wiki, and if you are interested in which companies are participating, just look at Stackalytics for the stats. And without further ado, I'll hand it over to Jim.

Okay, so: the Scientific Working Group. I have the aims of the Scientific Working Group up here, but essentially it exists to promote scientific computing on top of OpenStack. Why would you want to do that? Historically, scientific computing has been associated with HPC, and HPC clusters are generally fairly rigid: you have your workload manager, your MPI, and your bare-metal servers. You're limited to the libraries installed there, and you're largely stuck with that setup. But scientific computing varies quite a bit; you've got everything from big data to data science to AI, deep learning, and neural networks. Running these kinds of workloads on top of OpenStack gives you a much greater amount of flexibility, plus a management layer for the workloads.

Typically, as I said, in high performance computing, performance is important, so accelerators are often used. I may be preaching to the choir here, but accelerators are used to speed up your workloads. They're typically GPUs or FPGAs, and you can also do SR-IOV networking, generally for specific workloads. When you're using accelerators like that, you need to monitor them and examine their telemetry. And they're not easy to use, and they're even more difficult to use in OpenStack.

To give you an example, we use GPUs in our current stack. So you can use GPUs in OpenStack; it's just not easy, per se. First you have to set some BIOS parameters and some kernel settings. You have to do some whitelisting and blacklisting and set up configuration in Nova on your compute nodes and on your controllers. Once you have that set up, and assuming you're doing PCI pass-through, you have to create specific flavors that connect to those specific GPUs. And once you get all of that working, it may be working, but it may not deliver the performance you need. To get that performance you'll also have to do KVM tuning: CPU pinning, NUMA topologies, huge memory pages, et cetera.
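As a rough illustration of all those manual steps, here is a minimal sketch using python-novaclient. The flavor name and the "gpu" PCI alias are hypothetical, and the alias only works if it matches whitelist and alias entries the operator has already placed in nova.conf:

```python
# Minimal sketch of the manual GPU pass-through setup described above.
# Assumes nova.conf on the controllers/computes already carries something like:
#
#   [pci]
#   passthrough_whitelist = {"vendor_id": "10de", "product_id": "1b80"}
#   alias = {"vendor_id": "10de", "product_id": "1b80", "name": "gpu"}
#
# (vendor/product IDs are illustrative), after which Nova services must be
# restarted -- the "bounce" complained about later in this talk.
from keystoneauth1 import loading, session
from novaclient import client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://controller:5000/v3", username="admin", password="...",
    project_name="admin", user_domain_name="Default",
    project_domain_name="Default")
nova = client.Client("2", session=session.Session(auth=auth))

# A flavor that requests one device matching the "gpu" alias, plus the KVM
# tuning knobs mentioned above (pinning, NUMA, huge pages).
flavor = nova.flavors.create(name="gpu.large", ram=16384, vcpus=8, disk=80)
flavor.set_keys({
    "pci_passthrough:alias": "gpu:1",  # pass through one matching GPU
    "hw:cpu_policy": "dedicated",      # CPU pinning
    "hw:numa_nodes": "1",              # keep the guest on a single NUMA node
    "hw:mem_page_size": "large",       # back guest memory with huge pages
})
```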
Once you get all of that working, you have essentially two options for how to manage it. You can run a heterogeneous system where GPU hosts and CPU-only hosts are mixed. This has good and bad parts. The good is that all your CPU resources stay available, so when your stack is full you don't have any wasted CPUs. The bad is that the Nova scheduler doesn't prioritize your GPU workloads, so if a GPU-enabled host fills up with ordinary instances, you may have to go through manually and move instances off of it to let GPU-enabled workloads run. The alternative is to set up GPU-only host aggregates (there's a short sketch of that setup below). When you do that, your GPU hosts are segregated, but that's also a bad thing, precisely because they're segregated: your CPU workloads can't burst over into those hosts. So there's no great way to handle this, and you end up micromanaging your instances. And when you're micromanaging instances, your cattle become suspiciously pet-like.

So what we need is Cyborg: a framework to manage these systems so you can attach and detach accelerators or other special devices to your instances, similar to how you attach and detach storage, but more than that, a framework to manage the whole lifecycle. You can attach and detach devices, and you can also have the NUMA topology and CPU pinning set up at the same time. You have a driver that takes care of everything, so you don't have to write individual scripts that only you use, and you can share those drivers. That's a teaser for what's possible with Cyborg. I'm going to hand it over to Justin now to explain a little more.

Okay, so I'm going to start with why I believe Cyborg is a tool for everyone. And sorry, we're sort of repeating each other here, but acceleration is going to be mandatory in the future. If we want to derive that from first principles of computing: as die shrinks continue to slow down, single-core performance is not increasing very quickly, if at all. Parallelism is increasing, but traditional x86 dynamically scheduled CPUs don't make good use of parallelism; that die space can go to GPU-style resources that are much more efficient. So essentially, unless you are willing to scale out forever (and lack of hardware performance increases is exactly what drove the move from scale-up to scale-out), things aren't going to get any faster. Your customers and your workloads are going to keep getting bigger, though. So we're at an impasse: things need acceleration, they need more specialized hardware, if they want to continue advancing at the rate we've enjoyed for the past decade.

When I made this slide, the prospect of FPGAs on Intel CPUs seemed far off. But between when I made the slide and now, it was announced that we can expect to see FPGAs in Intel CPUs in the next couple of years. So this isn't going to stay a problem of accelerators being specialized things for NFV or scientific computing. It's going to be: I bought this rack of servers, they have a ton of FPGAs in them, how am I going to use that? OpenStack needs a solid solution to that problem.

As for why we need Cyborg specifically: doesn't this belong in Nova? Well, yes, a lot of it does. Some of the PCI attachment work and much of the placement work belongs there; it's all going to be Nova Placement API work and Nova API additions or modifications.
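Here is that host-aggregate workaround as a hedged sketch, reusing the novaclient setup from the earlier example; the host and flavor names are hypothetical, and it assumes the AggregateInstanceExtraSpecsFilter scheduler filter is enabled:

```python
# Hedged sketch of a GPU-only host aggregate. `nova` is the novaclient
# Client constructed in the previous sketch.
agg = nova.aggregates.create("gpu-hosts", None)   # no availability zone
nova.aggregates.add_host(agg, "compute-gpu-01")   # hypothetical host name
nova.aggregates.set_metadata(agg, {"gpu": "true"})

# Pin the GPU flavor to the aggregate. Keeping ordinary CPU flavors *off*
# these hosts then requires tagging every other flavor as well, which is
# exactly the micromanagement described above.
flavor = nova.flavors.find(name="gpu.large")
flavor.set_keys({"aggregate_instance_extra_specs:gpu": "true"})
```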
What Cyborg is for is lifecycle management of accelerators, to keep you from having to go hunting for the machines that have a GPU. Let's say you have a data center and one user who needs to run machine learning applications. Right now you will spend more time babying those machines than you will spend on the entire rest of your cloud, and that's just not feasible. We need it to be possible for you to place an accelerator into any one of your machines and have a management framework handle the complexity of making sure the accelerator is there, installed, and set up, that instances go to the right place, and that everything just works.

So, yeah, this gets into... yeah, animations. Anyways, this gets into how we intend to use the Nova Placement API to make sure that things actually end up in the right spot. Placement is relatively new: it only came out in Newton, and it's getting fleshed out over the next couple of releases. Whereas previously people who used accelerators, GPUs, whatever, in OpenStack had to perform their own scheduling hacks, we now have a more standardized method of getting instances to the right spot, and with the addition of reservation APIs and other things of that nature, we can actually get to solving the problem of making sure your instances land where they should, so you can use your accelerators without having to write really horrible scripts.

And in the end, that's sort of the point of Cyborg. There's going to be nothing super revolutionary in this talk; this is organization. People who are using these accelerators for HPC or NFV are already scripting all of these things, just not in a way that can be easily contributed upstream and consumed by everybody else. The goal of Cyborg is to centralize that effort so that we can all benefit from each other's work.

Let's see if I can... yes, okay. So this gets into the real heart of Cyborg: the idea of drivers that we can use in the cloud. You might think drivers sound kind of scary. Aren't those C things that people have to understand the kernel to write? No, that's not what I'm talking about. What I'm talking about is probably what everybody who has set up an accelerator on OpenStack already has: an Ansible playbook or some other automation tool to install the actual binary driver, make sure it's loaded, make sure things are working, the basic things that we as OpenStack administrators have to do in our day-to-day lives. If you've administered a cloud with GPUs, you probably have a decent portion of a Cyborg driver on your hard drive already, sitting there languishing, unused by the community.

So the driving force behind Cyborg is really a management framework for these drivers, which we will make as easy to write as possible by minimizing the number of functions you are required to implement and keeping them as abstract as we can. There are unfortunately some complexities of accelerators you can't really avoid, like programming FPGAs. But anyways, we really want to make a standard. We want it so that we can all get together, so vendors, users, and manufacturers have a single target for getting an accelerator working on OpenStack, instead of handing us hardware and a binary driver and saying: here, you figure it out. A word of caution about standards, though.
So I figured it was a good idea to include the comic, to remind us, as we design the requirements for a driver, that we need to be very careful about what we do when we make our standard, lest we only exacerbate the problem.

Okay, so here we get into the actual structure of Cyborg, and there are two or three slides of diagrams we're going to go through about how things work and where things live. The Cyborg API is the endpoint you will interact with as a user and an administrator. It's really fairly boring: it takes commands and passes them to the other components that actually do things. The Cyborg Conductor, which actually wasn't there when we made this presentation like two days ago and is definitely in design right now, essentially exists to aggregate database requests and make sure we don't destroy whatever the database is by having too many agents trying to send data to it. We're not fixed on which database we want to select yet, so it's possible we could pick one where we wouldn't need the conductor, but it's not a big deal either way.

Now, the Cyborg Agent actually resides on the compute nodes, as opposed to your controllers, and my vision for the agent is that it does nothing but run drivers, because there's a dizzying range of possible accelerators. What if you have a PCIe encryption accelerator, which I didn't know people were actually using until earlier today? What if you have a GPU, or an FPGA? What if the FPGA is on your CPU? What if it's not? What if it's USB? What if you want to send instructions to some remote accelerator somewhere else in your data center? There's a dizzying number of possibilities, and we are not out to create the one true model of acceleration. We are out to standardize our scripting and make our lives bearable while administering these things.

So the majority of the complexity is pushed off to the driver, which has functions such as attach, detach, install, update, et cetera. The agent simply sits on the compute host, takes all of its drivers, and runs each one's detect-accelerator function: hey, do we have this accelerator here? Yes, this machine has a GPU; the agent reports that to the conductor, which puts it into the database, so now you know that this compute host is the one that ended up having the GPU in it. That's great: we can talk to Nova and figure out how to schedule instances there. We'll go into a little more detail about how we communicate with Nova in, I think, two slides from now.

Here we have a breakdown of the responsibilities, which is mostly what I just talked about going through the other diagram. There's not much I didn't already cover, except for monitoring utilization and usage statistics, which is a problem we need to solve for scheduling. The driver will need to implement some sort of monitoring function; otherwise, how do we know when an accelerator is fully utilized? This isn't a problem you have with current hardware pass-through: if you just pass through a PCI device, your utilization is binary, either taken or not. But what if it's the sort of accelerator that you can share easily? At that point you need to monitor something like compute or memory utilization, and it gets slightly more complicated. But as long as a driver provides some sort of monitoring statistic, we don't really have to look too deeply into it.
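Since those driver functions were still working their way through spec review at the time, here is a purely hypothetical sketch of the shape being described; the class and method names are illustrative, not Cyborg's eventual API:

```python
# Hypothetical sketch of a Cyborg driver, based on the functions named in
# the talk (detect/install/attach/detach plus a utilization hook). It wraps
# the same automation an operator would otherwise keep in a playbook.
import subprocess


class GenericGPUDriver(object):
    """Illustrative driver shape; not the project's final interface."""

    def detect_accelerators(self):
        # Report matching devices on this compute host; the agent would call
        # this and forward the result to the conductor for the database.
        out = subprocess.check_output(["lspci", "-nn"]).decode()
        return [line for line in out.splitlines() if "NVIDIA" in line]

    def install(self):
        # Install/enable the binary driver, the step usually scripted in an
        # Ansible playbook today.
        subprocess.check_call(["modprobe", "vfio-pci"])

    def attach(self, instance_uuid, device_address):
        # Hand the device to an instance (e.g. via PCI pass-through).
        raise NotImplementedError

    def detach(self, instance_uuid, device_address):
        raise NotImplementedError

    def utilization(self, device_address):
        # For pass-through devices this is binary: assigned or free.
        # A shareable accelerator would report a real metric here so the
        # scheduler can tell when the device is fully utilized.
        return 0.0
```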
Now, Nova integration is actually the real kicker with Cyborg, and it's where most of the effort will probably end up being invested, because a lot of the responsibilities Cyborg has, at least from the perspective of the end user, need to be performed by Nova. From your perspective, you ask Cyborg for an instance with an accelerator attached, and that means Cyborg has to get Nova to spawn the instance and to do the attachment, because Cyborg doesn't necessarily know where the instance will end up unless we insert ourselves into the scheduling process somehow, which isn't good. Lots of people have had this same problem, hence the Placement API that we've been over a couple of times now.

Currently, Nova's PCI pass-through is very naive. As we talked about before, you have to whitelist individual devices, and you need to bounce Nova services to do it: you go and edit the config file and restart Nova. That isn't very nice, and we'll probably have to extend Nova to let us live-add devices to that whitelist, at the very least. This integration is going to require a good deal of work, unfortunately, but we think it's feasible, and we think it represents the right distribution of responsibilities between projects. Cyborg isn't going to eat your cloud; our goal is not a project that takes over all of the services. But it does mean that Cyborg and Nova will be working together very closely, and it's going to be one of the more difficult parts to develop.

Yeah, so actually there are a few more things I wanted to cover here. This is a preliminary design, like Howard already said. We're just wrapping up the specs phase, and we'd really appreciate it if people would come forward and ask us about problems or use cases we might not have considered, because like I said, this is a standardization effort. If this were just us trying to make our current hardware work, we would create a hacky series of scripts, do all the work ourselves, not contribute upstream, and hence perpetuate the problem. So we need to try to support everybody's use case and make something we can all come together and work on, and hopefully save ourselves a lot of time and effort and prepare OpenStack for a future where people need accelerators to run their regular daily workloads. With that, I think I'll hand it back to Howard for future goals. Thank you.

So, Justin just described and explained the current status of development in Cyborg and what our current architecture is. Here are a few items that, at least we think, should be part of our future work. One, of course, is to finish the code development after we finally manage to freeze all the specs in Pike. The second is to get more contributors, especially from the vendors that build these accelerators, to help us better understand the real use cases they are building their devices for, where the real gaps are, and where we need to pay extra attention. Next is to work with the Scientific Working Group, and maybe other related working groups, to bring the Cyborg-related requirements over so that we can fulfill them. And we are also looking for PoC opportunities so that people can actually test whether Cyborg works. We want to avoid developing something that only works for the unit tests and the smoke tests; we want Cyborg to actually function.
And a little bit further out, in Queens, we want basic support for FPGAs and general-purpose GPUs to be available in Cyborg. Outside the OpenStack community, we also have a couple of items that involve cross-community effort. The first is that Kubernetes very recently kicked off the CDI proposal design, and I think Cyborg could and should be part of that effort, since if you look at the available designs, like CRI or CNI, this interface will basically run in a container. So as long as Cyborg can provide a RESTful API, it should be little problem for Cyborg to provide services for Kubernetes, no matter what language we are written in. The second is OpenCL. I've heard a lot from folks doing GPU work that OpenCL is an important aspect we should look into, but I don't think we have a counterpart in the FPGA world, right? A common programming language. And the last item is, again, the DPACC project: if there is a requirement or a need for integration, we are happy to provide the Cyborg source code and perform the integration into the OPNFV platform.

Okay, the last slide. I want to pay tribute, or just give thanks, to the Cyborg meeting participants and contributors who have not been able to make it to Boston. Justin is the author of the agent and conductor specs, and I'm responsible for the API and the database. Realman is responsible for the Cyborg-Nova interaction spec, so he's kind of our go-to guy whenever anything involves Nova; he has been a big help to the Cyborg project. Russell from Lenovo is the author of the generic driver spec: we want an initial generic driver that does a minimal set of capabilities, a common denominator, and have all the vendor drivers live out of tree. Harm Suleiman, also from Huawei, provided the FPGA proposals. Moshe and Eden from Mellanox did the Nakasa proposal; that was actually the first time the resource-provider school of thought got introduced to the Cyborg team. And Michele, whom I mentioned kicked off the Nomad BoF discussion with me back in Austin, has been a great help on the virtual switch side; they have prototypes and products of ARM-based, accelerator-assisted virtual switch implementations. And then Mike Rook from Nokia; I know Mike from the ETSI world, so we go back years. So really, thanks to all the people who have contributed to the discussion as well as the spec writing, and enabled Cyborg to become a new project from really nothing. If you trace the wiki history, it literally goes from nothing to where we are now. Thank you very much.

I have a few questions before the Q&A. We ended a little early, and I'm going to pretend that was intentional. I have some questions for the audience in particular, because I want to figure out who's interested in this sort of project and how we can better tune our efforts to them. Raise your hand if you are more in research than in industry. Okay. And vice versa? Okay. So how many of you would be interested in a couple of accelerators in your cloud, and how many need an entirely accelerated cloud? Sorry, couple first. Okay. And how many need an entirely accelerated cloud? How many people aren't even sure whether they need accelerators? No one? Yes, I managed to sell everyone on the idea of accelerators.
I'll call this talk a success for that alone. Okay, I had a few more questions in mind, mostly along the lines of: what sorts of technologies are people interested in using acceleration for? Or, vice versa, what sorts of technologies are people using for acceleration? Of those of you who are using GPUs, how many would say you're locked into CUDA or into some NVIDIA technology? Well, okay, that's probably a bad question to ask, because it's like asking somebody to say bad things about themselves; it's not going to work out. So really, we want to make this easy for people to use, and we're going to need some contributors to help make that happen, depending on what use cases people have. So I'm hoping people will come and work upstream and end up on Howard's next acknowledgment slide, and he'll take the time to read out everyone's names, which is great. Let's see. Yeah, and then we have a nice annotated version of the Nova interaction diagram, because it's a lot more complicated than I like to gloss over. If you want to know more details, you can come up and talk to me. Thanks for your time, and if you have questions, please go to the microphone. Oh yeah, and all the technical questions will go to Justin.

So I do have a question. The types of acceleration that you primarily seem to be looking at seem to be resources that don't tend to touch the rest of, say, OpenStack's resources. I work for Netronome, and we make hardware acceleration for networking in OpenStack. So I was wondering whether this project will be able to provision resources that also touch, say, the networking. I mean, yes, we can create solutions that just use simple SR-IOV assignment, but as you said in this talk, and I 100% agree, there are times when you need finer control over the allocation of resources on the card, say in our smart NICs, than just "you have this device or you don't." So I'd be very interested to know whether the way Cyborg is planning on hooking into Nova will also be able to manipulate, say, the assignment of network devices to the VM.

Okay, so this gets into what you can do with drivers, and we've had this discussion with several people who are interested in varying use cases. Maybe some of them just don't want to schedule onto anything that's a NUMA machine, because they have something that really hates NUMA for some reason. Drivers are really just scripts; if you didn't notice, that was sort of the point of my whole explanation: anything you can do with a script can be a Cyborg driver. So you get this weird middle road where it's possible to do pure tuning tasks as drivers. Whether that's a good idea or just another hacky pattern is a subject of some debate, but it's something we are happy to support for the time being, especially if a lot of users are interested in it, because, like I said before, we're trying to bring people together when it comes to accelerating their workloads, not claim that we know what's best for your cloud. As far as modifying OpenStack services themselves, the problem I think you're going to run into there is more along the lines of needing to bounce services.
So let's say you want to try to accelerate Neutron somehow, across your controllers or some part of the cloud itself, and you go and modify it and install this software. Well, now you've got to bounce a bunch of services, and sure, a driver can do that, but when could Cyborg safely run it? You get into the problem where a user could say, "oh yeah, I would love accelerated Neutron, let's turn that on," and their production cloud falls over. So we would need some way to mark drivers as "don't run this while your cloud is running" and have Cyborg raise warnings about that. But once again, this feeds into our cautionary tale: be careful about spec creep. The more people have to implement for these drivers, the more concerned we need to be about people not contributing their stuff upstream. The bigger the gap gets between "I wrote a quick little script to install the stuff I need to make my accelerator work" and "this is a full driver," the worse off we are. We want to keep that gap very small, to the point where I would be happy if people with some strange accelerator contributed just "here's an install script, here's how you get utilization," and we could wrap it up very easily and merge it, versus some constantly feature-creeping driver API that we have to maintain and have to convince people to maintain. So that's why, for at least our first release, we're focusing on what people are doing now, which is PCI pass-through and basic device pass-through, maybe some things like CPU pinning, and then we can try to expand once we've got a better idea of what our capabilities are, because the Nova Placement API is actually changing very frequently right now, and there's certain stuff that's not supported yet that may be supported in the future. So we'll be happy to try to accommodate your use case, but there are a lot of complexities here that we're trying to hide, and I don't want to make everybody run headlong into them.

Let me ask a simplified version of that, because you're right, there could be a whole lot of edge cases and use cases. A simplified version would be: let's say somebody says, okay, I want a hardware-accelerated network interface. Fine, you can assign it via PCI pass-through with Cyborg, but would Nova realize that it also satisfies the network interface connectivity requirement?

That gets into resource providers. Nova has a resource provider API, so we could try to step in and say: hey Nova, you have this base resource that's not a user-defined resource, it's a network, it's a NIC, and we're creating a resource provider that's also a base resource. Whether or not it supports that, I have no idea, but I don't think they'd be opposed to the idea.

Yeah, so I think basically we'll create a resource class to describe your accelerators, with traits that identify, for example, that this is a network accelerator. And back to your first question: as a general direction, the simple answer would be yes, because smart NIC cards will be one of the central use cases for Cyborg.
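As a rough illustration of that resource-class-plus-traits direction, here is a hedged sketch against the Placement REST API of that era, using python-requests; the CUSTOM_* names are hypothetical, and token handling is elided:

```python
# Hedged sketch: report an accelerator to Placement as a custom resource
# class plus a trait, roughly the direction described above.
import uuid
import requests

PLACEMENT = "http://controller:8778"
HEADERS = {"X-Auth-Token": "...",                      # token handling elided
           "OpenStack-API-Version": "placement 1.10"}  # traits need >= 1.6
RP_UUID = str(uuid.uuid4())

# Create the custom resource class and trait (hypothetical names).
requests.post(PLACEMENT + "/resource_classes", headers=HEADERS,
              json={"name": "CUSTOM_ACCELERATOR_SMARTNIC"})
requests.put(PLACEMENT + "/traits/CUSTOM_NETWORK_ACCELERATOR", headers=HEADERS)

# Register the compute host that turned out to have the device.
requests.post(PLACEMENT + "/resource_providers", headers=HEADERS,
              json={"name": "compute-nic-01", "uuid": RP_UUID})

def generation():
    # Placement guards updates with a provider generation we must echo back.
    return requests.get(PLACEMENT + "/resource_providers/" + RP_UUID,
                        headers=HEADERS).json()["generation"]

# Report inventory in the custom class, then tag the provider with the trait
# so the scheduler can tell a network accelerator apart from, say, a GPU.
requests.put(PLACEMENT + "/resource_providers/%s/inventories" % RP_UUID,
             headers=HEADERS,
             json={"resource_provider_generation": generation(),
                   "inventories": {"CUSTOM_ACCELERATOR_SMARTNIC": {"total": 2}}})
requests.put(PLACEMENT + "/resource_providers/%s/traits" % RP_UUID,
             headers=HEADERS,
             json={"resource_provider_generation": generation(),
                   "traits": ["CUSTOM_NETWORK_ACCELERATOR"]})
```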
Actually, I have been working with Nabil from Netronome in ETSI for about two or three years on that subject, and I also want to add a networking acceleration use case. I watched a NetDev conference video recently in which a talk given by an Oracle engineer mentioned that when you run a relational database workload on the cloud, network throughput is absolutely vital. They have done some tuning; one aspect is that they try to use UDP, and then, to accelerate things, they have to offload the UDP checksum onto the NIC. That's where smart NIC cards, like those from Netronome or Mellanox, come in. So I believe this is actually not just for NFV; it's a general, and very good, use case. Thank you.

Thank you. I think the answer, then, is that we'll have to collaborate with you guys, and you can help us out.

So I had one more question, a different question, which is: how are you contemplating, at this point, any notion of live migration within this framework?

Live migration is a fun subject. There are some devices you can't live migrate, but there are some you could. So once again, that's back to something we'd want as a flag in the driver if we wanted to support it, and there was some talk about that earlier today in the Nova Placement API session, which turned sort of into a placement API meeting. The problem is that live migration doesn't always work, and assuming that it works as a fundamental for Cyborg would be a rather large dependency to add, so we're not going to get into that right now. In an ideal world, it would be really great if you could fully schedule a machine with normal compute instances, and then somebody comes along with a GPU instance and you just say: we'll take all these instances, live-migrate them somewhere else, and put the GPU instance here. It would be really nice to be that lackadaisical with people's instances, but I don't think that's going to happen. So until live migration works really well, I'm not optimistic. Okay, fair enough. Thank you.

Looks like we have another question. That flow chart you had, especially, I guess, the backup one with all the little numbers... well, I don't know if it's accurate or not, but one of the questions I had was: if I want something, am I coming to Cyborg, or am I still coming to Nova? Is Cyborg going to drive Nova, so that I make my request to Cyborg, or is it somehow just feeding information to Nova while I still make my request to Nova?

Okay, I'll be entirely honest. I pretended earlier that you make your request to Cyborg, but I'm not sure about that yet, because if you make the request to Nova, you would need some standardized tag, either in the flavor or in your call. We don't want to change the Nova command-line commands, because that would be, I don't know, sort of bad. So you would need to give the information to Nova, and Cyborg would have to sit and watch for it. Is that a good idea? Should I watch every instance boot and ask: ooh, does this one have an accelerator, do I need to insert myself? Yeah, okay, that is another good question. Then we get into hot-plugging PCI devices into instances, because the instance has already started. Should we take the instance that has finished spawning, shut it down, attach a PCI device, and start it back up? Now, I'm aware that hot-plug is supposed to be possible, but I have no idea about the difference between "possible," "you can do it," and "it really works."
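For reference, at the libvirt level the hot-plug mechanism in question looks roughly like this minimal sketch; the instance name and PCI address are hypothetical, and whether it "really works" depends on the guest and the device:

```python
# Minimal sketch of live PCI hot-plug via libvirt, the mechanism under
# discussion here. Instance name and PCI address are hypothetical.
import libvirt

HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-00000042")
# VIR_DOMAIN_AFFECT_LIVE attaches the device to the running guest, with no
# stop/start cycle; success still depends on guest OS and device support.
dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
```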
So if somebody could tell me that it really works, I would be happy to go that route. This is where we're missing some research, and these are gaps that are going to need to be filled as we implement.

So we are currently discussing this in the Cyborg-Nova interaction spec, and one of the directions we are looking at is that the placement and resource provider work is vital here. One way you could do this is that, through Cyborg, you prepare all the accelerators you need, and then the Cyborg conductor notifies the Placement API. Well, first it populates the Cyborg information to the Nova compute manager, so that the compute manager's inventory carries Cyborg's information with all the resource classes I just mentioned. Then the resource tracker notifies the Placement API: hey, this is all the resource I've got, plus the accelerator ones. Then, when a user calls the Nova API in the standard way to spawn instances, the scheduling decisions made through the Placement API will be based on the regular resources plus the accelerator resources. So this is just one of the directions we are looking at, but feel free to help review the spec and comment on it.

Obviously, a lot of things have already been written to talk to the Nova APIs, and you don't want all of those things to either not support accelerators or have to be rewritten to talk to Cyborg APIs, especially as, like you're saying, accelerators become more and more of a thing in the future. You don't want Cyborg to become the bottleneck.

And I agree with you wholeheartedly, but these things that are already coded to talk to Nova are not going to magically figure out how to add tags either. There's work either way.

So I'm sorry to sort of interrupt, but the folks back here have been working all day and we're over on time, so you can come up and talk to me personally once we get this all shut down and they can leave. Sound good? Thank you very much.