Okay, I guess we can get started, since people have decided either to come in or stay out. Thanks a lot for staying this late; there was already one presentation after lunch and I'm the second one on the roster, so please hang tight. I came from Europe, so I was wondering whether it would have been better to keep the morning slot for the Europeans, so I wouldn't fall asleep myself up here. My topic today is enhancing hypervisor and cloud solutions using embedded Linux, and what's really behind that. I'm covering a lot of the same topics we have been talking about today, but maybe from a bit of a different angle, because I come from an embedded background. I'll try to reflect on the same issues that have been discussed today, going over the cloud components and the different solutions and problems in those areas, but with a bit more of a traditional embedded slant. This is what I'll cover; it's a long dump of the whole abstract. The first chapter is about the components we've chosen for our cloud solution architecture, and I'll cover the main highlights of those. Then I'll go into the problems we see in this field. It's really a lot about virtualization and hypervisor capabilities, but also about the concepts discussed earlier today: what are the market needs, and how can we solve them? We'll cover different areas, maybe going a bit deeper than the earlier talks on things like logging and tracing of full-stack solutions. In detail the agenda looks like this: first I cover the solution stack.
We have a couple of main architecture slides, then QEMU and KVM. Gladly, I don't have that much on the actual hypervisor solution, QEMU/KVM, because that's been covered today. Going last, after everybody else has given their talk, gives a different flavor to what I'm going to say here, so I'll reflect a bit on what the earlier presenters said and maybe skip some things that have already been beaten down a couple of times today. We have a couple of what I like to think of as real hypervisor solutions, KVM and Linux containers. Then we have a couple of other key aspects that our customers in our space require. Finally, the last two chapters are about further work on the topic: how do we see all of this going forward, and how could we collaborate to make it better? So, choosing the right architecture. You have to choose the right components, otherwise you end up with a solution stack that is not optimal. This guy is for real, by the way; that picture is actually from a Friday night out with the guys. But it's quite important. Our hypervisor solution stack looks like this. We have chosen many of the same components that are becoming de facto standards in our field, both on the enterprise side and in what I call the embedded cloud. Maybe a few words about that concept: in my view, an embedded cloud is not necessarily something that runs on off-the-shelf hardware in data centers, or something you can run on top of your desktop. It is more embedded in nature, customized for a particular purpose. It's not always public.
It can be hybrid, or a cloud closed to a particular customer ecosystem. Some of the things I'll talk about here stay at a fairly high level, without going into the exact details of how we implemented certain things, because there are so many ways of doing this; some topics we'll dig into a bit deeper, and some we'll keep at a higher level. We really have two different ideas for running a hypervisor. One is KVM; everybody knows KVM, it's a standard full-fledged hypervisor. The other one, which we still promote, is Linux containers: MontaVista has been one of the early adopters, and we have used LXC-based systems for a long time, so we feel they still have a place in this field as well. LXC provides some advantages that KVM does not, and in the following slides we'll go into that a bit deeper. Then we have additional components inside the OS layer and base services box: Open vSwitch and OpenFlow, two things related to the SDN concept and very much to the cloud, which we'll also cover in the following slides. We feel these components are key for creating any kind of cloud based on open-source components, including the embedded-natured clouds we are building in these slides. Carrier grade services also come from our MontaVista background: we feel the base system must contain those as well. It needs to be highly available, have good tracing and logging facilities, and be secure. Related to this is the CGL aspect: Carrier Grade Linux.
So typically, when our software gets deployed, it needs to adhere to this specification, which basically mandates all the previously mentioned aspects of a base operating system. Then, inside the hypervisors, we run the guest containers and guest VMs. In some cases you run parts of OpenStack inside a VM; in some cases you run it on top of the host. The solution stack here is not meant to say you have to build it exactly like this; it's more conceptual, showing which components we can mix and match to build solutions. Then the box on the far right, management and provisioning software and services: in the earlier presentations we have talked about a lot of projects and components that sit above OpenStack, even higher in the stack. What I talk about with this box today is how we can combine all of those together with the stuff coming from the bottom, from the base operating system and all of these layers, so we can create a solution that best fits a particular customer's purpose. It might not be the same for every customer in all cases, but we can combine this base solution with productized services to form something like one size fits all when you go up to a higher level. Sorry about that; I have to move the mouse here a bit. Okay, starting with QEMU/KVM. I will not beat the dead horse too much on this; it's been discussed today.
So it's a hypervisor integrated into the Linux kernel. What maybe has not been said that much today, though it was reflected in some of the earlier presentations, is that based on our data, about 80 to 85 percent of companies starting with or using virtualization nowadays are either aiming to use KVM or already using it as their primary or secondary hypervisor. So it's going to be quite important going forward. Some highlights of what KVM provides: on x86 it's currently best supported in Linux; KVM patches for ARM are being developed and are more or less available. I think mainline now has good support for ARM in some sub-architectures, but it's really the later part of this year when ARM will kick off with KVM. QEMU is a component of KVM in the sense that when I talk about QEMU and KVM, it's kind of the same thing: KVM often means the combined hypervisor, the sum of its parts, QEMU and KVM. Some of the highlights here actually come from the QEMU part and some from the KVM side. What QEMU provides, for example, is live migration: in the latest QEMU releases you have very good support for bringing a virtual machine over to another host, and doing it on the fly, which is what live migration means. The second hypervisor solution, which is not really a hypervisor, is Linux containers. This is based on the kernel namespaces that Jon Corbet also referred to yesterday, which have now added user namespaces, providing holes to security.
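To make the live-migration mechanics mentioned a moment ago more concrete: QEMU-style pre-copy migration transfers memory pages in rounds while the guest keeps running and dirtying pages, and only pauses the guest when the remaining dirty set fits the downtime budget. The sketch below is a deliberately simplified toy model of that convergence behavior; the numbers and the stopping rule are illustrative, not QEMU's actual algorithm.

```python
def precopy_rounds(total_pages, dirty_rate, pages_per_round,
                   stop_threshold, max_rounds=100):
    """Toy model of pre-copy live migration.

    Each round transfers up to pages_per_round dirty pages while the
    still-running guest re-dirties dirty_rate pages.  Migration switches
    to the final stop-and-copy phase once the dirty set is small enough
    (stop_threshold) to move within the allowed downtime.  Returns the
    number of rounds needed, or None if the guest dirties memory faster
    than we can copy it, in which case migration never converges.
    """
    dirty = total_pages
    for rounds in range(1, max_rounds + 1):
        dirty = min(total_pages, max(dirty - pages_per_round, 0) + dirty_rate)
        if dirty <= stop_threshold:
            return rounds
    return None
```

With a 1000-page guest dirtying 50 pages per round, a 200-page-per-round link, and a 100-page downtime budget, the model converges in a handful of rounds; raise the dirty rate above the copy rate and it never does, which is exactly why very busy guests are hard to migrate live.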
We'll come back to security in the later slides. What containers really do is constrain a particular container so that it is invisible to the other containers, while it's still running inside the same system, on the same kernel. It doesn't replicate any of that stuff, and to some degree you can also share other parts of the system between your containers, so it's much more efficient. How we see this being used, in some cases, is either as plug-in components that extend your cloud hypervisor layer to some degree, or, as the market sometimes needs, as a kind of lightweight hypervisor that creates containers instead of guest VMs. Those have much better performance; they basically run at native speed, if you round the performance numbers a bit. Then Open vSwitch, the third key component in the solution. It's an open virtual switching component. What it does, of course, is switch traffic back and forth between virtual machines without using the Linux bridge. What's really interesting to me is that it provides much better facilities for live migration. When you're using a cloud hypervisor-plus-guest solution with Open vSwitch, you can use the Open vSwitch abstraction to abstract out your network configuration, so when you move a guest virtual machine to another host, you can more easily match that guest VM's network configuration. If instead you map network interfaces directly to the host-side APIs, further down the stack, you will have to jump through a couple of hoops before you can actually move the virtual machine over, because the environment needs to match exactly on the other side. Another interesting point about Open vSwitch.
It supports the OpenFlow standard and protocol, and we see this as an important part of the SDN movement. Being able to control your switch using OpenFlow, and perhaps push that control down to the actual physical layer, is an advantage of Open vSwitch. It also performs well: in some of the studies we've seen, Open vSwitch surpasses Linux bridge performance in some cases, and as the previous presentation showed, it at least matches it, so you don't lose performance by using Open vSwitch. It also has bandwidth control: you can control the bandwidth of the ports you have allocated to different virtual machines, to create resource-control and quality-of-service scenarios. Then OpenFlow. In some cases I see OpenFlow being treated as equal to software-defined networking, but really the idea behind it is that you have a controller and a target. The controller is a centralized place where you manage your flow tables and all the configuration, and you use the OpenFlow APIs to stream that control down to the OpenFlow targets, which are either real physical switches or software components implementing the OpenFlow API. This is something interesting for us at MontaVista beyond the cloud movement: coming from the embedded background, OpenFlow could simply be a way for us to provide an OpenFlow API on a legacy, traditional embedded device, so that you would have an OpenFlow-compatible device. But in the cloud, it's really the key thing, hot at the moment, for being able to scale up your services. The traditional static configuration of routes and flows, as we saw in the previous presentations with NFV alongside SDN, is giving way; that's going to be the future that allows us to scale our networking infrastructure to support the new amounts of data. Another key advantage of the OpenFlow movement now being standardized is that it is actually driven by the end users, like Google and Facebook, and not by the vendors of hardware and software. This is what I saw, for example, on the automotive side: the GENIVI consortium was, in my mind, partly successful because the OEMs were also part of the standardization. The people who bring in the money get to call the shots; that was the success of such a consortium. Then OpenStack; I think I saw this picture a couple of times today already. What is OpenStack? I always use the words "cloud framework", but maybe "cloud OS" or some other term could be used for it. Looked at from our embedded perspective, OpenStack sits far up there, and it provides APIs downward that we need to make use of in our devices and our solutions to be able to create a cloud solution. What it really is, is a set of APIs, as was just described.
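The controller/target split described above comes down to flow tables: each entry carries a priority, a set of match fields, and an action, and the switch applies the highest-priority matching entry. Here is a minimal sketch of that lookup; the field names and action strings are made up for illustration, not the OpenFlow wire format.

```python
def lookup(flow_table, packet):
    """Return the action of the highest-priority entry whose match
    fields all equal the corresponding packet fields; a table miss
    falls back to 'drop' (a real switch might punt to the controller
    instead)."""
    best = None
    for entry in flow_table:
        if all(packet.get(f) == v for f, v in entry["match"].items()):
            if best is None or entry["priority"] > best["priority"]:
                best = entry
    return best["action"] if best else "drop"

# Hypothetical table: send HTTP to the controller, switch the rest.
table = [
    {"priority": 10,
     "match": {"dst_mac": "aa:bb:cc:00:00:01"},
     "action": "output:2"},
    {"priority": 100,
     "match": {"dst_mac": "aa:bb:cc:00:00:01", "tcp_dport": 80},
     "action": "output:controller"},
]
```

A packet to that MAC on TCP port 80 hits the more specific high-priority entry, while SSH traffic to the same MAC falls through to the lower-priority port-2 rule, and anything else misses the table entirely.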
The key integration points, as I see them, are three major components. First, the Nova compute plugin basically needs to be implemented by any cloud solution making use of OpenStack. Then you have Quantum networking, which plays well together with Open vSwitch and the networking facilities inside your solution. Then you have the Keystone identity plugin, which I find interesting, maybe because my background is security. You can implement interesting things using this kind of authentication mechanism: coming from the hardware end, you have TPMs and all kinds of interesting devices that you can put to use in a more custom cloud-provider scenario. The Keystone APIs can be routed down to your system, where you can then make use of these more hardware-related devices. Then there are other plugins; I won't go into those much. They were just described, and they are basically around storage handling and the provisioning of images. One that is really interesting now is a plugin called Savanna, a Hadoop service. When you look at how far OpenStack has come in a year: a year ago you would talk about SDN and big data and cloud and all these hypervisor things separately, especially in relation to the embedded field. This has really been moving quickly, and now that OpenStack is becoming the middle point for all of this, you can integrate your big data and big-data APIs, Hadoop as well, using the same framework.
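As a sketch of the hardware-backed identity idea above: a Keystone-style service could sign tokens with a secret that never leaves the device, for example a key sealed in a TPM. The snippet below fakes that with an in-memory secret; the names and token format are entirely hypothetical, and it only illustrates the HMAC-signed-token pattern, not Keystone's real API.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical stand-in for a key that would be sealed inside a TPM.
DEVICE_SECRET = b"tpm-sealed-key-placeholder"

def issue_token(user, secret=DEVICE_SECRET):
    """Sign a small JSON payload with the device-held secret."""
    payload = json.dumps({"user": user}).encode()
    mac = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + mac

def validate_token(token, secret=DEVICE_SECRET):
    """Return the user name if the signature checks out, else None."""
    body, mac = token.rsplit(".", 1)
    payload = base64.b64decode(body)
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return None
    return json.loads(payload)["user"]
```

Any tampering with the token body or signature makes validation fail, which is the property you want when the signing key lives in hardware the tenant cannot read.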
It's really advancing rapidly, and it's a key service in our solution stack today; that's OpenStack. But some such component, whether it's CloudStack or any of the others available out there, is in my mind required to create a cloud framework that you can actually maintain and use in the field. If you have to rely on a lot of different components, you end up with a solution like the guy riding his bike earlier here: there are so many components. I've seen it today across something like four or five presentations. If you start building so that one block does this thing and another block does that thing, it's not going to work in the long run. You need something higher-level, with manageability built into the system. Okay, so that was the first part, about the key components and architecture. The idea was to describe which components we have chosen today and what their place is in our embedded cloud solution. This next part aims to highlight the problem areas: mostly problems, maybe a couple of solutions as well. We'll go through the key areas I think we need to solve to be able to realistically offer this kind of embedded cloud, and what we have been doing, and are doing, about those problems.
This is also a picture I often like to use in relation to hypervisors: when you add levels of abstraction, it usually doesn't make your system go faster. It's as if the guys in the picture had built that kind of cloud solution. So how do we get past that, especially for real time? The picture is apt because with a guest operating system you typically get no better real-time behavior than you would running your real-time apps directly in the hosting hypervisor. This is one of the key things for us: with the MontaVista background, it's real time first, and then we go to the customers. And it's true, we have these demands here as well. Real time inside the guests is an important factor in some cases; we'll see a bit more about what exactly that means in the coming slides. Then guest network throughput: if you have to choose the one thing you need to have in your hypervisor, it's probably throughput inside your guests, achieved by different means; we'll come back to that as well. And memory access speed, which affects everything. Memory access speed is hard to pin down to a single thing, in terms of what it means and what you need to do about it. Coming from our background again: in today's hardware, whether it's a uniform-memory-access or a NUMA system, you have somewhat different means of dealing with it. In the guest operating system it typically comes down to creating some sort of affinity between the memory, data locality, and its accessors. In a NUMA system, as Jon Corbet again described yesterday, you have different mechanisms.
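Those affinity mechanisms are available straight from user space. The sketch below, which is Linux-only since it relies on `os.sched_setaffinity`/`os.sched_getaffinity`, pins the calling process to a single CPU, which is essentially what a host does when it binds a vCPU thread to one core to keep its working set local.

```python
import os  # os.sched_setaffinity / os.sched_getaffinity are Linux-only

def pin_to_cpu(cpu):
    """Restrict the calling process to one CPU and return the previous
    affinity mask so the caller can undo the pinning later."""
    previous = os.sched_getaffinity(0)
    os.sched_setaffinity(0, {cpu})
    return previous

allowed = os.sched_getaffinity(0)
old_mask = pin_to_cpu(min(allowed))       # pin to the lowest allowed CPU
assert os.sched_getaffinity(0) == {min(allowed)}
os.sched_setaffinity(0, old_mask)         # restore the original mask
```

For guests, the same idea is usually expressed through the hypervisor's vCPU-pinning configuration rather than called inside the guest; this only shows the underlying host-side mechanism.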
To some degree you can even let the scheduler do that for you. But again, with an embedded focus and a targeted application, you can do more than is possible in the general case. When you go for the scenario where you know what the hardware is and how it's being used, you can create setups using locality of access, the affinity mechanisms in the kernel, and custom handling of the memory-locality functionality, and enhance memory access speed to some degree. When your memory accesses are faster, you usually get a performance improvement in almost every part of the system; especially data-intensive applications benefit from this enhancement. Going further, a couple of other use cases. Multi-operating-system support: by that I don't mean only Windows 7, Windows 8, Windows XP. I mean things coming from the legacy of our customers, things that are cryptic and strange: real-time operating systems, Linux, Windows of course, ancient versions of Linux. You usually can't throw the legacy away, and it's very much a heterogeneous environment we move in. Then security and isolation, in the hypervisor, inside the guests, and across the whole cloud; there were good points on that in the previous presentation.
I will also touch on some of the same things, maybe from slightly different aspects, but security and isolation will become more and more important now that this cloud movement is taking off. You have different things to address regarding security than in a traditional operating system. And forward portability: in addition to the multi-OS support, the forward-portability aspect has maybe been touched on a bit, but I feel that in some verticals you have somewhat different problems here. You need to be able to assert that you can move across different types of hardware and different architectures, also in the application space. And when you talk about guest mechanisms for bypassing network traffic, for example, you cannot just use one interface and then, when you go to next-generation hardware, which might mean going from MIPS to ARM, or x86 to ARM, or something else, rewrite the entire software inside the guest to take advantage of that particular hardware. You would like to have an abstraction API here. The final part of this: custom hardware adaptation is something where, in the embedded cloud, we can again take advantage of the legacy of the embedded field to make a cloud solution work better in general. It's hard to put an exact definition around this, but by being able to adapt, to use the real-time capabilities, the small footprint and so on that embedded Linux brings to the field, you can create a more competitive solution, in certain scenarios at least. And full-stack management interfaces: again, my opinion has not changed, even though we discussed this in several of the presentations today.
New things were brought up, and I learned new things today, but my opinion still hasn't changed: I think full-stack management interfaces, and I actually could draw that box a bit higher, are one of the key problem areas, because there are so many solutions and it's the thing that sits on top of everything else. To make your cloud solution competitive you need to solve all of these issues, and we feel the embedded Linux movement can help there too. You can use our expertise to look at these things from the bottom up. The people who work with IT houses and enterprise solutions can look from above at what the customers, like a big government organization, really need; maybe we can help by taking a look from the bottom up. Okay, so the next slides go into most of these in a bit more detail, covering some cases. One example, going a bit deeper into the real-time scenario. The problem is this: typical real time means you have an interrupt that you have to react to within a certain latency, and these latencies are often pretty small; depending on the device and the application, you have latencies of various magnitudes. The problem with a hypervisor in between comes from the fact that the hypervisor usually has to act first, when it propagates the interrupt into the guest. To serve an interrupt in guest user space, you have to go through the host kernel space at least, and then the guest kernel space, before you can schedule the thread that actually serves the interrupt. So you don't only have the worst-case response of the hypervisor host; you also have it in the guest, and you have to put those two worst cases together.
You get a really bad worst case; in some situations we've seen it go totally off the chart. You can do numerous things about it. One simple thing is to bind your virtual CPUs to particular CPUs so that you run one guest per CPU, which helps in many cases; a simple solution. What we are really doing, or have done, is integrate the PREEMPT_RT patches into our cloud solution. That also sounds like a simple solution when you say it like that, and although the PREEMPT_RT patch set is quite big, it provides very good performance inside the hypervisor. Combining that with some of the advances we'll talk about a bit later, with the improved handling of IPI interrupts we're working on, and with the upcoming kernel work that removes the timer tick from cores (for which you do have to use affinity or binding), this will give us much better real-time response, on a totally different scale. By the way, we have actually run benchmarks that show these measures do work; though again, in the embedded world you can't really compare apples to apples, or even oranges to mandarins, because every case is so different. Network throughput: the base problem area here is that when you start doing networking inside your guest, it looks the same to your application; it uses the guest kernel mechanisms to put traffic into the virtual interface, which then appears in the host virtual interface or host backend, and this gets mapped and sent over to an actual network interface card. What we really need to do here is eliminate those context transitions. As a high-level solution overview: we push traffic directly to the kernel, and often, as we'll come to later, you push traffic directly from the guest to the actual device, using solutions like netmap or DPDK.
Exposing these inside the guests is basically what you have to do for the user-plane traffic to be able to meet your needs and requirements. We're also enhancing quality-of-service capabilities in the lower layers of the system. We can use Open vSwitch and the higher-level systems to some degree, but we also need to enhance things underneath, so that you can get at the low-level packet handling and create quality of service underneath the guest kernel. Then, digging a bit deeper on multi-OS support. What we see is that many of the customers or players in our vertical have significant legacy. Multiple operating systems can usually be accommodated by some kind of parallelization, but operating systems that are proprietary to these players, or virtualized real-time operating systems like VxWorks, are not trivial to manage. An embedded cloud needs to support these operating systems, so we often need to do customization, maybe in the hypervisor and especially on the guest side, to get those running nicely with our solution. The issues we face: IO performance is one, of course; those operating systems are used to running on bare metal, with very good access to the actual hardware. The good thing is that the usual use case is consolidating a legacy operating system onto new hardware, and the new hardware can perform much better, so it often ends up a non-issue. The real issue is real time, because some of those operating systems are used to something like nanosecond or one-microsecond response times, since they run on exactly the bare metal. There's no real OS.
It's like a main loop. Some of those operating systems have been designed so that they will fail dramatically if you go over certain boundaries in your response time; that's where the real-time issue we covered earlier comes into play. Then security and isolation, as I mentioned earlier: we will face new kinds of threats and new attacks in the cloud domain. With the multi-tenancy that was covered earlier, you run multiple companies inside the same hosting hypervisor. Say one of those companies is, you know, Amazon billing, a huge company, and another one is a garage shop, Jack and the dog or something. These guys run out of money, and if they find a vulnerability inside their guest virtualization or something, they may be able to breach the hypervisor and run as root on it, or gain other privileged access. If they can access the data, or mount a denial of service, it's going to have drastic results. Actually, I don't really have visibility into how companies like Amazon deal with this, but you would think that when you go from physically separated hosts and put tenants into a shared cloud solution, you have to have pretty strong guarantees for security, because these kinds of issues will come up. So how do we get past those? We need to make sure that our hosting operating system is, of course, managed against the public vulnerabilities, the CVEs. We need to run secure processes. When the end user is launching their device, they need to take the secure approach from the start: create security use cases and address these things explicitly. I only have SELinux and sVirt as one bullet there; that was nicely covered in the previous talks, but I like it too. I have talked a lot about SELinux at these kinds of conferences, and from our perspective, at least in the embedded field, it has been one of those things that never quite took off in a large way; it's been floating around, probably used more in the IT world. But we feel there are certain cases where it shines: a couple of years ago I was talking about SELinux in the automotive space, where you can use SELinux to harden containers. And now, together with sVirt, I feel you have a real use case where you can get good benefits from this kind of mandatory access control. And it's not only about exploits: the data also needs to be invisible, so that even if someone can't breach you, they can't see your packet traffic either. Looking at containers, this is something to watch carefully, especially since Jon Corbet mentioned yesterday the root-access issue with user namespaces. If somebody chooses containers for this kind of cloud system, you need to harden your code, do code reviews, and have security policies. I like this slide a lot; it's really simple, but if you start thinking about security from the beginning, it really can have an effect. Of course, a sign prohibiting firearms doesn't by itself stop anything.
But if you start designing security in from the beginning, telling people explicitly what needs to be secure, it becomes explicit instead of implicit. Actually, in one of the customer meetings I had recently, we had a discussion about security, and the answer was "yeah, of course, it needs to be secure." The strong approach is to say explicitly: my system needs to adhere to this, it needs to be secure in this and this way. Otherwise you rely on people's perception of security, assuming it will just be there. The final part of this section: full-stack management interfaces. What I mean by this is basically the APIs OpenStack provides: implementing the plugins at the bottom, and maybe extending the APIs that OpenStack provides now, collaborating with the OpenStack project. In the Freescale presentation earlier, it was really interesting that they have actually created an OpenStack plugin to allow NFV applications to be created using OpenStack; that is exactly what I'm talking about here, to some degree.
I also feel that these full-stack management interfaces are something that comes from within the kernel of the hosting hypervisor, so that you can reach both the higher-level and the lower-level things. In my mind you have a hierarchy, a bit like Google Maps: you start by seeing the whole world, then you zoom into one hypervisor host component, and in there you see individual virtual machines. Inside a virtual machine you see different applications; inside an application you see libraries and virtual memory maps; and then you can use virtual machine introspection and that type of tooling to see exactly the bytes and bits moving inside the virtual machine.

What I feel is that the more abstraction layers you build into the software, the more debugging problems you tend to have between those layers. So when you use a debugging tool, having a monitoring and management interface that goes across layers is very important. It can be different solutions, but you need something to bind it all together. libvirt is one such thing: libvirt does a lot, and of course it is open, so you can extend it, which is what we are also doing.

We are working on all of these, and there is a lot going on around this. As was discussed many times today, it is hard to make a roadmap because everything moves so quickly. How can you create a roadmap that goes four years ahead when in one year things like OpenStack, SDN, and big data just explode, with four times more contributors and four times more companies around OpenStack? A couple of years ago there basically wasn't an OpenStack.

Okay, on to further work on the topic.
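The cross-layer "zoom" hierarchy described above (host → virtual machine → application → libraries) can be sketched as a single drill-down query over a nested model. This is an illustrative toy, not a real libvirt or introspection API; all names are invented:

```python
# Toy full-stack model: each layer nests inside the previous one, and one
# query interface drills down through every layer along a path.
STACK = {
    "host0": {                             # hypervisor host
        "vm1": {                           # virtual machine
            "app_a": ["libc", "libssl"],   # application -> its libraries
        },
        "vm2": {
            "app_b": ["libc"],
        },
    },
}

def inspect(path):
    """Drill down the stack along a path like ('host0', 'vm1', 'app_a')."""
    node = STACK
    for key in path:
        node = node[key]
    return node

assert inspect(("host0", "vm1", "app_a")) == ["libc", "libssl"]
assert sorted(inspect(("host0",))) == ["vm1", "vm2"]
```

The design point is that one interface spans every layer, instead of a separate, disconnected tool per layer, which is where the debugging pain between layers tends to come from.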
I only have a couple of bullets here, and then we can talk about it; if you have totally differing views, or something else you think is good, let's go over that.

The way forward, and I'll give way to the penguins. That's the nice picture that has been going around a lot at this conference. The message yesterday was that Linux is taking over the world, and coming to these events every year it really does seem to be going that way: we are taking over big data and cloud now, and KVM is coming along as well.

There are a few different things here. Regarding real time, this kind of scheduling problem is actually quite difficult to solve in a general way across the guests and the hosting hypervisor. There is some effort being made on cooperative scheduling, both for real time and for the use of resources. Cooperative scheduling basically means that you expose some of the internals of the guest operating system so that everything can be scheduled at a much finer granularity; you essentially para-virtualize the scheduling inside the guest domain. Instead of lifting one whole virtual machine onto a pedestal, you can create domains within that virtual machine and say: there is one application that maybe once a year needs to act very quickly, but when it does, it must get priority regardless of everything else, while the other parts of that virtual machine aren't doing anything that important. How can you solve that scenario?
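That scenario, a normally idle application inside a guest that must win the CPU the instant it becomes ready, can be sketched as a toy priority pick. Real para-virtualized scheduling would pass such hints from guest to host; this sketch only simulates that idea, and the task names are invented:

```python
# Toy scheduler pick: tasks inside a guest expose a priority hint to the
# host scheduler, instead of the host seeing the guest as one opaque VM.
def pick_next(tasks):
    """Choose the ready task with the highest priority (larger = more urgent)."""
    ready = [t for t in tasks if t["ready"]]
    return max(ready, key=lambda t: t["priority"])["name"] if ready else None

tasks = [
    {"name": "batch_job",   "priority": 1,  "ready": True},
    {"name": "airbag_ctrl", "priority": 99, "ready": False},  # rarely runs
]
assert pick_next(tasks) == "batch_job"      # urgent task not ready yet

tasks[1]["ready"] = True                    # the rare event fires
assert pick_next(tasks) == "airbag_ctrl"    # it now preempts everything else
```

Without the hint, the host would only see one virtual machine and could not know that this particular moment inside it is latency-critical.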
Then there is the adaptation of applications. That is something ongoing, with a lot of different movements and talk around it, and there are a couple of options we are thinking about. One is Open Event Machine, sort of an API for abstracting out things like a lightweight executive: a simple executive handling lower-level things that access the hardware accelerators directly. If you can define an API to access those in a hardware-independent, operating-system-independent way, that is a major advantage.

NFV (maybe it would be better as its own bullet) is a really new thing that seems to be ramping up very quickly, and we need to address it in the embedded space as well, for performance and manageability.

OpenStack: even as I wrote this slide it seemed to be growing old; every hour there is more stuff you could put in here. The full-stack management interface is really what I mean by this bullet: we need to wrap our heads around how the lower level and the higher level can play together.

Then a couple of really concrete things we are doing around KVM. Coming from the fast-boot space at MontaVista, with a legacy in automotive fast boot and similar things, we are thinking about simple enhancements like virtual BIOS improvements: getting your guests to boot up and shut down faster, removing some of the extra baggage each KVM guest needs to carry, and optimizing it for a particular purpose.

And resource allocation guarantees: when you stop and start virtual machines, you sometimes run into scenarios where your resources are no longer available. You need to create certain kinds of caged environments where you know exactly what you have available and what you don't. Using containers you can also make use of the IOMMU; it doesn't have to be exclusive to KVM.
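The resource allocation guarantee can be sketched as a simple admission check: a virtual machine only starts if its reservation still fits inside the host's "cage". This is an illustrative toy, not a real KVM or OpenStack scheduler, and the numbers are arbitrary:

```python
# Toy admission control: track hard reservations so that starting a VM can
# never oversubscribe the resources promised to the ones already running.
class Cage:
    def __init__(self, total_mem_mb):
        self.total = total_mem_mb
        self.reserved = 0

    def try_start(self, vm_mem_mb):
        """Admit the VM only if its reservation still fits; no overcommit."""
        if self.reserved + vm_mem_mb > self.total:
            return False            # guarantee would be broken -> refuse
        self.reserved += vm_mem_mb
        return True

cage = Cage(total_mem_mb=4096)
assert cage.try_start(3072)        # first VM fits
assert not cage.try_start(2048)    # second would oversubscribe -> rejected
assert cage.try_start(1024)        # an exact fit is still allowed
```

The refusal up front is the whole point: it is better to fail a start cleanly than to discover at runtime that a resource you already promised to another guest is gone.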
You can further increase this kind of isolation by using some of these virtualization hardware features within the same kernel.

Yep, thank you. So, basically, what I feel is that there is a lot of discussion ongoing and a lot of solutions. I presented one solution stack that we have been thinking about here at MontaVista, coming from our background with our customers and their needs. It might look a bit different depending on whom you ask about what kind of solution you should build.

Well, then: questions and comments. Are we looking at the right things? Are we looking at the wrong things? Questions about some of the things I