Okay, I'll start, because we are two minutes past already. I am Anas Nashif from the Open Source Technology Center at Intel. I will be giving a short update, since the last time we talked about Zephyr was in San Diego, at the same conference, in the US. And Benjamin Walsh from Wind River will be giving an update — a status, an overview — of the unified kernel, after my quick intro into what has happened since we talked about Zephyr last time. So I'll start with the update, and for some of you who are hearing about Zephyr for the first time: it's a microcontroller operating system with a very small memory footprint — basically we can run on systems with 8K of RAM. It's open source under the Apache 2.0 license, hosted by the Linux Foundation, and it supports multiple architectures. The project itself was released early this year and was launched in Nuremberg at Embedded World. It's based on an operating system that existed already, so it's not something that has been done from scratch. We currently have four platinum members in the project: NXP, Linaro, Intel and Synopsys. At the conference here there are a lot of talks about Zephyr, so I'm not going to go into the details of the subsystems: obviously Ben will be talking about the kernel, and we have talks about Bluetooth, about the IP stack, about power management. There are also some forward-looking proposals from Linaro to change how we do things in Zephyr in terms of configuration. So a lot of these topics are covered in the different talks.
So I will basically talk in general about what we have right now in Zephyr, what we released in Zephyr 1.5 — which came out one or two months ago — and what will be coming in the next few releases. In terms of a basic introduction: Zephyr has the goal of providing an OS that runs on MCUs, for wearables and IoT, where the cost of silicon is minimal. That's why footprint is one of the main things that we always look at and monitor. It's very important for us to stay with the goals that we set when we started the project. So we continue adding features, we continue supporting additional architectures, but the one constant goal is memory footprint: we want to be able to run on the smallest of devices. That's why, from the design point of view, we made Zephyr highly configurable — you should be able to configure in or out any feature that you have in the kernel. The kernel as we have it today operates in two modes: there's the nano kernel and the micro kernel. There is no user space and there are no dynamic runtimes. This is typical in this area: you just build an application in a single address space, and that's also what helps with performance and with footprint. Memory and resources are allocated statically, so you usually know exactly how much memory you are consuming before you build. When you build an application you know exactly where it will fit and how it will be deployed on the system. For example, we don't do any dynamic interrupt handling — it was a feature that we had, but we removed it; we do everything statically. Device drivers are defined statically as well. So you know exactly what you have when you build a system. It's cross-architecture: x86, ARM, ARC, and recently we added support for the Nios II, which is an FPGA soft core from Altera.
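As a concrete illustration of that configure-in-or-out approach: a Zephyr application carries a small Kconfig fragment that selects what goes into the image. The option names below are only illustrative — check the Kconfig tree of your Zephyr version for the real ones:

```
# Hypothetical prj.conf fragment -- option names are illustrative,
# not guaranteed to match any particular Zephyr release.
CONFIG_MICROKERNEL=y          # pre-1.6: micro kernel on top of the nano kernel
CONFIG_BLUETOOTH=n            # whole subsystems can be configured out...
CONFIG_NETWORKING=n           # ...to shrink the image for small devices
CONFIG_MAIN_STACK_SIZE=512    # stacks are sized statically, up front
```

Because everything is resolved at build time, the resulting binary contains only what was configured in, which is what makes the footprint predictable before deployment.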
So this is how we have it right now, and it is actually not very different from what we had in April when we presented in San Diego. But there are major changes happening right now. To start with the kernel — and that's what Ben will be talking about — we are removing the distinction between micro and nano kernel, and we are creating one kernel with streamlined and enhanced APIs that supports both cooperative and preemptive scheduling; Ben will talk about that in detail just after my intro. The other thing that is happening, and it was mentioned in some other talks earlier, is that we are working on a new IP stack. If you are familiar with Zephyr, we started with an IP stack that came from Contiki, which we adapted for use in Zephyr. There is a talk about this, I think tomorrow or on Wednesday, going into the details of why we are doing this, the challenges we had with the IP stack we have right now, and why we are creating a native IP stack that is specific to Zephyr. Both of these combined are obviously huge in terms of development effort and code size, and both are happening at the same time, so we are trying to make this happen for the next release. And when we talk about releases: we started with monthly releases; early in June we moved to a quarterly release cycle. We released 1.5 at the end of August, we are going to release 1.6 at the end of November, and this will continue over the next year, hopefully. So every three months we will have a release, and we are trying to follow, in a loose way, the Linux kernel development model, with merge windows and a stabilization period.
It was tough with the monthly cycle because you couldn't get a lot of features in; now it's becoming a little bit more flexible, and that's how we are able to do major changes like the kernel change we are introducing in 1.6 — a little bit more about that later. So, in Zephyr 1.5 we were able to introduce support for TLS and DTLS through the integration of mbed TLS. We started adding IoT protocols — MQTT, NATS, HTTP and so on — and that was done on top of the current IP stack, the uIP stack from Contiki. We also added TCP support and file system support, there was a lot of activity and development on the Bluetooth front, and of course we added a new architecture, the Altera Nios II. This all happened in 1.5 — and obviously it was summer time, everybody on vacation, at least I was on vacation, so there are no major features there; those we are keeping for 1.6. The major change for 1.6, happening right now, is the unified kernel. This is already in master: basically we have the unified kernel, but we are still able to build the micro kernel and the nano kernel, and Ben will talk about that in a bit. We are working on a native IP stack; this is also work in progress, happening in a branch right now, but we are planning to merge it, if everything works well, by the end of next week, hopefully. The current status is that we have the base layers of the IP stack; the only thing preventing us from merging is TCP, so that is a little bit delayed. Our goal for 1.6 is feature parity with what we had with the Contiki IP stack — TCP, IPv4, IPv6, UDP, 802.15.4, 6LoWPAN. Most of these we have already; TCP is what we are trying to get into the tree, and hopefully we will be able to merge it really soon. Cortex-M0 is another big thing happening right now. It's not in master yet, but the patches were submitted last week and they are undergoing the review process. This was contributed by Linaro in cooperation with Nordic, so a lot of people, a lot of groups, are working on it, and it moves us toward the low-end devices from different vendors — exciting news, and it will be in 1.6. There are a few boards that will have support immediately, like the Nordic nRF51, which also happens to be on one of the reference devices, the Arduino 101. The Bluetooth link layer is another big change — started this summer, contributed by Nordic, and currently being worked on; it is already in master and will be in 1.6. And there are a lot of boards and SoCs that have been added or will be added over time. There's a lot of activity now on the ARM architecture front — we see boards from STMicro, Atmel, ARM, and Linaro's reference board that was presented at Linaro Connect two weeks ago — so with 1.6 we will have a large set of SoCs and features available to us. (Just doing a time check — okay, we are good.) To be able to support this wide variety of SoCs and boards, we are trying to abstract the device drivers as much as possible. This started with the few boards that came with Zephyr when it was launched; we are learning now that to support the ARM SoCs and boards, the Intel boards, the ARC boards and others, we really have to be flexible in how we do things. One way we do that, at least in terms of how we support the different architectures, is that for Intel there's QMSI, the Intel Quark Microcontroller Software Interface, which gives us a complete HAL — a driver set for Intel Quark SoCs — similar to what CMSIS from ARM provides. So we are trying to abstract that as much as possible and enable support for these SoCs using these vendor-provided headers and
APIs as much as possible. Of course, you can go and implement your board or your SoC directly using the Zephyr APIs, but the idea is that all of the complexity of the HALs and the drivers is abstracted behind Zephyr APIs. So I can write an application for an Intel MCU and still move it — build it for ARM — and get the same functionality, obviously assuming I have the same peripherals attached. If I have a sensor on an Arduino 101 and the same sensor on a Freedom board from NXP, my application should be exactly the same; I shouldn't be changing anything. We talk Zephyr APIs, and the kernel doesn't really care about the architecture when you configure it, so it gives you application portability — and from an end-user perspective, that is great news. In terms of how we do that underneath, there are different levels: the architecture, the different CPUs or cores implementing that architecture, the different SoC families and series, and then the SoCs and boards. We learned that if you want to support, for example, ARM — and also Quark — you really have to create this multi-layer structure, because otherwise you will not be able to support the variety of SoCs and boards, and that's exactly what we have right now. We try to share code as much as possible: for Cortex-M, obviously, you don't want to reimplement everything for every SoC, and when you go up to the Nordic nRF5 family, some code will be shared even between Cortex-M4 and M0, especially when it comes to drivers. Same thing on the Intel side: some of the drivers run on both cores of the Quark SE — it's almost the same hardware — so there is no reason why everybody should go and implement their own drivers for every SoC. Driver sharing and code sharing is one of the goals that we have in the Zephyr tree right now.
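That portability story — the application codes against an abstract driver interface, and each board supplies its own backend underneath — can be sketched in plain, portable C. This is a toy model, not real Zephyr code; every name below is invented for illustration:

```c
#include <assert.h>

/* Toy model of a portable sensor application: the app talks only to
 * an abstract driver interface; each board supplies a backend.
 * All names here are invented -- this is not the Zephyr driver API. */

struct sensor_api {
    const char *board_name;
    int (*read_temp)(void);   /* degrees Celsius */
};

/* Two hypothetical board backends for the same kind of sensor. */
static int quark_read_temp(void) { return 21; }
static int frdm_read_temp(void)  { return 21; }

static const struct sensor_api quark_board = { "arduino_101", quark_read_temp };
static const struct sensor_api frdm_board  = { "frdm_k64f",   frdm_read_temp };

/* The application code: identical no matter which board it runs on. */
int app_sample(const struct sensor_api *board)
{
    return board->read_temp();
}
```

The application function never changes; only the backend structure passed in does, which is the same shape of decoupling the talk describes for HALs like QMSI and CMSIS sitting under the Zephyr APIs.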
In terms of security, these are the basic building blocks — this is not how we want to do security on a grand scale, but these are the functional areas of security that we have implemented in Zephyr itself. We have TinyCrypt, which we started with, and we recently added mbed TLS to support TLS and DTLS. There is a lot of development going on to support device management, APIs to abstract the hardware crypto layers that come from different vendors, and secure key storage. Our goal is usually to provide APIs, where it makes sense, for people to abstract the hardware and the implementation at the lowest layers. And that's where I hand over to Ben to talk about the unified kernel.

[Ben:] All right, so I'm going to talk about the unified kernel for Zephyr. The first question you might have is: why do we call it the unified kernel? As you saw in Anas's presentation, the Zephyr kernel actually consists of two kernels. We have two types of threads in Zephyr: cooperative threads, which we call fibers, and preemptible threads, which we call tasks. And unlike what you might have assumed about how that is implemented, it's not one kernel — it's actually two kernels that are responsible for the two types of threads. The nano kernel is at the core of the operating system; it only knows about one task, and it knows about all the fibers, so it manages the fibers. The micro kernel is written as an application on top of the nano kernel, and that one is task-aware: it manages the tasks and doesn't really know about the fibers. On top of that we have two sets of APIs, one for each kernel. If you're hearing this for the first time, that might be your reaction — but of course there's a reason why it was built like that. The first reason is separation of duties: the nano kernel is responsible for all the architecture-specific code, so if you want to port Zephyr to a new architecture, you only have to port the nano kernel. The micro kernel, on the other hand, is written in plain C and is completely architecture-agnostic. Originally the nano kernel was only meant as a means of implementing the micro kernel — in the original operating system this comes from, it was never meant to run on its own. Another reason was that the micro kernel is what we call VSP-capable. VSP means virtual single processor: you could have a number of CPUs, connected to each other by some hardware link layer, operating as one CPU. You could have a task on one CPU, a semaphore on a second CPU, another task on a third CPU, and they could use the semaphore to synchronize between themselves through the VSP layer. The communication was written in a message-driven paradigm using a kind of send/receive/reply: you send a command to the kernel and wait for the reply to come back before you continue execution. That gave us some good things. First, the nano kernel scheduler is extremely simple, which makes it very, very fast, especially when coming out of an ISR — I'll show that a bit later. The nano kernel is also extremely scalable: like Anas said, maybe 8K for a system, but if you're not running a lot we can scale the kernel down to maybe 2-3K of code and data. It gave us a mix of cooperative and preemptible threading: you can use co-op threading if you just want multi-threading in your application without bothering with locks — you get implicit locking by having your threads yield only when they're done with a shared data structure, instead of taking a mutex or semaphore to synchronize. And we have per-context-type APIs — separate functions for tasks and separate functions for fibers.
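The per-context-type API idea can be illustrated with a small sketch. The historical entry points looked roughly like nano_fiber_sem_give() / nano_task_sem_give(), plus a generic wrapper that checked the caller's context at runtime; this toy model only mimics that dispatch structure and is invented for illustration, not kernel code:

```c
#include <assert.h>

/* Toy model of per-context-type APIs vs. a generic wrapper. */

enum ctx_type { CTX_ISR, CTX_FIBER, CTX_TASK };

struct toy_sem { int count; };

/* Context-specific entry points: each can skip checks the other
 * context would need, so they stay fast. */
static void sem_give_from_fiber(struct toy_sem *s) { s->count++; }
static void sem_give_from_task(struct toy_sem *s)  { s->count++; }

/* The generic wrapper pays for a runtime check on every call --
 * this is the cost the per-context APIs were designed to avoid. */
void sem_give_any(struct toy_sem *s, enum ctx_type ctx)
{
    if (ctx == CTX_FIBER)
        sem_give_from_fiber(s);
    else
        sem_give_from_task(s);
}
```

As the talk notes later, in practice most callers ended up going through the wrapper anyway, which is one of the arguments for dropping the per-context split in the unified kernel.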
That lets you call an API without the API having to do runtime checking to decide which exact logic it has to use. The micro kernel is implemented as a fiber — a thread known by the operating system — so every API call into the micro kernel is single-threaded through that fiber, and that allows the implementation to be done without much interrupt locking. Actually, the micro kernel doesn't really use interrupt locking at all, and that gives you very good interrupt latency, because interrupts are pretty much never locked. The exit-from-ISR path is extremely efficient, because the nano kernel doesn't have to take a lot of decisions. And like Anas said, the code base is from an old operating system that Wind River acquired around 2000, so it has been in platforms in the wild before. Two kernels give us two schedulers. The nano kernel's scheduling is very, very simple: it's aware of all the fibers in the system, it has a run queue of them in prioritized order, and it knows about one task. It schedules the fibers in the order they're queued, and when it's done with all the fibers, the task — which has lower priority — gets context switched in. If you add the micro kernel on top of that: like I said, the micro kernel is implemented as a fiber, so from the nano kernel's point of view it ends up in the list of fibers like any other fiber. The micro kernel itself has a run queue of all the tasks that are ready to run, in prioritized order, with optional round-robin if you have multiple tasks at the same priority. The micro kernel schedules the tasks based on their priority, and what it really does is tell the nano kernel which task is the one it has to know about at this moment in time. The out-of-ISR scheduling, like I said, is pretty simple. Since a fiber cannot be preempted by another thread in the system, when an ISR exits it checks if a fiber was running, and if so it just goes back to running that fiber without taking any real scheduling decisions. If a fiber was not running, it means it was the task, which is preemptible — so if a fiber has become ready, it context switches that fiber in; if not, it goes back to running the task. Pretty simple scheduling. So we had some good; there's also some bad. We have two sets of APIs, one for the nano kernel and one for the micro kernel, and on top of that, only a subset of the API is available at any given time: if you're not running a micro kernel, you only have access to the nano kernel API; and if you are running a micro kernel, you have access to most of the nano kernel API but not all of it, because there are some clashes between certain micro and nano kernel APIs. Another problem is that based on the context type — whether you're a task or a fiber — you might get different behavior from the APIs: tasks might have to poll where fibers are able to block. And the big one is that there's no task-to-task transition in the micro kernel. If a task does an operation on a micro kernel object, and that operation causes a context switch to another task, there's actually a transition into the micro kernel fiber first, which performs the operation on the micro kernel object and then decides whether a context switch has to happen — and only at that point does the context switch into the second task happen. This is shown a bit here: basically, how micro kernel message passing works. A task that wants to operate on a micro kernel object creates a command packet on its running stack and pushes that packet onto the kernel server's stack object — that's a different kind of stack, but anyway — so it gets queued. Then there's a nano kernel context switch into the kernel fiber, which runs, dequeues the packet, looks at what the packet is, does the operation on it, and goes back
to waiting for the next packet. Then it context switches back in to either the task that was running, or another task if a context switch has to happen from the micro kernel's point of view. You can easily see here that if you're operating on a micro kernel semaphore, and the semaphore is free, there's still going to be a context switch into the micro kernel server fiber and a context switch back out to the task that receives the semaphore. This is a more complex example — I'm going to run through it pretty quickly — taking a micro kernel semaphore when the semaphore is not free. Let's say task B, which is of lower priority than task A, owns the semaphore. Task A does a semaphore take: it pushes the command packet for the take onto the micro kernel stack, which invokes the micro kernel, queues the packet, and context switches in the micro kernel server fiber, which dequeues the packet and updates the waiter list on the semaphore, because task A now wants to wait. The micro kernel sees that task A is blocked, so it tells the nano kernel that task B is now the task it has to run, goes back to waiting for a packet, and the real context switch to task B happens. Task B decides to give the semaphore: it creates a command packet on its stack, which invokes the micro kernel; you can follow it here — the server dequeues that packet and context switches in task A, which finally pops its packet from its stack, and it has the semaphore at that point. That might seem a bit complicated, or a bit silly, but remember what I was saying about VSP earlier: instead of doing this only locally on one node, one CPU, a packet could have been pushed by another CPU — so you basically had your inter-CPU communication mechanism with this.
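That send/receive/reply packet flow can be modeled in a few lines of plain C: tasks queue command packets, and a kernel-server loop drains them one at a time — which is what made the micro kernel effectively single-threaded and free of interrupt locking. All names and structures here are invented for illustration:

```c
#include <assert.h>

/* Toy model of the micro kernel's command-packet flow: a task never
 * touches a kernel object directly; it queues a command packet and
 * the kernel-server loop dequeues and executes it. */

enum cmd { CMD_SEM_GIVE, CMD_SEM_TAKE };

struct packet {
    enum cmd cmd;
    int *sem;    /* the semaphore counter this packet operates on */
    int done;    /* the "reply" flag, set when the server finishes */
};

#define QLEN 8
static struct packet *queue[QLEN];
static int q_head, q_tail;

/* What a task does: push its packet onto the server's queue. */
void kserver_submit(struct packet *p) { queue[q_tail++ % QLEN] = p; }

/* One pass of the kernel-server fiber: drain the packet queue. */
void kserver_run(void)
{
    while (q_head != q_tail) {
        struct packet *p = queue[q_head++ % QLEN];
        if (p->cmd == CMD_SEM_GIVE)
            (*p->sem)++;
        else if (*p->sem > 0)    /* a take succeeds only if free */
            (*p->sem)--;
        p->done = 1;             /* "reply" back to the task */
    }
}
```

Because one loop serializes every operation, no object ever needs interrupt locking — but, as the slides show, even an uncontended semaphore take costs two context switches through the server. That trade-off is exactly what the unified kernel removes.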
Again with the bad: we have duplicated APIs between the nano and the micro kernel. We have micro kernel semaphores and nano kernel semaphores that behave pretty much the same; we have FIFOs in both kernels, but they're completely different; and timers are completely different too. The per-context-type APIs created their own problems, because you don't always know which context type you're calling from — you just want to operate on a semaphore object — so basically all the gains from calling the API for a known context are lost once you go through the wrapper. The fact that we have both fibers and tasks also gave us different behaviors: at first, tasks had to poll on nano kernel objects when waiting for them, while fibers could pend; and based on the object and the context type you're operating on — and operating from — you might not be able to do all the operations, especially on micro kernel objects. For example, tasks can do anything, but a fiber cannot put data into a micro kernel FIFO. So after the good and the bad, of course, you have the ugly. If you're writing middleware or drivers and you want to support the whole gamut of Zephyr — Zephyr as a whole — you have to target basically two operating systems. You can write code against the nano kernel, and that should run on a micro kernel without much problem, but then you're not using the richer API from the micro kernel. Still with the ugly: this was causing confusion among developers, even seasoned developers. And we had some shortcomings in the original API, especially the nano kernel API. Nano kernel objects could not be waited on with a timeout — we added that. You couldn't have multiple waiters on nano kernel objects: if a second fiber wanted to wait, it kicked the first one out, and you basically lost that fiber in your system — we added that too. We added the device synchronization object, which abstracts the nano and micro kernel semaphores for drivers. And somewhat recently we added the capability for
tasks to pend on nano kernel objects instead of polling. But all of those things are pretty much kludges, and we don't really like this. One of the lessons we learned is that people find this hard to use — so we learned our lesson. What do we want to do? We want to unify the kernel. The approach is, quote, "remove the micro kernel". By that I don't mean we're going to lose the API or anything like that; it's just that we're transitioning the scheduler and the API into one kernel, built on what the nano kernel was. We provide a completely new set of APIs — similar, but new, so that there's no confusion — and it's basically an amalgamation of the nano kernel and micro kernel APIs. What that gives us is a more well-known preemptible RTOS model, somewhat similar to something like VxWorks, if you're familiar with that. Like I said, we have the new streamlined API: there's no duplication anymore, so there's only one type of semaphore — no micro and nano whatever. We don't have the context-sensitive APIs anymore either, because most of the time people were using the wrappers anyway — there's no point in having a wrapper around an API that supposedly gives you better performance — so all the decisions are taken in one place now. We were able to reuse a lot of the code, mostly the algorithms from the micro kernel objects: we changed the implementation but kept the way they function. We're removing the micro kernel server fiber, of course, so we're saving probably 2, 3, 4K of RAM, because you don't have to provision it with its own stack anymore. And we're providing a full legacy API layer on top of the new API, so if you have code running on Zephyr right now, it will at least compile on the new kernel — you might have to make some slight adjustments, it might need more space, for example, but we're looking into that right now. So with one kernel, you have one scheduler.
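Conceptually, that legacy layer is a thin shim: the old entry points are kept, but each one forwards to the single new implementation. The sketch below imitates that shape in plain C — the names only loosely resemble the old APIs, and the bodies are invented for illustration:

```c
#include <assert.h>

/* Toy illustration of a legacy API shim: several old-style entry
 * points collapse onto one new-kernel implementation. */

struct k_sem_model { int count; };

/* The single new-kernel implementation. */
void new_sem_give(struct k_sem_model *s) { s->count++; }

/* Legacy wrappers: old nano-kernel-style and micro-kernel-style
 * calls (names imitative, not exact) both forward to the new one. */
void legacy_nano_sem_give(struct k_sem_model *s) { new_sem_give(s); }
void legacy_task_sem_give(struct k_sem_model *s) { new_sem_give(s); }
```

Because the wrappers are trivial forwarding calls (or macros in practice), existing application code can keep compiling against the old names while the kernel underneath is already unified.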
The kernel keeps track of which threads are ready at each priority. We don't have fibers and tasks anymore — we only have threads — and whether they're cooperative or preemptible just depends on their priority: positive priority gives you a preemptible thread, negative priority gives you a cooperative thread. And you can actually switch between the two behaviors at runtime just by adjusting a thread's priority. On the right you can see that, depending on where that co-op/preempt fence sits, you can end up with an all-preempt or an all-co-op system if you desire, and that allows us to take some of the code and the decisions out, so you can have a faster-running system in those cases. Out-of-ISR scheduling is a bit more complex now — a bit slower in some cases, because we have to take more decisions — but the case we were most interested in keeping at the same speed was going back to a cooperative thread that was preempted by an ISR, and there we're hitting our goal. Also, if you go with an all-co-op system you only need this part, so there's code reduction there, and the decision tree is simpler as well — actually there's absolutely no decision to be taken. If you go with an all-preempt system you only need this other part, so you're cutting a bit of decision making as well. This is the same example as before — taking a semaphore — but in the unified kernel, and it becomes much simpler. Thread A wants the semaphore: the semaphore is updated with thread A as a waiter, so it waits on it, and a context switch happens into thread B. Thread B gives the semaphore, and when it does that, it dequeues thread A; thread A, being higher priority, gets the semaphore. Much simpler than before, and that's the model people are used to, basically. Some other benefits the unified kernel gives us: we have a separate idle thread, instead of the one that caused problems when it tried to pend, especially during some driver initialization — now that's gone.
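The sign convention and the resulting preemption rule can be captured in a few lines of C. This is a toy model of the scheduling decision as described in the talk — negative priority means cooperative, lower numeric value means more important — not actual kernel code:

```c
#include <assert.h>

/* Toy model of the unified scheduler's priority convention. */

struct toy_thread { int prio; };

/* Negative priority => cooperative, non-negative => preemptible. */
int is_cooperative(const struct toy_thread *t) { return t->prio < 0; }

/* An incoming ready thread only preempts the current one if the
 * current thread is preemptible AND the incoming thread is more
 * important (numerically lower priority). Cooperative threads run
 * until they yield or block. */
int should_preempt(const struct toy_thread *cur, const struct toy_thread *in)
{
    if (is_cooperative(cur))
        return 0;
    return in->prio < cur->prio;
}
```

Switching a thread between the two behaviors at runtime is then just a matter of moving its priority across zero, which is the on-the-fly co-op/preempt switching mentioned in the talk.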
We don't need a workaround for that anymore. You can have the all-co-op and all-preempt modes. We're taking great care to write deterministic code: that means not locking out interrupts for unbounded amounts of time, and not running through loops whose length depends on the number of threads in the system, or packets on a queue, or something like that. We have new preemption locking that basically mimics what was happening with the kernel server before: instead of single-threading everything through a server fiber, you're able to lock preemption, which gives you pretty much the same behavior but very lightweight, because you don't have to do the context switches. We have on-the-fly co-op-to-preempt switching, like I said, just by adjusting priority. And we're prepping for a tickless kernel: the API for waiting is not tick-based anymore, it's based on wall-clock time. Now, some numbers. Our main goals were to improve micro kernel performance and footprint, mostly so that people have access to the richer API but with way better performance. To achieve that, we expected a bit of fallout on the nano-kernel-only systems: we do get slightly worse performance when you context switch co-op to co-op, or from an ISR into a preemptible thread that was already running, but that was expected, because of the extra scheduling decisions that have to be taken now. But look at the improvements, on x86 first. The very first one might not look like an improvement, but it was actually one of the goals we were trying to hit: if you take an ISR while a cooperative thread is running and you go back to that cooperative thread, we take no performance hit. All the rest are pretty good improvements: if you have to reschedule because a preemptible thread was interrupted and you want to go into a new preemptible thread, we're getting about two and a half times the performance, and about two times on a preemptible-thread-to-preemptible-thread context switch.
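Returning to the preemption locking mentioned a moment ago: conceptually it is just a nestable lock count that the scheduler consults before switching threads — roughly the behavior that k_sched_lock()/k_sched_unlock() provide in the unified kernel. The sketch below is a toy model of that bookkeeping, not the actual implementation:

```c
#include <assert.h>

/* Toy model of nestable preemption locking: while the count is
 * non-zero, the scheduler must not preempt the current thread.
 * This replaces the old pattern of funneling critical operations
 * through the kernel-server fiber, without the context switches. */

static int sched_lock_count;

void toy_sched_lock(void)   { sched_lock_count++; }
void toy_sched_unlock(void) { sched_lock_count--; }

/* The scheduler asks this before preempting the current thread. */
int preemption_allowed(void) { return sched_lock_count == 0; }
```

Because the lock nests, a function that locks the scheduler can safely call another function that does the same; preemption resumes only when the outermost unlock brings the count back to zero.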
That happens basically because you don't have to do a double context switch anymore. Operations on mutexes are two times faster, and operations on semaphores are actually six-plus times faster, because of all that context switching that no longer has to happen — and the semaphore code is extremely simple. On ARM we're getting pretty much the same thing, except on the semaphore improvements, where we're actually 10 to 20 times faster. One reason is the way the ARM context switch is implemented — via exceptions; we're talking about Cortex-M here, and if people are familiar with that, it's basically how ARM suggests context switches be implemented. This is an ongoing effort; we're not done yet. We're mostly focusing on performance and footprint right now: we want to reduce the nano-level footprint, improve performance in co-op-only mode, and hopefully get some more performance in preemptible-only mode as well. So that's basically it. [Anas:] Thank you, Ben. In terms of a summary: the project itself is open source, open for everybody. You can look at zephyrproject.org; we have Gerrit, JIRA — everything is done in the open — and we have mailing lists and so on, so please visit the project and get involved. As I said earlier, it's a modular, configurable and scalable design; that's the goal. We need to be able to run on the smallest of devices, but also use the rich features that come with the unified kernel — and that existed already before — on the more complex hardware configurations. Secure and connected: this is something we are obviously still working on in terms of IP and networking, but we are very well connected when it comes, for example, to Bluetooth — visit our booth upstairs, we have a few demos there using BLE. And it's open for your contributions.
I mean, this is a project where everybody can impact the architecture and the direction — with code, obviously, not with talk only. It's an open source project, open for everybody; again, as I said, zephyrproject.org, and if you have any questions, the mailing list. I'm rushing through this very fast because we are running out of time and I want to leave some time for questions. So, if anybody has any questions, to me or to Ben — go ahead. [Audience question, off-mic.] Up to now — if you had asked me this question two weeks ago, I would have said boards and drivers, but that seems to be happening now; out of nowhere we are suddenly getting lots of contributions, so that's really cool. But that doesn't mean you shouldn't contribute there, because the more boards and drivers we support, the more developers join the project. So that's one area. The other area, which I highlighted earlier, is the IP stack and what we are trying to do there with the native IP stack. We are confident that we will get the basics right, but we want to accelerate adoption by implementing things on top of the IP stack — protocols, MQTT, device management. We are working on all of these things, but we want to move away from just implementing a protocol, handing it to two developers and telling them "okay, go figure it out"; we want to cover the whole thing end to end. In most cases this will not be part of Zephyr itself, but we want to document it so that people can take these ready-to-use use cases and go crazy and improve on top of them. Go ahead. [Question:] "When I went to the home page, there was one point which really surprised me about governance: it said that maintainers of subsystems have to come from a member of the project — a paying member of the project — which is completely not what I am used to in open source development. Is there any plan to change this in the future, so that people from the community can really take part in maintaining
the software? So, this is our governance. We have some of the board members here, actually all of them, and we have a meeting tomorrow, so this is something we should probably put on the agenda. Actually, I think this was discussed a few weeks back as well, or highlighted as something that needs to be discussed. I'm sure the board members here are taking notes; I can see one taking notes already. But I agree with you. This is something I was surprised to see as well a few weeks ago; it probably went through without anybody thinking about it. We'll figure it out; we'll talk about that. Any other questions? Yeah, go ahead.

At the early stage we were thinking about MIPS; there was some interest from different parties, but it didn't go anywhere. We actually started preparing by adding a MIPS cross compiler to the SDK, and it was mentioned at some point, probably on some of the slides you might see, but nothing much happened there. There was some interest, and I'm always surprised that there are people working on different architectures behind the scenes who just come and drop the code on us; I hope this will happen with MIPS as well. But right now, from the project side, I don't see anything happening regarding MIPS. If you are interested, this is another area where you can contribute. Recently we added the Nios II support; this was an architecture added from scratch, done by one developer over two or three months, I guess. Using that board we were able to fix a lot of the issues in the kernel that people usually hit when porting to a new architecture, so the next architecture that comes in, something like MIPS, will have it much easier. So yes, it would be great to have MIPS as well. Any other questions? You had one, yeah? How mature
are these new hardware adaptations, for the nRF51 and the FRDM-K64F? Can I just flash an image to these boards and start using Zephyr on them? So, there is the base board, or SoC, support. This is happening right now; there's heavy development at least on the nRF51, because it's going in right now. The nRF52 has been there for a while; there are the basic I/O drivers. I think some people from Nordic here know better, but you get the basics working, and the Bluetooth functionality with the link layer and so on. We are also trying right now to unify the drivers and use the SDK from Nordic for some of them. The exact status, sorry, I don't have the details, but if you come to the booth we can get you to some of the developers who know better.

And you still need patches for the nRF51 to get Bluetooth working right now? Yes, but soon we will be able to run Zephyr on the nRF51. This is actually what is happening on the Carbon board from Linaro, where you have Zephyr also running on the Bluetooth device and running Bluetooth, so you will have Zephyr running on both cores: on the STM32 and on the nRF51. Same story on the Arduino 101: we will have three Zephyr instances running, on the x86 core, on the ARC core, and on the Bluetooth controller. So this is actually a great showcase for how Zephyr can run on multiple cores on the same SoC. Any other questions? OK, thank you very much, have a good time.