There you go, Robert. What do you think? It's the Ultra96. You want it? Why? Why not? You give it to me? No, I'll give it to you if you do all the work. Okay, straight to it. Right. Okay. I'm Peter Robinson. I work for Red Hat on IoT and all sorts of various other bits and pieces. My co-presenter is over hiding to the side here; he can introduce himself and actually get into the line of the camera. Yeah, hello. I'm Robert Wolff, 96Boards community manager for Linaro. And... Yeah, so we're just going to talk about AI and machine learning using Fedora as an operating system and stack on the 96Boards AI platforms. So Robert will give a brief overview of the 96Boards ecosystem. We'll cover some of the various hardware for AI and machine learning, how we're going to do AI and machine learning on Fedora, some of the pros and cons and issues we are facing in doing that, and how we're going to try and tie it all together between the 96Boards AI initiative and Fedora as a whole to give a cohesive end-user experience across a whole range of hardware. Great, yeah. So as Peter mentioned, I'm going to first talk about 96Boards, the ecosystem itself, and then how it relates to Fedora and how we're going to move forward with 96Boards and Fedora. So, the 96Boards ecosystem. I'm guessing at least some of you have heard of what 96Boards is, but basically it's a single-board computer, or rather an open hardware specification. I'm going to caveat this right here: the hardware spec is open, not the hardware itself, right? So it's an open hardware specification. And I'm going to talk a little bit more about that in just a second, but first I want to give a little bit of the history of what Linaro is, what 96Boards is, and why we started 96Boards. So Linaro, as you can see there, was founded in 2010, originally created to reduce the redundancies and fragmentation in the Linux-on-ARM ecosystem.
And there was a big problem that they faced in this five-year gap, which was basically trying to provide the development hardware, the platform, for developers to work on. And back in the day, not even that long ago, trying to get your hands on this ARM-based hardware, first of all, took a long time, and second of all, was very expensive. And so a bunch of big companies got together and said, hey, you know what, let's design a specification. Let's call it 96Boards. And we're going to make it easier and cheaper for people to get their hands on the hardware that they need to develop on. That's when 96Boards was born. And so you have a specification, an SoC-agnostic specification, that allows vendors to come together and build boards so that you can develop on them and do what you need to do for a reasonable price. All right. So as I mentioned, 96Boards is a series of open hardware specifications. Right now we have three specs: the Consumer Edition, the Enterprise Edition, and the IoT Edition. Right now, we're trying to work on a strong hardware and software story. This is basically creating the layer of hardware for you to develop on; the operating system, which we're working on with Fedora (Peter right here, and Fedora) to get a strong software story at an operating-system level; and then after that comes the application level, so that you can develop on multiple systems-on-chip without really feeling too much of a difference when you're transitioning your development. Our model is a partner-based model. So with 96Boards as a whole, you have a bunch of industry partners that come together, and they are the ones who decide which direction this takes. Peter, in fact, is one of those steering committee members, representing Fedora and Red Hat on the board there. But we have a lot of members. And CentOS. Yeah, so I represent the Red Hat ecosystem.
So primarily, at the moment, Fedora and CentOS, which are more the community distributions. And one of the things that Fedora in particular provides, across the boards that we currently support in Fedora, and there's going to be a bunch of extra ones coming along in F29, is that it's the same kernel, the same experience that you get on x86. So same kernel, SELinux, GRUB, various other bits and pieces. It's just a unified experience; it doesn't matter what device you're running on. Yeah, this is a big deal, because you have partners from all different branches of the industry coming together to try and figure out the best way to build this hardware and bring it to the developers. So it is not just one person saying, this is how I think it should be. It's a bunch of people coming together and saying, this is what we think it should be, and then testing it out. Within the 96Boards ecosystem, we have a pretty vibrant and growing community. 96Boards isn't that old, but we are seeing a lot of traction in the community and it is growing. So if you have any questions, or would like to find a way to get hold of us or to get involved in the community, you can reach out to me afterwards. Initiatives: this is one of the ways that we push forward with 96Boards, launching initiatives, and since we are a partner-based organization, we seek a lot of help from our partners. You know, reaching out to Red Hat, Qualcomm, Xilinx, Avnet, Arrow, basically every company that's involved with us, we try launching initiatives in parallel with these other companies, so we push forward together. One of these initiatives, for example, is our mezzanine community.
This is an open-source repository that we pushed out there that allows people to grab all of these design files, whether you're building in KiCad, Eagle CAD, Altium, or a few others. We put the templates out there so that you can then build add-on devices. And of course, 96Boards AI is another one of those initiatives, and I'm actually going to talk about that right now. So, 96Boards AI: this is an attempt to create a compartmentalized section of 96Boards where we're saying these boards, we think, are the best suited for AI. And so we basically chose a few right now, and we're calling them AI-compliant for this purpose. And we want to create the ultimate software-hardware-application story for these boards. Now, on the top you have the Rock960, and in the middle you have the Xilinx one, which is the Ultra96. It's this one here. Quad-core 64-bit processor, a fairly high-end FPGA, and a couple of real-time-capable co-processors on board as well, all in something about the size of a credit card. So fairly powerful, fairly small, literally pocket-sized. It's a cool little device. Yeah, and as you can see, looking at the spec, these boards all have the same footprint, and most of the things where companies would be reinventing the wheel or being redundant in their development or in their IP, you don't really have to focus on those things. You know, the general I/O, the USB ports, HDMI: you can create your niche on this footprint without having to worry too much about IP, right? So here, for instance, if this is the standard Consumer Edition board, you can see it replicated right here on the bottom half of this one; this is an extended version of the Consumer Edition board. And so this is one attempt right here, reaching out to AI, and the one we're going to focus on the most.
But 96Boards is trying to tackle other verticals, creating software stories for other verticals, and this is just one of them, right? So with that, I think that's the last of my slides. We can talk more about 96Boards afterwards if you have more questions. Yeah, so in Fedora, we obviously are working closely with the 96Boards guys. There are also others; some of the NVIDIA Jetson ones we're looking at for GPGPU as well. For the actual AI and machine-learning hardware, there are four main categories. There's the FPGA stuff: the Xilinx parts and a number of other boards; Intel's Altera; and the Lattice iCE40, which is fairly popular. Obviously GPGPU, led primarily by NVIDIA with the CUDA framework, but there are other ones that support the OpenCL standard. There's a new category, which we're still sort of exploring: the Rockchip parts and the HiSilicon Kirin 970 have onboard neural processing units. And then Qualcomm has a DSP which also provides a neural processing engine. So there are a number of different hardware categories in there that we're working to support well in Fedora. Some of them are currently supported in the kernel and toolchains and what have you a little better than others. AI and machine learning also has a number of different software stacks. There's obviously TensorFlow, which came out of Google originally and is getting a lot of traction, but there's Caffe, Tengine, Torch, and numerous other high-level stacks too. Then there's a bunch of low-level toolchains, and these are currently widely variable. The iCE40 that I mentioned earlier has quite a good open-source toolchain. The Xilinx stuff is a bit variable at the moment, although the open-source side of that is evolving very, very quickly. Some of the neural processing engines I'm still investigating, because a bunch of them are quite new on the market. But as a result of that, this space is changing quite quickly.
And, you know, the hardware-software interface: so TensorFlow, for example, is very CUDA-oriented currently, although there are a number of organizations that are trying to make it less platform-specific. So the hardware-software interface is evolving pretty quickly. If I could actually add on that real quick. Of course you can. So in the FPGA space right now, Xilinx in particular are working on an SDK called SDSoC, and with that they're trying to bring the learning curve down for those of you who are interested in developing on FPGAs. So in particular, talking about the Ultra96 board that Peter has right here, you'll have opportunities to check out this SDK, and in fact if you buy the Ultra96 you get a year's worth of this license. It is a closed license, but it is still a very interesting tool. Basically, what it does is allow you to code in a language that you're more comfortable with, and then it compiles that for the FPGA, so you can actually work on the hardware without being a full-blown FPGA developer, if you're not familiar with that. So it's pretty interesting. And then, how does this very wide and varied ecosystem come together with 96Boards and Fedora? We're working together, along with some others, to get a unified experience. At the moment, some of the devices come with Android on them; some come with very custom distros of Linux that don't support standardized things that a lot of people are coming to expect, like containers and various other bits and pieces; and widely varying versions of the kernel that are generally full of CVEs and, especially with some of the recent Spectre and Meltdown stuff, very much not up to date.
And so we're working to provide a unified experience, so that it doesn't matter which board you have, it doesn't matter which machine-learning hardware options are available: you'll be able to get Fedora on there, you'll be able to run containers exactly as you would on x86, and then access the FPGA or the GPU or whatever. So essentially you'll eventually be able to install TensorFlow and have it work on the underlying units without a difference in experience. A wide variety of AI and machine-learning hardware, but one OS, and, like everything else in the Fedora ecosystem, you'll be able to run Docker and containers or whatever else on top of that in exactly the same way as you would on other architectures, with all the expected security things like SELinux and seccomp. Basically, a unified experience. It's going to take us a while to get there. Fedora 29 has initial support at the kernel level for things like the FPGA manager and various other mechanisms. It's going to be a bit of a long road with lots of work to do, but it's starting with Fedora 29, and as it evolves over time (there's the IceStorm stack, I think it's called, which will be usable with the iCE40 FPGAs), as the open-source tools evolve, it should in the next year or so become a relatively nice, straightforward experience no matter what the underlying hardware is. So, does anyone have any questions? Steven? Can you give any examples of projects or initiatives that are working with this in Fedora today, or soon? What are the specific goals that they're trying to achieve with this?
So yeah, there are a number of different sorts of IoT... I sort of started to get involved in some of this because I have IoT people that are interested in it, working in different industries. For one of the projects, I had a chat at Flock last week with someone from Amazon about their Greengrass project, which is a project that runs locally, using Amazon technologies, for local AI and machine learning. So that's an IoT gateway use case that is happening. There are a number of industries within IoT that are very interested in FPGAs for local processing and intelligence, so that's primarily my focus. The Ultra96 was demoed at the last Linaro Connect running real-time number-plate and sign recognition, and it was processing thousands and thousands of signs a second. I was hoping to be able to demo it, but we didn't really have enough time to do a talk and a demo all in 25-30-odd minutes. But I've been planning on getting that up and running so people can deploy it with Ansible as a sort of standard on the Xilinx FPGA. So there's a lot of interest in a lot of different areas for that sort of stuff. And I'll add that there's also building an application layer on top of this: at 96Boards we've been talking with Mozilla and their iot.mozilla initiative, and so we're hopefully going to be working with them to try enabling the Mozilla IoT gateway across all of this hardware using Fedora. So this is another example of the application layer taking advantage of this unification across the OS and across the hardware. It doesn't necessarily use the FPGA stuff at the moment, but there are other projects; for example, the next generation of the Mycroft hardware is going to use the Xilinx FPGA, so my intention is that we can run that on Fedora, and that'll be used for things like voice recognition. Hi, you guys mentioned writing in a programming language you are comfortable with, and then you
guys would compile it for FPGAs. I just wanted to know how comparable the execution time of this program is compared to an FPGA-specific programming language, like if I had written the same program in something which was FPGA-specific. So I think the question is: how comparable is the speed when programming it through their SDK versus doing it with Verilog at the low level? I don't actually know the answer to that. Nor do I; that is a Xilinx tool, and I'm not sure of the exact details of their toolchain. From my understanding, though, for most applications, from when I hear them talking, it's not noticeable unless you go really deep, so you wouldn't notice it that much. I'm still trying to understand the why. Why would I want to do ML on these little boards? Is it because you're envisioning a future where maybe they have sensors and such, and they want to do local compute, train models locally? Is that the idea? So, I mean, I don't know the specs of this FPGA, but it's a fairly high-end FPGA with basically some ARM compute attached to it, so you can run a Linux distribution to do the generic stuff and hand the specifics over to the FPGA. It's also got two ARM Cortex-R cores on it, so you can run an RTOS in full real time on it. And the idea is that you could embed this in a light pole with a camera attached to do real-time machine learning on the edge without having to push it back into the cloud. So you may have a low-bandwidth link, like LoRa or a 64k-style link, where you don't have the ability to push gigabits or tens of megabits a second of data up into the cloud to process, so you would do it locally. In a lot of cases you may not have the connectivity at all, like out in the field or on a ship in the middle of the ocean. And, you know, it's fairly low power in terms of actual power draw, but it's high-end processing that you can offload to do extensive stuff locally. It's also really important to note that, as developers, maybe you're not so much
interested in the path to product, but there is a strong path to product with these boards. So, I mean, you can work hand in hand with Xilinx and Avnet and the folks that made this board to, you know, do a chip-down design and get your own product out there. Once you get up to a certain number of units, it's no longer worth putting this board in your end product, right? You're going to want to get rid of some USB ports and save a penny here and a penny there, and then the next thing you know, this is your tool for your path to product, right? So it's a development device, but yes. And also, because it's relatively cheap and relatively standard, it's useful for developers to put one on their desk so they can play around with proofs of concept at little cost, using standard tools and a standard distro to get things up and running, because, you know, you can basically dnf install just as you would on an x86 server or a VM running in the cloud. Any more questions? No? Thank you very much. Thank you.