giving you perfectly portable AI. Thank you very much. Have a great week, and it's great to be here in Berlin.

All right, thank you, Mark. Now, I think I promised some AI demos earlier. We've actually got two of them to show you. The first one is going to demonstrate GPUs in a public cloud environment, and the second is going to show off FPGAs using Cyborg in a private cloud environment. So to kick off the first scenario, I'm happy to welcome up the OpenStack Nova PTL, Melanie Witt, and a man who wears many hats in the OpenStack community, including OpenStack-Ansible PTL. He also happens to be the vice chairman of the OpenStack Technical Committee, Mohammed Naser. Come on up.

So I think everyone has mostly seen this slide here yesterday, where we talked a lot about the transformation of OpenStack with all the new infrastructure use cases that are coming up. And today we saw a lot of really interesting use cases, and a really cool demo by AT&T that falls into a lot of these new scenarios, such as using OpenStack for NFV. But today we also want to talk a bit more about one of those other unique use cases, which is AI and machine learning. To give a little bit of an intro, we wanted to start by talking a bit more about the project that is involved in helping make this happen.

OK, so Mohammed has told us about how OpenStack as a whole has been evolving with emerging use cases, and I want to zoom in and tell you a little bit about Nova and how it fits into all of this. Nova is responsible for compute. It is the service that provides the REST API and components for provisioning servers. Nova has a lot of cool features that enable interesting use cases in AI and ML. For example, the PCI pass-through feature enables OpenStack clouds to offer specialized compute resources like GPUs. So using the REST API, you can request GPU resources, and Nova will locate a cloud host that can provide the required GPU and pass it through to your server. Mohammed and I wrote a speech recognition program to demonstrate how you can leverage the power of GPUs in an OpenStack public cloud, and we're excited to show it to you today. We ran the program on a video clip of one of our favorite community members, Tim Bell. And we've added something extra to it for those of you who can read German.

You get around 1 billion collisions every second. Each beam has bunches of around 100 billion protons. They pass through each other at the experiments, and out of that we then get simultaneous collisions occurring inside the experiments. And this is one of the things that's driving the computing needs: we have to be able to handle all those collisions and then separate them out into distinct collisions.

Cool. So I just wanted to recap what we just saw. What we just saw is a small tidbit of Tim speaking at one of the past OpenStack Summits. We took that audio and fed it into an open source speech recognition engine called DeepSpeech, which is based on a paper published by Baidu on how to do speech recognition. This Mozilla implementation is all open source and available for anyone to use, and it uses GPU acceleration in the public cloud to do this much faster. After that, we fed the transcript into a project called JamSpell, which is another open source project, and that one did the spell correction to make the text more accurate, because speech recognition sometimes makes typos, just like we do. Then we took the corrected text and fed it into a translation API, which provided the German translation, which hopefully did a pretty OK job. So again, this is possible on any OpenStack deployment as long as there are GPUs.
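For readers who want a feel for what that transcribe-and-correct pipeline can look like, here is a minimal Python sketch assuming Mozilla's deepspeech package and the jamspell spell checker. The model and audio file names are placeholders, and the exact Model constructor arguments vary between DeepSpeech releases, so treat this as an illustration rather than the code used in the demo:

    # Illustrative only: transcribe a WAV clip with Mozilla DeepSpeech, then
    # clean up the raw text with JamSpell. File names below are placeholders.
    import wave
    import numpy as np
    from deepspeech import Model     # pip install deepspeech (or deepspeech-gpu)
    import jamspell                  # pip install jamspell

    ds = Model("deepspeech-model.pbmm")            # constructor differs across releases
    ds.enableExternalScorer("deepspeech.scorer")   # optional external language model

    with wave.open("tim_bell_clip.wav", "rb") as wav:
        audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

    raw_text = ds.stt(audio)         # the GPU-accelerated step

    corrector = jamspell.TSpellCorrector()
    corrector.LoadLangModel("en.bin")              # pre-trained English model
    print(corrector.FixFragment(raw_text))         # corrected text, ready for translation

Running the stt step against a GPU-enabled build of DeepSpeech is where the speed-up discussed next comes from.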
But what we really wanted to focus on is: how much faster is this? So here we have a small graph with the different timings of what we just saw, comparing the GPU against the CPU. The number to really look at is 'real', which is the wall-clock time it took to execute the script, and we see that it's actually twice as fast on a GPU as on a CPU. But it's much more interesting to see that in perspective when we run both at the same time. The one at the top is running on the GPU, and the one on the bottom is running on the CPU only, with no GPU acceleration. As we watch it go, we'll notice that at the start they're somewhat close, but then the GPU really starts speeding up, and you can see how much GPUs help accelerate these workloads with their very particular processing requirements.

So we've shown you how much faster the program runs on GPUs, but you can even try this out for yourself. We've shared the code on GitHub, and you can run it on any OpenStack cloud with GPUs using the same APIs. With OpenStack, you can avoid vendor lock-in, which brings a lot of benefits. You can have a multi-cloud strategy. You can bring your own hardware if you want. You can burst to public clouds while maintaining a private cloud. And all of this is easy to do because it uses the same cloud APIs for all of it.
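As a rough illustration of that "same APIs on any OpenStack cloud" point, here is a small sketch using the openstacksdk Python library to boot a GPU-backed server. The cloud, flavor, image, network, and key pair names are placeholders, since how a particular cloud exposes GPUs (typically through a flavor the operator has mapped to a PCI pass-through alias) varies by deployment:

    # Illustrative sketch: boot a server on a flavor the operator has wired up
    # to GPU PCI pass-through. All names below are placeholders.
    import openstack

    conn = openstack.connect(cloud="my-gpu-cloud")        # reads clouds.yaml

    image = conn.compute.find_image("ubuntu-18.04")
    flavor = conn.compute.find_flavor("gpu.large")        # GPU-enabled flavor
    network = conn.network.find_network("public")

    server = conn.compute.create_server(
        name="speech-demo",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
        key_name="demo-key",
    )
    conn.compute.wait_for_server(server)                  # block until ACTIVE
    print(server.name, "is ready")

Because the request only names a flavor, the same script runs unchanged against any OpenStack cloud that offers an equivalent GPU flavor.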
So we've shown you something cool that you can do with GPUs on a public OpenStack cloud. And coming up next, we have another exciting demo using FPGAs on a private OpenStack cloud from Zhipeng Huang and Mark Collier. Thank you, everyone.

Wow. That was incredible. I don't know if you all realize what they just did, but I believe they just turned Tim Bell into a German, which I didn't know was possible. Hopefully it doesn't cause an international incident. But let's kick it over here to hear about our next demo. So take it away.

Thank you, Mark. Good to talk. Actually, the previous GPU demo points to an interesting phenomenon that has been all over the industry: accelerators. Accelerators such as FPGAs, GPU cards, SmartNICs with ARM cores on them, and ASIC chips. For example, Huawei just released our AI ASIC chip, Ascend. All these types of accelerators are being used more and more to support applications like AI, edge computing, MEC, HPC, and so on. However, there is a significant gap between this infrastructure change and the management software. If we want to build a truly end-to-end system to support your service, you have to fill this gap. So Cyborg is a very new official OpenStack project that we have been spearheading for just about a year, and Cyborg is a general management framework for all these accelerators. So next, I'm going to walk you through a very simple demo of how you can use an FPGA in a cloud environment to accelerate a video recognition task.

Video recognition. That sounds cool. Let's see it. Yeah, it's pretty cool. So tell us how it's going to work.

Yeah, I'll just lay out the setup for you first. The hardware the demo runs on relies on the OpenLab infrastructure, so we have an Intel Xeon CPU and an Intel Arria 10 FPGA. On the software side, of course, we are using OpenStack. The first step is to use Glance to upload your FPGA image. It's the same as uploading a virtual machine image, with just one caveat: you need to be careful to describe it correctly with the metadata so that it won't be confused with a virtual machine image. So as you can see, this is our FPGA image, Restart for Obama.

Did you say Obama? Yeah. OK, wow. Wait for it.

And you can see we successfully uploaded it. So step two, you can use the Cyborg command to show all the accelerators, the hardware you have that you can program and use. And as you can see, we have the FPGA card now, programmable, and from Intel. The most exciting step is step three. With the Rocky release, Cyborg provides the functionality of FPGA programming, and now, using the IDs from the previous steps for your image as well as the device, you can program the FPGA with the desired bitstream you want to use. Restart for Obama, of course. And with all of this cloud environment done, in this demo we are using OpenVINO, a deep learning toolkit also open-sourced by Intel, as the inference engine to do the model serving (a rough sketch of what that serving step can look like appears at the end of this section). So we're going to use OpenVINO for serving. And just for background, the video we're using is John Podesta, former chief of staff for President Clinton, walking with President Obama around May 2014.

So Podesta is walking with Obama in 2014. Let's see what the AI can learn from that. Yeah, let's see what happens. Let's roll it.

As you can see, when Podesta whispers something, President Obama's emotion is quite complicated. Many times, it's angry. So he whispers in his ear and suddenly he's like, we don't know what he says, but he seems a little upset. Yeah, I guess it might be the midterms that year. Got some bad news. Yeah, but I bet the President is much happier with this year's results.

OK, so with FPGA acceleration, you get almost twice the frame rate compared to running without the FPGA, and also at a cheaper price than using a GPU, of course. The model we are using here is actually three models: one for head pose, one for face detection, and one for emotion. All of them are retrained on SqueezeNet, which is a lightweight model designed for mobile devices. OK, let's get back to the slides.

Do you think this is cool? Yeah. Now, please remember, we only used one of the Cyborg Rocky functionalities. If you think this is pretty cool, you ain't seen nothing yet. And of course, this amount of amazing work could not be done without a great community, just like any other OpenStack project. We are very proud to have contributors with diverse cultural backgrounds, from different companies, companies that are killing each other in the market, and those people are working together, collaborating in OpenStack. I want to shout out to the Chinese developers especially, for the late meetings every week. Thank you to all the contributors.

And thank you to you as well, Howard. Thank you very much. That was fantastic. All right, I hope you all enjoyed that. Next up, to give out the Super User Award for this summit, are Jared Baker and a good friend of mine who happens to be having a birthday today, Allison Price. Come on out.

Oh, yeah. Good job, man. For four years, we've been recognizing Super Users with the awards at the birth.
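As promised above, here is a minimal sketch of the kind of OpenVINO serving step described in the FPGA demo, written against the 2018-era inference engine Python API (IENetwork/IEPlugin, since superseded by IECore). The model file names, the sample image, and the HETERO:FPGA,CPU device string are assumptions to show the general shape, not the exact code used in the demo:

    # Rough sketch: run an emotion-recognition model through OpenVINO's
    # heterogeneous plugin so supported layers execute on the FPGA and the
    # rest fall back to the CPU. Model and image paths are placeholders.
    import cv2
    from openvino.inference_engine import IENetwork, IEPlugin   # 2018-era API

    net = IENetwork(model="emotions-recognition.xml",
                    weights="emotions-recognition.bin")
    input_blob = next(iter(net.inputs))
    n, c, h, w = net.inputs[input_blob].shape

    plugin = IEPlugin(device="HETERO:FPGA,CPU")   # offload to the Arria 10 where possible
    exec_net = plugin.load(network=net)

    frame = cv2.imread("face_crop.png")           # a single cropped face from the video
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape(n, c, h, w)

    result = exec_net.infer(inputs={input_blob: blob})
    print(result)                                 # per-emotion scores for this face

In a full pipeline the same pattern would repeat per video frame, with the face-detection and head-pose models chained ahead of the emotion model.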