Okay. Hey folks, good afternoon. Thanks for joining us, and thanks for all being ARM users too. Whether you know it or not, you've probably interacted with multiple ARM devices today, from tablets to phones to screens in cars. We've had a really good show so far. People have been coming over to our booth, which we've located conveniently for you right there at B29, and the conversations have ranged from "Well, this is kind of cool, ARM's here, we've been expecting you" to "Wait, ARM at OpenStack? Why are you guys here?" So we've had some great dialogue, and we want to give you a little insight into what we're doing with OpenStack and why we're here. I'll open with a couple of intro slides, and then Martin, who is joining me from Linaro, will go through some more specifics on the engineering front: what we've done to enable OpenStack, and how we're actually deploying and using OpenStack to power a developer cluster within Linaro.

So with that, let's jump in. ARM gets used as a term for multiple things. We're a company; as of last month we were acquired by SoftBank. ARM is also an architecture, an instruction set, just like the other instruction sets out there, and it often gets referred to in that respect. And then ARM is an ecosystem as well, which is how we come to market as a company: we deliver intellectual property to companies who develop chips, boards, et cetera. So it's really important for us that we operate in an ecosystem-and-partnership way, because we're successful through our partners. The architecture itself has evolved over the last 20 to 25 years. A 64-bit version of the architecture was introduced a couple of years ago, and that's really where it became applicable to a broader market encompassing networking, servers, and more infrastructure: higher-end solutions with many cores and higher performance.
We boosted the floating-point units, things like that. And recently we announced some additional extensions that will be useful for the HPC market, called the Scalable Vector Extension, which allows flexible-width vectors.

Our vision for the use of ARM is something we call the Intelligent Flexible Cloud. The reason for it is that we believe you're going to have compute in the right location where it's going to be used, with different combinations of compute, storage, and networking depending on whether it's in the data center, throughout the network, or all the way out at the edge and the base stations. ARM as a technology is already pervasive in the network. We're starting to see that ramp in the data center, and that end-to-end solution is going to be really important, because people need to have the compute, and be able to make those decisions, closer to where the data is being generated.

I mentioned we operate as an ecosystem: no one will build a great business trying to do this on their own. That's fundamental to the philosophy of how we operate, and it maps well to how open source operates. We really believe in this; we're successful when we make our partners successful.

I also wanted to update folks on a couple of recent OpenStack and ecosystem-related announcements. Last Monday, Canonical, the creators and sponsors of Ubuntu, announced that in addition to the Linux distribution they've had for ARM for the last couple of years, they now offer OpenStack and Ceph as a commercially supported solution. I won't go into details; you can read about that at your leisure. And just last night, actually; this week in Santa Clara is ARM TechCon, our annual technology conference.
And the SUSE folks announced that they're now offering SUSE Linux Enterprise and SUSE Enterprise Storage for ARM as well. Red Hat, the other major Linux distribution, has a developer preview of Red Hat Enterprise Linux too, making it available as an early-access solution for people to kick the tires and experiment with ARM servers.

A lot of how we operate and enable open source is through an organization called Linaro. Linaro is a not-for-profit engineering company that ARM and several of our partners established five years ago, and it's where we combine resources to work on common engineering. It's an engineering company; that's all they do. Yes, there's a bit of project management and things like that, but their whole goal in life is to contribute open source software upstream on behalf of ARM and its partners. And with that, I'm going to hand over to Martin from Linaro, who's going to take us through more details on the specifics of the open source work we've done.

All right. Thank you, Jeff. So, how many of you are familiar with Linaro? One or two? A couple? Good, that's great. Linaro has different segment groups. The first one that started was the Mobile Group; then we have Digital Home, and Networking; and then there's the group I'm responsible for, the Enterprise Group, which really focuses on what we're doing here. So the Enterprise Group: what do we do, and who are the members we have today? ARM, obviously, plus Alibaba, Facebook, Cavium, Red Hat, Qualcomm, and HPE. That's a good mix of consumers of the content, companies building the SoCs, and companies building the servers, so a nice spread. And what do we do inside that group? As Jeff pointed out, it's a common area to do development.
ARM servers had a lot of work that needed to be done, for example to get UEFI enabled first, or ACPI for the Linux kernel. This is something we do together; it's shared, and very much focused on the boot architecture. We also do a lot of ARM optimization, and it doesn't matter where: we've done it in Ceph, we've done it for OpenJDK, we do it in Hadoop. Across the board, wherever there's x86 assembly code, we go in and do the same thing for ARM, or we rewrite it in some way, shape, or form. There's a lot of different stuff out there. We do a lot around software-defined infrastructure, so OpenStack, Ceph, DPDK, because this is where the industry is going, right? You all know; you're all here attending this event.

One of the other things we provide as a service is something we call the Developer Cloud. It's very simple: a developer cloud by developers, for developers, so that if you're interested in starting to develop, port, or test for the ARM64 server platform, you have a place to do it. Because today, if you wanted to go get a board and do some work, it's a little challenging, right? As the new systems are starting to come in, how do you actually get early access? So that's what we do.

All right, so what are we actually doing with OpenStack inside Linaro? Remember that the engineers I have on the team are not Linaro employees; they're assignees provided by the member companies who belong, the Caviums, HPEs, and so on. So I've got five employees, but we have 50 engineers working on it. And what are they actually doing? Nothing really different. We don't have special sauce, we're not trying to make anything competitive, we're not doing value-add. We're taking the standard OpenStack that you get, and today that's Newton, and we're building it on ARM.
So that's a lot of patch and code-fix work to get things going. Sometimes you might be tempted to take a little shortcut when you have to adjust from x86 over to ARM. We don't do that. What we'll do is actually write the patches, submit them upstream, and get them done, so the community, any of the distro vendors, and any of the hardware vendors can just consume OpenStack the right way.

So how do we do this inside Linaro? We chose two community operating systems. We're very agnostic: there's no priority on the SoCs and no priority on the distributions, so we picked two that are pretty common, Debian, which some of you might have heard of, and CentOS, which some of you might have heard of too. We do the OpenStack packaging for ARM64. The links in here, and I'll get these slides out to anybody who wants them, go to our engineering Jira, and you can go look at the work that we're doing. For 16.06 it was Mitaka; we did a release, so you can go pull those packages, and if you have ARM servers, you can install it. Newton will be our 16.12, so 2016, month 12 for December. We've already built it, and it's running today, but we'll release it with 16.12 as either a CentOS or Debian build. In addition to what we do for OpenStack, there's also Ceph that we've brought in. Our list of available cloud images is growing: we've got CirrOS, which everybody has, we've got Debian, we've got CentOS, we now have Ubuntu, we also have CoreOS, and SUSE will be in there relatively soon. Again, we are not choosing a distribution; we're agnostic about it, so the more cloud images we can get in there, the happier we are. All right, so some simple things.
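The Linaro release numbering mentioned above (16.06 shipped Mitaka, 16.12 ships Newton) is just a year/month code, and can be decoded mechanically. Here's a small sketch; the helper name and the mapping table are illustrative, not anything Linaro actually ships.

```python
# Decode a Linaro-style YYMM release tag into (year, month).
# The tag format and the release mapping below are illustrative,
# based on the two examples from the talk: 1606 -> Mitaka, 1612 -> Newton.

def parse_release_tag(tag):
    """Turn a tag like '1612' into (2016, 12)."""
    year = 2000 + int(tag[:2])
    month = int(tag[2:])
    if not 1 <= month <= 12:
        raise ValueError("bad month in tag: %s" % tag)
    return year, month

# Illustrative mapping of the tags mentioned in the talk to
# the OpenStack releases they carry.
OPENSTACK_RELEASE = {"1606": "mitaka", "1612": "newton"}

year, month = parse_release_tag("1612")
print(year, month, OPENSTACK_RELEASE["1612"])  # 2016 12 newton
```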
Nova: I'm just going to talk a little bit about Nova and Neutron. As you know, with OpenStack, how many projects are there now? Almost 200. We focus on the core definition of OpenStack, DefCore; that's what we do. For Nova, we've done a lot of work: just getting things like launching an instance, shutting down an instance, guest status, image storage support, and booting from UEFI. All of this work had to be done, someone had to do it, and so Linaro did a lot of it and is pushing it all upstream. Neutron is another area where we've done a lot of work, and there are things we're still doing. The core services and agents are validated. We validated the OpenDaylight plug-in. We've got a lot of work going on now with OVS plus DPDK; that's important to us. I've got developers working on DPDK for ARM so that we get the same benefits you would get on a different architecture.

Ceph, for us, is very important: the Developer Cloud that we run, which I'll talk about in a minute, is all Ceph on the back end, so it's a big thing for us. We're doing a lot of the block storage and object storage. We have not gotten into CephFS yet; that is coming, and we'll start working on it in the January time frame. The Ceph RBD integration with OpenStack is finished; the object storage integration with OpenStack is ongoing, and there are some tough things we have to work out. We're working on it, and we're doing it publicly.
It's all in the open, so you can all go look at it. And now we're really looking at Ceph performance testing, and we're going to do that with the Ceph community, working very closely and, again, publishing all of the information so all of you can see it.

This slide is for those of you who are really interested in the patches. We just listed some of them out; you can grab this and take a look, but this is the work we're doing in the different areas. We do have Ironic basically complete at this point. The Nova work is going well, Neutron's going well, and there are still things we have to do. We know, for example, that we can run containers on Newton. We've played with it, but we haven't put real CI and QA into it yet, so we're holding off on saying it's stable and capable for everyone to use.

All right, so why all of this? What are we trying to do for an end game? We have this thing called the Linaro Developer Cloud. We've got one location in Austin, another in Cambridge, and we're setting up another one in Shanghai, in China. You can go sign up for the Developer Cloud and get an account: you get some virtual CPUs, some memory, some storage, and you can do whatever you want with it. So you can get, say, eight cores and 24 GB of memory, and you can carve that up however you want. It's just the standard Horizon dashboard, and you can go in and start developing software. We want companies or individuals who are developing software to use this. It's not for you to run your blog or something on; it's really meant for development. We run out of capacity: every time we add another 20 or 30 servers, three days later we're out. We just can't keep up. So make sure what you're doing is really going to contribute.
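To make "carve that up however you want" concrete, here's a small sketch of checking whether a set of requested instances fits inside a per-user allotment, using the example numbers from above (8 vCPUs, 24 GB). The flavor names and sizes are made up for illustration; they are not the actual Developer Cloud flavors.

```python
# Sketch: fit instance requests into a per-user quota, using the
# example allotment from the talk (8 vCPUs, 24 GB RAM).
# The flavor definitions below are hypothetical.

QUOTA = {"vcpus": 8, "ram_gb": 24}

FLAVORS = {
    "tiny":   {"vcpus": 1, "ram_gb": 2},
    "small":  {"vcpus": 2, "ram_gb": 4},
    "medium": {"vcpus": 4, "ram_gb": 8},
}

def fits_in_quota(requested, quota=QUOTA, flavors=FLAVORS):
    """Return True if the requested flavor names fit within the quota."""
    used_vcpus = sum(flavors[name]["vcpus"] for name in requested)
    used_ram = sum(flavors[name]["ram_gb"] for name in requested)
    return used_vcpus <= quota["vcpus"] and used_ram <= quota["ram_gb"]

print(fits_in_quota(["medium", "small", "small"]))  # 8 vCPUs, 16 GB -> True
print(fits_in_quota(["medium", "medium", "tiny"]))  # 9 vCPUs -> False
```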
We publish a reference architecture; it's out on Linaro's Git. We chose Ansible because we felt it was one of the most common tools out there. Certainly we could have done Chef or Puppet or any of the others, but we chose Ansible because we think it's very easy to use. So we have this reference architecture, we have documentation around it, and it's an opinionated view of how to deploy OpenStack on ARM servers. We're not saying this is gospel; it's just our opinion of what we think, what we know works, and how we do it for testing. That's it. We'd love to get feedback on it, so if you pull it and find things, have comments, or find bugs, anything, we want to know. Again, Linaro is a company that just does software development. We don't have a product, we don't sell anything, we don't have any customers. It's a pretty good job, I've got to tell you.

All right. As some of you saw in the keynote today, and I do need to give a shout-out to whoever took that photo, I forgot to add the credit, but I'll get that added: this came out on Twitter while we were doing the keynote. One of the fun things is that we were able to successfully go through interop, which is a big deal for us, right? No one has done this for ARM yet. We were able to go through it, so it's fun to see a command line there where you run `uname -a` and it says aarch64. That's pretty good. A lot of work by everyone to meet the timetable for this summit, so we're happy about it. So what are we doing? We're doing interop for OpenStack. We want ARM to be a tier-one architecture for everybody to consider using; that's why we're going through the effort. We belong to the Interop Working Group, we go through the RefStack project, we have cloud users, and we've got workloads.
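The `uname -a` check mentioned above can also be scripted, for example as a guard in a build or test job that should only run on ARM64 hosts. Here's a small sketch; the set of accepted machine strings is an assumption based on common `uname -m` output (`aarch64` on Linux, `arm64` on macOS and some BSDs), not anything Linaro publishes.

```python
import platform

# Machine strings commonly reported by `uname -m` for 64-bit ARM:
# "aarch64" on Linux, "arm64" on macOS and some BSDs (assumed list).
ARM64_MACHINES = {"aarch64", "arm64"}

def is_arm64(machine=None):
    """Return True if the given (or current) machine string is 64-bit ARM."""
    if machine is None:
        machine = platform.machine()  # same value as `uname -m`
    return machine.lower() in ARM64_MACHINES

print(is_arm64("aarch64"))  # True
print(is_arm64("x86_64"))   # False
```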
ARM has provided us with dedicated hardware, so we're actually going to get that strapped into the whole third-party interop testing as well, so that you can consider ARM for some of your deployments, either greenfield, or brownfield where you bring in some ARM compute at the same time.

All right, so this is the landing page. If you want to go to the Developer Cloud, it's a very trendy linaro.cloud address: no www in front of it, no .com at the end. Again, it's for anybody to sign up to. This is who has signed up so far; the data is about a month old. We've had 370 requests for access, valid requests from developers who actually want to use it. That pie chart is kind of interesting because of how it's distributed between North America, Europe, and Asia: it's pretty much a third, a third, a third. All the major Linux vendors have access and actually have instances on it. We've got a lot of interest from mainland China, which is why we're setting up the Shanghai Developer Cloud. Lots of universities. Interesting folks like Airbus, IBM, AT&T. We've got both IBM and Intel wanting access; who knows why, but I'm happy about it. And then some of the folks actually doing things on it: the Linux Foundation uses it for Hadoop, Red Hat is using it for Ceph, we've got MongoDB using it, Debian is using it to build their OpenStack packages, CERN for HPC, and now OpenHPC through the Linux Foundation as well. So that's a mix of who we have.

All right. And then, just to prove it to you, because you always need a live demo, right? If you look at this, you'll see it's just vanilla Newton Horizon. We themed it a little bit; come on, you always have to do a bit of that. But it's the real deal, and it's running. And it's not that ARM servers are just running the compute; it's all aspects. It's the Neutron controllers.
It's everything: Glance, whatever you want, it's all running out here. So let me see if I can actually launch one. Do you think it'll work, or do you think I'll fail? Show of hands for fail? One guy believes in me. All right: Launch Instance. Let's call it "demo." And I've got to go through the whole thing. Where's the source? Is that one the vanilla image? Yeah, that's boring; everybody knows that one runs. What else do I need? I've got to walk through all of this, so I'll just launch the instance and let it go. Anyway, we've got about 38 seconds, but this is something running on ARM servers, and it's CoreOS; we thought it would be kind of cool to do one of those. Next time we'll launch a Snappy system. But the systems are there. We'd love for you to get access to them, we'd love to get cloud images that you build and put in there on your own, and we'd love any kind of feedback. Go to Ansible and pull the playbooks, go to Git and pull the reference architecture, do whatever you want, and let's just try to make OpenStack on ARM better and better. All right? Great.