All right, we're going to get started here. We've got a huge audience, so hopefully I can reach all of you. We'll try to keep this as quick and as exciting as possible. We're going to talk a little bit about AMD SeaMicro fabric computing and what it means to OpenStack.

I'm sure most of you know about AMD. SeaMicro is a startup that was acquired by AMD a year ago, and what we do is build servers. We've reinvented the server to address power and space, and we've also innovated on fabric computing, where we've integrated servers, storage, and networking in a unified platform. Now you might ask: you're part of AMD and you're building servers, so what are you selling? Well, we still sell AMD Opteron processors, and we still sell Intel processors. Our innovation is in the fabric computing and in the power and density.

So what have we done? We looked at the traditional server and asked what we could do to innovate on it, and how to consolidate it into something that really matters for the scale-out cloud environment. We collapsed it and simplified it to bring down the power and space, and this is our server. We innovated with Intel Atoms, and we also sell higher-power servers with AMD Opterons and Intel Xeons.

What do we do beyond this? Our server platform does a lot more, and this is what fabric computing is: we integrate compute, shared storage, and networking in the same platform, all tied together with a high-performance supercomputing fabric. This is made possible by our own ASIC, which is a key part of our intellectual property. So let's go over the system real quick to give you an idea of what it has.
On the server and compute side, in a 10RU system you can have either 64 AMD Opteron servers, 64 Intel Xeon servers, or 256 Intel Atom servers. On our roadmap, we're also going to introduce ARM processors. From a density point of view, that lets you put 512 CPU cores and four terabytes of DRAM in a single 10RU system. From a switching point of view, it gives you integrated data center switching: a data center switch in a box with 160 gigabits per second of uplink bandwidth. And here is where we really differentiate the system. You don't see this in most blade systems, and it's what makes us a fabric computing system: our integrated shared storage. It's really a SAN in a box. You can provision storage on demand to any server within the system, and it supports up to 5.4 petabytes of storage in a single platform.

So really what we've done is take a traditional couple of racks, sixty 1RU servers with their disks, their top-of-rack switches, and their storage, and collapse that into a single fabric compute system.

Let's talk real quickly about what this actually means to the OpenStack community and where it's interesting. I'll start with object storage, which in OpenStack terms is OpenStack Swift. If you were to build an OpenStack Swift deployment today, how would you do it? One way is to take COTS off-the-shelf servers and switches. But that means you're going to have a sprawling pool of servers, switches, and appliances, hundreds of cables to plug, and inefficient space, power, and cooling. It's a complex, heterogeneous mix of equipment that you need to manage. It's cheap to buy, but it's going to cost you a lot in OpEx to set up and run.
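As a quick sanity check on the density figures quoted above (512 cores and four terabytes of DRAM per 10RU system), here's the per-rack arithmetic. The 42U rack height is my assumption; the rest are the numbers from the talk:

```python
# Per-rack density from the talk's per-system figures.
# Assumption: a standard 42U rack; all other numbers come from the talk.

RACK_UNITS = 42            # assumed full-height rack
SYSTEM_RU = 10             # one chassis is 10RU
CORES_PER_SYSTEM = 512
DRAM_TB_PER_SYSTEM = 4

systems_per_rack = RACK_UNITS // SYSTEM_RU   # whole chassis that fit per rack
cores_per_rack = systems_per_rack * CORES_PER_SYSTEM
dram_tb_per_rack = systems_per_rack * DRAM_TB_PER_SYSTEM

print(systems_per_rack, cores_per_rack, dram_tb_per_rack)  # 4 2048 16
```

Four chassis per rack works out to 2,048 cores and 16 TB of DRAM in a single rack footprint.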
The other direction you could take is an integrated solution. There are vendors out there that provide that. It's delivered as an integrated solution, but it's going to be very expensive, and object storage, as we know, is a very cost-sensitive market. So how can we address this with fabric computing? In our system we're able to collapse OpenStack Swift into a single managed platform. It gives you the lowest cost per gigabyte, it's massively scalable, it's easy to manage, and it's built from off-the-shelf components. You can run the same OS you run today; nothing special is needed.

So what is SeaMicro and AMD able to deliver here? If we take a two-rack system, we've got the SeaMicro SM15000 with storage enclosures that add the capacity you need for your object store, all managed as a single system. We're able to deliver five petabytes in a single SM15000, or 2.5 petabytes per rack in terms of density; 64 servers to run your applications, your Swift nodes, your object servers, and your proxy servers, with some to spare for other elements of your application; a choice of Opteron or Intel Xeon processors; a 1.28 terabit fabric tying all these elements together; and 160 gigabits of uplink bandwidth to connect to other services or uplink to the internet. And it's a single system to manage, which is the real power of this solution.

Now let's switch over to the rest of OpenStack and look at infrastructure as a service. We were just talking about Swift storage; now let's expand across Nova compute, Quantum networking, and storage. If you're going to set up this infrastructure yourself, look at all the elements that are needed: the servers running your hypervisors, an administrative network, your cloud management servers, security, monitoring, and then your tenant-facing networks.
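To make the object-server and proxy-server split concrete: Swift places objects on storage nodes via a consistent-hashing ring. The sketch below is a toy model of that placement idea, not Swift's actual ring builder (in a real deployment you'd build the ring with swift-ring-builder); the partition count, replica count, and device names are all illustrative:

```python
import hashlib

# Toy model of Swift-style object placement, assuming:
# part_power=4 (16 partitions), 3 replicas, 4 servers with 2 disks each.
PART_POWER = 4
REPLICAS = 3
devices = [f"server{i}/disk{j}" for i in range(4) for j in range(2)]

def partition(name: str) -> int:
    # Hash the object name; the top PART_POWER bits pick the partition.
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h >> (128 - PART_POWER)

# Assign each partition to REPLICAS distinct devices (round-robin here;
# real Swift also balances by weight, zone, and region).
ring = {p: [devices[(p * REPLICAS + r) % len(devices)] for r in range(REPLICAS)]
        for p in range(2 ** PART_POWER)}

def nodes_for(obj: str):
    """Return the devices holding all replicas of this object."""
    return ring[partition(obj)]

print(nodes_for("AUTH_demo/photos/cat.jpg"))
```

The proxy servers compute exactly this kind of lookup on every request, which is why they need network reach to every object server; in the SM15000 arrangement described above, both roles sit inside one fabric.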
All this infrastructure means bringing a lot of pieces together, and it can still be quite complex and time-consuming to set up. Our philosophy at SeaMicro, and I think the real value we add with this converged platform, is that we're able to take all of that and deliver it in a single platform: all of OpenStack and all of its elements.

Let's look at the components. For Nova compute, we give you compute density, energy efficiency, and simplified management. For Swift and Cinder, we give you a shared storage architecture: all the storage in the system can be provisioned, like you would with a SAN, to any of the servers in the box, but collapsed into a single platform. You get a lot of storage density, and you can scale your storage independently from your compute. Typically when you buy rack-mount servers, you pick a server of a certain size with a fixed amount of disk, and you might decide later that that's not the right ratio of storage to compute. On our platform, you can very easily add additional storage without changing the form factors of your servers.

And lastly, networking. We essentially have an integrated Layer 2 data center switch in the system, tied together with a 1.28 terabit per second supercomputing fabric. Each of the servers has 10 gigabits per second of aggregate networking, and the external uplink is 160 gigabits per second. It's essentially an integrated top-of-rack switch in a system.

More interestingly, where can we take OpenStack with our system? With this converged system, we bring together all the elements that OpenStack wants to manage: the compute, the storage, and the networking. This is a little bit of roadmap, and I think it's where the system is really compelling. Number one, compute. We can do VM provisioning today. But when you have a 10RU system with 256 Atom servers or ARM servers, do you really want to put a hypervisor on that?
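The roadmap idea here, one Nova-style entry point that provisions either a VM or a physical node, can be sketched as a simple driver dispatch. The classes and names below are hypothetical illustrations, not SeaMicro's or Nova's actual code:

```python
# Sketch: one "boot" call, two back ends. The caller uses the same API
# whether the instance lands on a hypervisor or on a bare-metal node.
# Driver names and messages are invented for illustration.

class VirtDriver:
    def spawn(self, name):
        return f"VM {name} started under hypervisor"

class BareMetalDriver:
    def spawn(self, name):
        return f"physical node {name} powered on via fabric"

DRIVERS = {"vm": VirtDriver(), "baremetal": BareMetalDriver()}

def boot(name: str, flavor: str):
    # Same entry point either way; only the flavor picks the driver.
    return DRIVERS[flavor].spawn(name)

print(boot("web-1", "vm"))
print(boot("atom-42", "baremetal"))
```

The point of the sketch is the shape of the interface: density-oriented Atom or ARM nodes can skip the hypervisor entirely while tenants keep the provisioning workflow they already know.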
You could use the same Nova API to provision those hardware servers as well. We have a converged management stack, and on that stack we can also provide the Nova API, so you can provision physical servers in the same way you used to provision VMs. This is something we're working on.

The second piece is Swift and Cinder. With the shared storage on our management stack, we can also expose the Cinder API, and instead of using our own management system, you can use the Cinder API to provision storage for the complete solution. For Swift, with our integrated storage we can also integrate the Swift object nodes. You have object servers and you have proxy servers. The object servers can now be integrated into our storage system, so that instead of provisioning separate standalone servers for your object nodes with direct-attached disks, our shared storage system with its attached compute can be your Swift object nodes. That gives you a higher degree of integration, and you'll use less compute to provide the solution.

And lastly, Quantum and SDN integration. We have a Layer 2 switch, we provide the switching across our fabric using our ASIC, and we also have a 100 gigabit per second NPU on each of the networking cards in the system. Through this, we're going to be able to provide innovations in both your virtual and your physical networking.

The real key value is that it ties all these pieces together. In a single platform you have your Nova API, your Cinder API, and your Quantum API, so it really gives you OpenStack in a box. And you have a lot of bandwidth when you want to start scaling out and connecting multiple systems together. So I thought I'd leave it short and sweet; we've got a couple of minutes if anybody has any questions. Thank you very much.