Ladies and gentlemen, please welcome Senior Vice President of Strategy at Rackspace, Jim Curry.

Hey. So I'm really excited. I'm going to do a quick intro of a couple of people. We've got a topic we want to talk about that we have been investing heavily in since we got OpenStack started in 2010: training, both helping people learn about OpenStack, helping people get involved with the community, and helping them learn how to deploy it and use it. We're going to have Tony Campbell, who runs our training program and has been working in this area since the very beginning, come talk about some of the stuff we're doing in training and trying to help get the community up and running, not just here in Hong Kong but around the world. And I'm also excited to have Jonathan Proulx, who is a Senior Technical Architect for MIT's Computer Science and Artificial Intelligence Laboratory in the United States. He's a very smart guy. He's going to talk a lot about what they're doing with OpenStack and what they've been doing at MIT. He's also a great community member; he's a role model for what we like to have within the OpenStack community. He contributes heavily back to the documentation, and in fact was one of the co-authors of the OpenStack Operations Guide, so he puts a lot of time into contributing back in that way. So without saying a lot more, I'd like to invite Jonathan to the stage to talk about what they're doing with OpenStack at MIT. So, Jonathan.

Thanks, Jim, for that wonderful introduction. I'm Jonathan Proulx, Senior Technical Architect with MIT's Computer Science and Artificial Intelligence Lab. And I do have slides. Good. We're one of the largest independent research labs within MIT. We have a lot of people doing a lot of different things, and that's where we differ from a lot of the people you hear about at the other talks here today: we have a lot of people doing a very wide variety of things, and we need to be able to change what we're doing very quickly to meet our research needs.
So the size of our deployment is nothing like what you're hearing from folks like Rackspace and CERN. We have one really dense rack, 768 physical cores, and our initial deployment strategy was to get a cloud on this as quickly as we possibly could. We had researchers coming up on a deadline who needed a thousand virtual systems in order to test their research. We took Ubuntu and Puppet, which we use internally for our other systems, and we were able to turn around a deployment on this hardware in about a week using the community-supported Puppet modules and mostly the default configurations that you see in the documentation. We varied a little bit from that by using Nova Network in its HA multi-host setup. This was in July of 2012. Just over this summer we moved to Grizzly and went to Neutron networking. We did a very big rework of how we were doing networking behind this hardware so that the VMs plug directly into existing VLANs in our data centers. This is going to help us migrate some of our legacy workloads. We're looking forward to Havana, but since people are relying on the system much more heavily, we're going to have to delay that until the end of term, which is December, when utilization is a little bit lower than it is right now. Oh, forward, please. I'm going to talk mostly about what our users are doing on this hardware. If you want to know how we set it up and why we made the choices we did, I'd be happy to talk to you afterwards. As you can see, even without very much internal advertising of this resource, we have 38 different projects set up with 114 different users. It may actually be more than that, since once the projects are set up, the principal investigators can have more people join without any intervention by myself or my team. We've run a lot of hours on this, and I picked just six of these research projects to focus on, based on who used the most in the month before this talk. So the Alpha Group was the first group that we had on the cloud.
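As a brief aside, the VLAN rework described above — plugging VMs directly into existing data-center VLANs — is typically done with Neutron provider networks. A minimal sketch with the Grizzly-era Neutron CLI might look roughly like this; the network name, physical network label, VLAN ID, and subnet addresses below are illustrative assumptions, not values from the talk.

```shell
# Map a hypothetical existing data-center VLAN (ID 200 on the physical
# network labeled "physnet1") into Neutron as a shared provider network.
neutron net-create datacenter-vlan200 \
    --shared \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 200

# Attach a subnet so VMs booted on this network land directly on the
# existing VLAN; the CIDR and gateway here are placeholders.
neutron subnet-create datacenter-vlan200 10.20.0.0/24 \
    --name datacenter-vlan200-subnet \
    --gateway 10.20.0.1
```

With a setup like this, instances get addresses on the same L2 segment as legacy machines, which is what makes migrating existing workloads into the cloud straightforward.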
They are the ones who had the immediate need. They had been using public cloud resources, but they needed to do a lot more than they had funding for, and they also like to turn around their VMs on a very short cycle, much less than an hour, so public clouds that bill in one-hour minimums were really something that they couldn't use. They're working mostly on machine learning and knowledge frameworks. All these slides have URLs at the bottom if you want the really deep research details on what these folks are doing.

The Text Retrieval Conference's knowledge base acceleration competition is another one of our big users. They tend to run more steady state than the Alpha Group does, so they have a number of VMs that are running more or less continuously. MIT's team is the organizing team for the competition; there are a number of different groups across various universities. I believe they're all in the US, but I'm not certain of that. They have a large content stream of texts that they need to go through and apply their learning algorithms to, and they generate a knowledge base, which is something like Wikipedia. So they have a big stream, and they need to process it into something meaningful as output.

Ubik is a new research project that has been accepted for publication but isn't quite out yet, so if you want to watch for it, Daniel Sanchez, who is the principal investigator on this, has a website there for when it does come out. It's a hardware and software system dedicated to the problem of cloud systems with high latency sensitivity working alongside systems that are more throughput-based and not as sensitive to short-term fluctuations. So this is something that will make all of our lives easier once it is proven and worked out into a production rather than research setting: we can run interactive systems alongside batch processing systems and have them do the right thing on their own.
Our Networks and Mobile Systems group has a few different things going on in the cloud. They're doing machine learning algorithms and congestion control algorithms based on corpuses of mobile network data, trying to improve how the mobile networks for our phones and other devices work. One of the future ideas that I've been talking to them about is research on intelligent VM placement in clouds, so that virtual machines that are talking to each other can be moved closer together within the network to reduce contention within the data center.

The Learning and Intelligent Systems group is another one of our research groups, working in yet another area: artificially intelligent robots. One of the robots that they're a little bit famous for, and you can find videos of it on YouTube, is their cookie-baking robot. The researchers in this group are working on object recognition systems, so that vision systems like what's in this robot, or what might be in other applications, can identify the object that they're looking for. So they have a number of systems that sit in our cloud and run different recognition algorithms against the visual data from the robot.

Julia is a programming language that was developed in my lab at MIT. It's a very high-level language with explicit parallelism, so it can run across multiple cores on a single node or across multiple nodes in a cluster. It's now an MIT-licensed community project that you can install on your favorite operating system. The group that developed it is serving an IJulia cluster on our cloud, so they have a number of front-end systems and a number of worker nodes behind that. Students are using this for a number of mathematics courses at MIT now, and this is why I have to wait until the end of term before I can go to Havana. So we have a number of areas of future work that we're looking into.
Since we own this cloud platform, with the hardware in-house and complete control over it, we're able to use the infrastructure itself as a research platform. So groups like our networking systems people, who want to look at scheduling and traffic flows underneath the cloud, have something to work with. One of our challenges is providing a stable cloud environment for the people who are working on top of the cloud, within the cloud, while also providing researchers access to the underlying systems, using things like host aggregates or availability zones to give them sandboxes where they can actually start working on these lower-level components and thinking about how those can be improved. We're also moving some of our internet-facing applications in and consolidating our legacy virtualization environments onto the cloud, which is, you know, more of a standard thing that people do. It's not very exciting, but over the past year we've seen that in addition to being a very exciting research platform that lets us rapidly adapt to the changing needs of our researchers, it's been a very stable platform, so we're able to bring in our production workloads and deprecate these older virtualization systems and not have to support so many different environments. We have a number of researchers who are not able to make use of virtual systems because they need access to the very low-level registers of the actual real hardware. We have a large computer architecture department working with hardware; they want hardware. GPUs are another area where our researchers have needs that aren't really provided for in the default OpenStack world, so we're looking at what people are talking about with PCI pass-through, or bare-metal provisioning again, to provide GPUs within the cluster. We just heard a wonderful presentation about the things you can do with desktops in the cloud.
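To make the sandbox idea above concrete: host aggregates exposed as availability zones let an operator fence off a handful of hypervisors for infrastructure researchers while ordinary users keep landing in the default zone. A rough sketch with the era's nova CLI follows; the aggregate, zone, host, flavor, and image names are all hypothetical.

```shell
# Create a host aggregate and expose it as its own availability zone,
# then move a couple of hypervisors into it to form the sandbox.
nova aggregate-create research-sandbox research-az
nova aggregate-add-host research-sandbox compute-07
nova aggregate-add-host research-sandbox compute-08

# Regular users boot as usual into the default zone; researchers target
# the sandboxed hosts explicitly by availability zone.
nova boot experiment-vm \
    --flavor m1.small \
    --image ubuntu-12.04 \
    --availability-zone research-az
```

The nice property of this layout is that experiments on the sandboxed hypervisors (scheduler changes, traffic-flow instrumentation, and so on) can misbehave without destabilizing the hosts serving everyone else.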
Again, we don't have quite the scale, but we do have some ideas about putting these desktops into the cloud so that people can have access to more computing resources. If you have a laptop with maybe two or four cores, we can provide 12- or 24-core workstations within our cloud, which will put a lot more computing power behind the students' resources. One of the more interesting ideas is one cloud per student, which would provide quota sets for individuals that aren't tied to any specific research goal. In research, and in any type of development environment, providing a relatively low-risk area for people to play around in, just their own sandbox, can produce some really amazing creative ideas. Being able to provide this blue-sky, green-field environment to our students, to just do whatever they want and see what comes out of it, is definitely one of the things that I'm looking forward to seeing. So that's what we're doing at MIT these days, and a little bit about where we're thinking about going. I'd be happy to talk to anyone afterwards about the deeper details; this was pretty high level. I think we're going to bring up Tony now to talk about what Rackspace is doing with training in the academic space. Tony?

Thank you, Jonathan. All right. Really excited about what's happening at MIT. Good afternoon, everyone. Are you guys alive? Good afternoon, everyone! All right, there you are. It's good to see everybody. My name is Tony Campbell, Director of Training and Certification at Rackspace. I've had the privilege over the last year of training people all over the world, including a group of students at MIT. Last January, during MIT's IAP, that's the Independent Activities Period, we were able to go out to MIT and train up to 50 students on OpenStack. We introduced those students to this awesome technology that this community is building, and I'm really excited about the things that they are using OpenStack to do.
How about it for that cookie-baking robot? I hear there's another robot that makes coffee. So there are some real cool things happening. But in addition to that, it's awesome just to see the impact that OpenStack is having all around the world. OpenStack is enabling academic research. Through OpenStack, we're driving efficiency in research labs all over the world. OpenStack provides low-cost, high-performance compute resources that schools, universities, and research labs can use to extend their research capabilities. We are equipping computer science engineers with a free and open cloud operating system. That is all work that this community is doing, and I think for that, we ought to give ourselves a round of applause.

So how do we accelerate this? How do we get more universities using OpenStack to do their research? How do we get more students introduced to and involved in OpenStack? Well, that's one of the things that Rackspace is really trying to help out with and be a leader in the community on. One of the things that we've introduced over the last year is what we call the Rackspace Academic Seminar for OpenStack. We will bring our training team out to any university that qualifies and, at no charge, deliver a four-night seminar. During that seminar, students get access to our master instructors, and those instructors teach them about OpenStack, everywhere from installation to operations to architecture. We walk through networking, how to spin up VMs, spin them down, attach storage, everything you need to know to leverage the OpenStack platform. We're able to introduce students to this new and exciting technology. So it's a four-night seminar. It's on campus, so we bring the seminar to the students' location. We're on campus for four nights, three hours per night, and we give exposure to all the different OpenStack projects. The first one was at MIT.
Right now, as we speak, we're at a university in Texas back in the United States, and there are several more universities that we'll be visiting this year. If this is something that you are interested in, stay tuned; I'll have a link for you where you can sign up. Some of the other universities that we've been to include the University of the Incarnate Word and Oregon State University. We've also gone out and done training in Australia for the NeCTAR project, and at our hometown university in San Antonio, Texas, the University of Texas at San Antonio. So for those of you who want to join us in this mission, or who may want to have the Rackspace training team come out and provide the seminar at your university or research lab, you can do one of the following things. You can join the OpenStack high-performance computing mailing list for more information about HPC in the community. We also have an OpenStack academic initiative; it's been kind of quiet lately, but we'd like to see that pick up again, so we encourage everybody to join that list and that group and begin to participate there. And finally, if you want to bring Rackspace out to do an academic seminar at your university, you can look us up at training.rackspace.com/academia. In addition to our academic seminar, we have several classes that are designed to get you up to speed on OpenStack quickly. We have an OpenStack Fundamentals course, a Neutron Networking course, a Hadoop on OpenStack course, and a Building Cloudy Apps course. We're also pleased to let you know that we are beginning an on-demand training program: for those who may not be able to travel to one of our courses, you will be able to take our courses on demand in the comfort of your own living room. If you're interested in any of that, you can find information at training.rackspace.com, or come see us at our booth over in the Expo Center.
We're real excited about what's happening in OpenStack, in particular in the academic community. We really believe in OpenStack and the magic of this community and everything that is coming out of it. If you know students, faculty members, or researchers who are not already using OpenStack, tell them about what we're doing, point them to the website, and let's come together and change the world. Thank you very much.