Hey, thanks for that introduction, Francois. Can everybody hear me okay? Is this mic loud enough? Yeah, okay, cool.

So, Steve and I are tech writers. We work on a product called Red Hat Enterprise Virtualization, which is a KVM-based virtualization system, but what Red Hat is now starting to focus on is expanding out of just infrastructure-as-a-service virtualization into a more comprehensive suite of cloud products. As part of that effort, we're currently working on Deltacloud. (This would never happen on Deltacloud, it's really stable. Ah, okay, cool, there we go.)

So this is the contents of our talk. I'm going to start with an introduction to Deltacloud and go over the basic principles and goals of the project, and then Steve's going to talk about the implementation in detail and go through a live demo.

I always like to introduce technology in terms of solving a problem; there's no point having technology for technology's sake. So what problem does this solve? The problem we have, or the emerging problem, is that cloud environments are heterogeneous. People have a mixture of internal and external clouds. When you talk about things like cloud bursting, the idea is that you have an internal cloud and, at peak load, you burst your workload out to Amazon or somewhere. Well, your cloud is not Amazon's cloud; these are heterogeneous environments. You also need to be able to deploy workloads to different cloud environments. There was an interesting scenario in the last talk: you might have disparate dev and ops environments, so you could be using one cloud for dev and another cloud for ops, and you need to be able to move workloads between them.

So, in terms of solving that problem, we want to apply some principles: it should be free as in freedom, and it should be based upon open standards.
The idea here is that if we as a vendor try to come into this and say "we own this problem, we own the solution to this problem, and you have to do it using our piece, which is proprietary and closed", then it just leads to vendor lock-in. One of the big concerns that people have with the cloud is that it's going to lead to vendor lock-in: you won't just have something on the cloud, you'll have something on vendor X's cloud, and you'll be stuck there. So we want to design against that proactively. We want to design something where people choose our solution because it is better, it is cheaper, and it has better support, rather than because they've been forced into it and then, five or ten years later, they hate us.

So how do we do that? Deltacloud provides one unified API for all of the basic lifecycle activities of an infrastructure-as-a-service cloud. This single common API is published as a REST API, and there are language bindings for all different languages, so people can develop against a single common API: they can develop scripts, GUIs, whatever tools you want to use for management of a cloud. Then each of the individual cloud implementations has a driver which performs the translation between the Deltacloud API and that specific implementation. Currently we have drivers for Amazon EC2, for Red Hat Enterprise Virtualization, and for a whole series of other cloud providers.

We also have our own reference GUI, which is called Aeolus Conductor. So this is a reference,
I guess, a consumer of the API, which turns it into a web-based GUI. So, the components: you have Aeolus Conductor, which is the reference GUI, with Deltacloud inside it, and then Deltacloud speaking to all of the different cloud engines. You also have a couple of other components, the Image Factory and the Image Warehouse. What am I spinning up on the cloud? Well, you need to be able to produce an image that you're going to spin up, and you need to be able to keep a warehouse of those images. We'll go over those in more detail. Steve?

Hey, how are you guys? Hopefully the mic is close enough. So I'm going to talk about the core and the implementation, and specifically the Conductor part of the Aeolus project, but I'll touch on the other components as well, just to explain what they are.

As Dave was saying, the Deltacloud core is a REST API. It's implemented in Ruby and makes use of a DSL as well; that's the core and the drivers. But obviously, being a published REST API, you can have bindings in any language you want, and there are example bindings in C, for example. It's also self-documenting, and you can go and write bindings for Java or whatever else as well.

When we say there's a driver for each cloud provider that's supported, we mean there's a driver which runs on your server, on ports 3001 onwards basically, and each driver is just a Ruby class, or an implementation of a Ruby class, which provides the code for talking to that specific provider and implements all of the common classes and functions. It's got backward compatibility across API versions, which has recently been confirmed, so if you write against one version of the API, those functions will be preserved in future versions, although obviously we may extend it and add new functionality as new things become available.

So, the core concepts; each of these is effectively a class within the API. We have hardware profiles, which, as you'd expect, pretty much define the memory and CPU allocations that
the VMs can run on. We have realms, and the concept of a realm differs depending on the provider; for EC2, for example, they're more obvious than for others, so you've got US West, US East, Asia Pacific, etc., depending on where your account is set up. An image is basically your VM; it's not necessarily currently running, but it's the image that you may put on any cloud you want at some time. And then there are instances and instance states: if you have an instance running on EC2 or on RHEV, it can be started, stopped, whatever. Storage and networking are concepts that haven't been defined in the API at the moment, but they're being worked on now.

So, provider support: that's the current list we've got; obviously vCloud and Eucalyptus are still works in progress. Basically the idea is to support as many of the public clouds as we can, and any private cloud implementations as well, to support what David was talking about: the ability to move your workloads around wherever you need to as time passes.

For compute providers, we can do all the things you'd expect with an instance: create, start, stop, reboot, and destroy. And then, for each of the different objects, like hardware profiles or realms, you can call without parameters to get a list, or with a parameter to get the details of a specific object.

As I was saying, storage and networking is a part of the API that's now expanding. It started off with Amazon S3 support and support for Cloud Files, and there's also heavy work in that area to get the other major providers supported too. Similarly, there are the actions you'd expect on storage: being able to create, update, and delete both blob containers and individual attributes within the blobs.

So, as I was saying, Aeolus is an umbrella project for a number of cloud-related projects. The one we'll be demoing today is the Conductor, which, as David was saying, is a UI, or basically should be treated as a proof of concept, really, of how we expect all of this to
work in the future. Some of the other components supported are Oz, the Image Factory, the Image Warehouse, and Audrey, and the way they fit together is like this, effectively. If you think of the Aeolus project, the Conductor is the larger box, and the Image Factory, Oz, the Image Warehouse, and Audrey are all things that interface with the cloud provider behind the scenes to make it all work. But effectively, if you only need one of those components for your particular business need, you can just take it out, and they work quite fine separately as well. The idea is that the Image Factory allows for the building of images, the Image Warehouse allows you to store them in an intermediate data store before you push them to a cloud provider, and Audrey provides for runtime configuration of instances once they're up and running in the cloud.

So how do you get it? The demonstration I'll be doing will be on a Fedora 13 VM, but there are RPMs for Fedora 13 and 14 as well as RHEL. It's expected to be included around about release 16 of Fedora at this stage, so it's not going to make the schedule for 15, which is closing very soon.
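The REST layout Steve described earlier (call a collection with no parameters for a list, with an ID for details of one object, and with an extra path segment for a lifecycle action) maps onto URL patterns roughly like these. This is a minimal sketch: the host and port are taken from the demo setup, and the exact routes should be checked against the Deltacloud API documentation rather than treated as authoritative.

```ruby
# Rough sketch of the URL patterns a Deltacloud client hits. The collection
# names (instances, realms, hardware_profiles, ...) match the concepts from
# the talk; the exact routes and port here are illustrative only.
BASE = "http://localhost:3002/api"

# GET with no parameters: list every object in a collection
def collection_url(kind)
  "#{BASE}/#{kind}"
end

# GET with an ID: details of one specific object
def detail_url(kind, id)
  "#{BASE}/#{kind}/#{id}"
end

# POST to an action path: lifecycle operations such as start/stop/reboot
def action_url(kind, id, action)
  "#{BASE}/#{kind}/#{id}/#{action}"
end

puts collection_url("hardware_profiles")
puts action_url("instances", "i-123", "start")
# => http://localhost:3002/api/instances/i-123/start
```

Any HTTP client, or one of the published language bindings, can then issue requests against paths like these with your provider credentials.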
I think. Because it's in Ruby, it's also been packaged as Ruby gems, which is probably the easiest way to install it on Debian or any other distribution at the moment. Alternatively, obviously, it's all open source and you can get it at the locations provided in the slides, and I believe the slides will be on the wiki shortly if you want to grab them.

Alright, so I'm just going to skip out now and hopefully provide a bit of a live demonstration of the tool. Here's one I prepared earlier. What I've got here, running on port 3001, is what we call the mock driver. Basically, that's a driver that provides, or exposes, all of the functionality of the API, but it doesn't actually do anything in the back end; it's not connected to a provider. It's just faking being a provider, so that you can do your testing and development without having to have an account connected or anything like that. And then on 3002 I've got my API, with my driver running against Amazon EC2, which is what I'll be demonstrating in a minute.

So on the API page you can browse through it; it'll ask me for my credentials. Just to clarify for everybody: this is actually the REST API. This is not the Aeolus Conductor GUI.
This is just using a browser to directly access the REST API. Yes, and I'll demonstrate the Conductor separately in a minute.

So the mock driver just fakes a couple of images, what they would look like, what the attributes look like, again just for your development and testing. If I go back to the API page, I believe it's also got a number of instances, and with those I can start a stopped instance. It'll then give me both the inputs that I provided and the updated state of running, as well as the information about that particular instance. And then obviously I can stop it again. But again, this isn't connected to a provider; this is all just for demonstration, development, and testing. You could also do these calls through your own client, if you write one using whatever language bindings you chose. And again, it just provides the realms, the structure, and some examples.

So now I'll jump across to the Cloud Engine, which is basically the Conductor. There's still some rebranding to do, because this project was renamed very recently. So I'll just log in.
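What the mock driver from the first part of the demo is doing, tracking instance state in memory and faking the lifecycle transitions without talking to any real provider, can be sketched in a few lines of Ruby. The class below is a hypothetical illustration of that idea, not Deltacloud's actual mock driver code:

```ruby
# Minimal sketch of a mock driver's behavior: it holds instance state in
# memory and applies lifecycle transitions, with no provider behind it.
# Class and method names are illustrative, not Deltacloud's internals.
class MockInstance
  attr_reader :id, :state

  def initialize(id, state = :stopped)
    @id = id
    @state = state
  end

  def start
    raise "already running" if @state == :running
    @state = :running   # a real driver would call the provider's API here
    self
  end

  def stop
    @state = :stopped
    self
  end
end

inst = MockInstance.new("inst-1")
inst.start.state   # => :running
inst.stop.state    # => :stopped
```

A real driver exposes the same interface but makes provider API calls inside start and stop, which is what the EC2 driver running on port 3002 does.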
All right, so this is the dashboard for the Conductor. At the moment, all these statistical images down the bottom are actually static, but I'm going to jump into the system settings. You've got a number of panels in here, some of which are implemented and some not; the main ones to take note of are probably provider management and user management. In provider management, I've already added my EC2 account, and you can see that it's connected to the API I was just looking at on 3002. Obviously you can add as many providers as you want: if you have a driver running for Rackspace or whatever other cloud provider, you can connect to those as well at the same time, and I'll demonstrate how that's useful in a moment.

So, the workflow. I've already got a template here, but normally, once you'd connected your provider, the first thing you'd do would be to create a new template. Currently it's only set up to work with Fedora, but thanks to the change to the Image Warehouse, Image Factory, and Oz components, they're hoping to expand that list quite a bit. Basically, what happened was it used to use BoxGrinder, which is very RPM-driven, and it's moving away from that to make it more distro-agnostic, with the benefits that obviously provides. At the moment, if I choose to spin up a template, it gives me a full package list: I can select either groups, like base or the X Window System, or I can drill down into these menus here and select individual packages to be built into the template.

So, if I select a template from this screen and then go to build, what you'll notice down the bottom here is that this is the point where you choose your provider. The idea is that you can take the template that you created previously, with your package selections, and deploy it to whichever provider you want without having to change the template itself. The actual template stored on disk isn't the built image as such for a specific provider, so we try to keep the
templates provider-agnostic as well. And if I jump across to instance management, that's where you can see the actual running or stopped instances, and that's where you can launch an instance into the cloud, basically.

So you can see the obvious implication of this for cloud bursting: if you had a template which defined a node in a cluster or whatever, and you wanted to cloud-burst it out to a public cloud, you'd just take that template, build an instance targeting the public cloud, and spin it up.

Yeah, so what's happened there is I've added the new instance at the top here, and in the background it'll be talking to EC2 and pushing it into the running state, and then eventually I'd see it in my management console over on the right. But we'll probably just continue on.

Yeah, so Steve mentioned before that the GUI was being rebranded from Red Hat Cloud Engine to Aeolus Conductor. The story here is that, as I said, Aeolus Conductor is actually the reference implementation of a GUI that consumes this API, and Red Hat Cloud Engine is the product that we are aiming to produce around this. You may or may not know that the Red Hat model of producing products is that we get actively involved in targeted development of open source projects, and then we assemble those together into a product that we release and support. So we're actively directing our efforts into the Deltacloud API and Aeolus Conductor, and then ultimately we're going to build our own GUI, which will be Red Hat Cloud Engine, the product that you can purchase support for, etc.

So this whole thing is actually part of an extremely simple and clear cloud strategy, as depicted in this diagram requiring no explanation. Basically, the idea is that down at the bottom layer you have infrastructure as a service, so we have Red Hat Enterprise Virtualization as something that you would implement within your own organization to produce a cloud, or people of
course could use vCloud or vSphere from VMware, and then there are also all of the public clouds. We want to provide this layer on top, the Cloud Engine and the Deltacloud API, to enable you to seamlessly move templates and resources between all of those clouds.

There's more information on all of this stuff: the Deltacloud project is now in the Apache Incubator, and there's also the Aeolus project. And finally, Red Hat is hiring, so if any of this stuff sounds interesting to you guys: we're hiring all kinds of positions to write about this technology, we're hiring developers for Deltacloud, and a whole bunch of other roles. So if anybody is interested in working for us, please come and see us after the talk. Thanks. Any questions?

Right, so, yeah, that's a really good question, actually. My understanding is that we are aiming to produce, sometime in this calendar year, a Red Hat Cloud Engine product, which will be something that you host yourself. But we are also aiming, I think scheduled more towards 2012, to produce a hosted version of this, so that you can just consume the abstraction layer as a hosted service. But the initial product will be something you host yourself. Yeah.