How many of you are familiar with GigaSpaces, or Cloudify — heard about it? None. So basically, what I'm going to do is go through a simple presentation and a demo that describes the problem we're trying to solve. And basically, we have two ends of the problem. One of them is that we want to automate relatively complex applications on OpenStack. And second, we want to make it very simple. Now, that usually sounds like an oxymoron, because the tools we use to make applications simple are usually very different from the tools we use to manage complex applications. What I'm trying to show here is how we actually address both: how do we take a relatively complex application and deploy it on OpenStack in a very simple way? So where are we? There are a few slides of introduction in which I'll explain the concept, and then I'll end with a demo that you could all try out, even from your mobile device — you could run Hadoop or MongoDB from just your cell phone, in this case on HP's OpenStack. Anyone here, by the way, using HP's OpenStack? Raise your hand? No? OK. So one word about GigaSpaces for those who are not familiar: we've been around for roughly 13 years in the market, and started in distributed computing. Our main customers are in the financial industries, and obviously e-commerce and carriers. So we were not the regular startup, if you'd like, that you would normally see. There are a couple of names here, and you can see the mission statement; I'm not going to repeat that. The problem we're trying to solve is really this one: on one hand, we have a lot of enterprises interested in moving to the cloud; on the other hand, they're having difficulties moving to the cloud. And we're trying to bridge that gap. So I'm not going to say too many words about that.
An interesting survey about that was done recently by RightScale, about the strategy that enterprises are taking to move to the cloud. Not surprisingly, even if they want to move to OpenStack, they would still take a hybrid approach. Hybrid really means more than one provider, even if both are OpenStack providers. Enterprises won't go with one provider. And that's an interesting angle here, because what you normally hear in OpenStack is that everyone's trying to build the entire thing. So you go to IBM, you go to Dell, you go to any of those players — they'll promise you the world and try to give you everything. But most enterprises don't want that. They want a hybrid approach in which they can take IBM and HP and Dell, but also work with Amazon to run certain workloads, and so forth. So there's this interesting challenge in which they want to move to the cloud, and OpenStack is critical to their strategy, but not only OpenStack. That's another interesting aspect we can see here.

So the current approaches that many organizations have taken — I've tried to group them into a simple list of three categories, or three types of approach. One of them is what I would call the IaaS-first approach. The IaaS-first approach means that I'm building a very nice, shiny infrastructure-as-a-service that enables me to spawn compute resources on demand. The problem with that approach is that the application is left outside. The fact that I have a very nice and shiny IaaS doesn't bring those applications in — I need something to bring those applications. In the case of Bank of America, for example, we're talking about 7,000 applications that were built in a pre-cloud world and that now need to migrate. So even if I've invested in building that nice cloud, I'm still going to have those 7,000 applications outside, and they're not going to be in the cloud unless I do something in a short time frame.
So the problem with that approach is that it's really not thinking about the application; it just deals with a fraction of the problem. The other approach is using configuration management tools like Chef and Puppet, which is a very good approach, but still relatively complex, because it requires a lot of scripting and a lot of work. There's still quite an amount of work we need to do to actually get our application fully automated in the cloud. So these are good building blocks, but not necessarily as simple as we want them to be. And the last approach is the PaaS approach, which basically means that you own the environment and you tell users: if you want to run in the cloud, you develop for the cloud. Forget about the legacy, forget about whatever you have; the cloud is a new world, and you write for the new world in different ways, in which infrastructure is less interesting and less important. This is a good approach for greenfield applications. But again, I have those 7,000 applications outside — what do I do with them? Obviously, that wouldn't cater to this class of application.

So let's see what Amazon was doing. Amazon has the four blocks that they laid out earlier this year, which in my view define the current cloud stack. What you can see here is that instead of the three blocks we used to think about when we think about cloud — infrastructure as a service, platform as a service, and software as a service — if I take software as a service out of that spectrum, there are actually four blocks. So they have a more fine-grained definition of that spectrum, and that's very important, because it comes up in a lot of discussions. And what are those boxes? The first box is obviously compute, storage, network — the regular IaaS that we used to know.
On the other end, there's also a box that we used to know, which is called PaaS, which is, as I mentioned earlier, for greenfield applications. Now there are two boxes that were filled in the middle. One of them is CloudFormation — the equivalent of Heat in the case of OpenStack — and a new box called OpsWorks. OpsWorks is really a box that gives me a way to automate the deployment of, if you'd like, more complex applications in a relatively easy way. In the case of Amazon, OpsWorks runs on top of Chef, so it's actually tightly coupled with Chef. It's a way in which I can describe stacks and the deployment of those stacks, and talk to the IaaS in a relatively more controlled way. And you can see the two axes here are control and convenience. When we move to the extreme of PaaS, we're in a very convenient world — we don't think about infrastructure — but we lose a lot of control: how I'm going to run my availability, how I'm going to manage DR, how I'm going to manage and optimize my performance. A lot of things are taken out of my hands, for the good and for the bad. And if I'm on the other side, I have full control, but also relatively large complexity. So OpsWorks is really filling that void, if you'd like, that exists right now also in OpenStack: it gives me a high degree of control but doesn't cost me that much in terms of complexity. That's, I think, the trade-off that defines the cloud in general. We're trying to fill that box within OpenStack, and a lot of the things that we're doing really come from that understanding. The product, or the project — it's an open source project — is called Cloudify. And actually, we're now part of the Solum project within OpenStack, which is, again, an open source project that basically does what Amazon tried to do with OpsWorks. The important thing is that it goes beyond that.
It's not limited to Chef like OpsWorks is; it also works with Puppet, and it works with other configuration management tools. And that's key in the orchestration, because we also recently integrated with Heat in that respect. So the way it works is as follows. We basically take an application and script it. There is some sort of a DSL — right now it's Groovy, but it's going to be YAML in the next release. The script basically defines the blueprint of the application. And the blueprint includes the following things. It includes the definition of how you start, install, and configure the application, but most importantly, how you manage it after it's been installed. So for example: how do you monitor it? What are the KPIs you want to monitor? How do you detect failure? What do you do in the case of failure? What do you do in the case of scaling — how do you scale it? A lot of that is now scripted in one document, if you'd like. Compare that to the regular world, where we basically send an email to an operator who will do things for us. In this case, we have a document that is understood not by an operator but by a software entity called the orchestrator. The orchestrator can read that thing — that's the thing on the right-hand side. It has one leg in the input, which is the blueprint, and the other leg in the infrastructure itself, which is the capacity and the resources. And what it does is basically matchmaking: it takes the requirements and matches them to the resources that we have. In this case, specifically, it will spawn the machines using the Nova API, in the case of OpenStack. Then it will log in through SSH to those machines — because we don't have any agent pre-installed, we can take any image we already have — and it will start to push the software onto those machines and install it. And later on, it will start it based on the dependencies that we define in that blueprint.
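To make the idea concrete, here's a minimal sketch of what such a blueprint captures — lifecycle hooks plus the post-install policies (KPIs, failure detection, scaling). This is plain Python rather than the actual Groovy/YAML DSL, and every name in it (service names, script paths, keys) is hypothetical, chosen only to mirror the concepts described above:

```python
# A hypothetical, minimal stand-in for a Cloudify-style blueprint.
# Real blueprints use a Groovy (soon YAML) DSL; this dict only
# illustrates WHAT a blueprint captures, not the actual syntax.
blueprint = {
    "services": {
        "mongod": {
            "lifecycle": {  # how to install/start the service
                "install": "scripts/mongod_install.sh",
                "start": "scripts/mongod_start.sh",
            },
            "monitoring": {  # KPIs to watch after installation
                "kpis": ["opcounters.query", "mem.resident"],
            },
            "scaling": {"min_instances": 1, "max_instances": 3},
        },
        "tomcat": {
            "lifecycle": {
                "install": "scripts/tomcat_install.sh",
                "start": "scripts/tomcat_start.sh",
            },
            "depends_on": ["mongod"],  # start only after the database
            "monitoring": {"kpis": ["requests_per_sec"]},
            "scaling": {"min_instances": 2, "max_instances": 10},
        },
    }
}

def validate(bp):
    """Check that every dependency refers to a declared service."""
    names = set(bp["services"])
    for svc in bp["services"].values():
        for dep in svc.get("depends_on", []):
            if dep not in names:
                raise ValueError(f"unknown dependency: {dep}")
    return True
```

The point is that install, monitoring, failure handling, and scaling all live in one machine-readable document, which is what lets an orchestrator — rather than a human operator — act on it.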
So it will start the database first, load the schema, load the data, and only when the database is ready will it start the web container. And only when the load balancer is ready will it start to register those web containers into the load balancer. So in one click, we get a fully orchestrated application, without any manual steps in that process and without the complex scripting we normally had to do otherwise.

What does that really get me? It gets me the following two things. One is that I can take a fairly complex model and reduce complexity by having a much more consistent way of managing it. By consistent I mean that the way I manage installation, deployment, scaling, failover, and monitoring looks pretty much the same whether my underlying product happens to be Hadoop or Tomcat or Ruby or whatever. That's consistent management, because they're all scripted in the same way. And that's a big thing. The other thing we get is portability between different environments. Once my blueprint is scripted, I have a consistent way to clone it between environments. I can now spawn an environment in the public cloud, in my testing environment, in my QA environment, in my production environment — something that is critical in continuous deployment and continuous delivery processes, because we're going to do that quite a number of times. It's also important for DR, because in DR what we're basically doing is cloning the environment on another site. And it's important for us to clone not just the binaries but also the SLAs: how we manage failure, how we manage performance, how we manage the scaling of the application. That abstraction between the infrastructure and the blueprint allows me to clone the environment in a very easy, very consistent way.
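The dependency-driven startup just described — database before web container, web container before load-balancer registration — boils down to a topological sort over the blueprint's dependency graph. A minimal sketch, with hypothetical service names (this is not the orchestrator's actual code):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical dependency graph: each step maps to the steps it must
# wait for, mirroring the blueprint's depends_on declarations.
deps = {
    "mongod": [],                                  # database starts first
    "load_balancer": [],
    "tomcat": ["mongod"],                          # web tier waits for the DB
    "lb_registration": ["tomcat", "load_balancer"] # register last
}

def start_order(graph):
    """Return one valid startup order respecting all dependencies."""
    return list(TopologicalSorter(graph).static_order())
```

Any order this produces guarantees that MongoDB comes up before Tomcat, and that registration happens only once both Tomcat and the load balancer are up — exactly the "one click, fully orchestrated" behavior described above.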
The last point before the demo: the other attribute of that portability is that I can easily have my existing environment and my OpenStack environment coexist. That allows me to take my workload, automate it, and run it first on my local environment — we have another abstraction for that; it could be VMware, it could be bare metal, it could be anything — and then, slowly and gradually, say: OK, the testing of the new version is going to run on my new environment, but my production is still going to run on the existing environment, because I don't want to take the risk. Once I'm comfortable with my OpenStack environment, then I'll do the switch and move it over. But I'm going to have the same management between those two environments, and therefore the process is going to be much smoother.

So now to the demo, because we're running out of time. The title here is: how do I take a complex application and deploy it in a very easy way? And how do I share that application with others? We've done work with HP Cloud to deliver a catalog, if you'd like — very much like an app store, but it also combines ideas from YouTube, and you'll see what I mean by that. So I'm going to switch to a demo here, and let's see how long it takes me to actually deploy an application. I can go to the cloudifysource.org website, and if you click "try now" you get to this screen — unfortunately I have a resolution issue here, never mind. So what I can do here is pick one of the applications from the catalog. I can now click here on play, and by clicking play what I'm basically doing is creating a new instance.
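The "another abstraction" mentioned above — the same blueprint driven against VMware, bare metal, or an OpenStack cloud — can be pictured as a provisioning interface with interchangeable drivers. A hypothetical sketch of the idea, not Cloudify's actual cloud-driver API (all class and method names are assumptions):

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Hypothetical provisioning interface: one blueprint, many targets."""
    @abstractmethod
    def provision(self, service_name: str) -> str:
        """Bring up a machine for the service; return its address."""

class OpenStackDriver(CloudDriver):
    def provision(self, service_name):
        # Real code would call the Nova API here.
        return f"openstack://{service_name}"

class BareMetalDriver(CloudDriver):
    def provision(self, service_name):
        # Real code would pick a host from a static pool.
        return f"baremetal://{service_name}"

def deploy(services, driver: CloudDriver):
    """Deploy the same list of services on whichever target is given."""
    return {name: driver.provision(name) for name in services}
```

Because the blueprint never names a specific infrastructure, switching production from the existing environment to OpenStack is a matter of swapping the driver, not rewriting the deployment.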
I'm spawning a new instance: it will spawn a machine — in this case on HP's OpenStack — install the binaries for that application, and provision the application on it. I can immediately wire up the management to view that whole process; the management is also provisioned for me automatically as part of the deployment, and later on I can actually start the application. If I go to the management console, what I can see is the deployment process as it takes place. I can see that the application is constructed out of two tiers — in this case the web container and a database. This is for the free user; in real-life production we'd have a full cluster available. For example, if I click here I see the dashboard of all the resources and applications currently being deployed, and if I click here I can see that there is a MongoDB instance being deployed and a new web container being deployed, and I can see the relationships and dependencies as they're deployed. All that in less than a minute — and that's the level of simplicity I can have. Now, that's going to finish soon; you can see that the MongoDB is already running. The Tomcat should be plugged in in a second. We're not going to wait for that, because I'm running out of time. As soon as it's ready, we'll see that thing here become a link and I can actually try it out. So I think that should be finishing by now. OK, so it's finished, and it actually tells me to click here, and that gets me to the actual application — I've got the application with the database ready to roll. Now, imagine that you have a catalog of those services available in your OpenStack environment, so you could give the users of your environment the ability to try out new products and new releases and new versions and new blueprints. Now, more than that: let's imagine that I created a blueprint for my organization.
Let's say that I have SAP — I'm just kidding — but let's say that I have some application, and that blueprint is something I want to share. With a video on YouTube, the way I'd do that is use email to share the link to my video. Now imagine doing that for a BI system, for some subsystem, for whatever system — it sounds almost impossible. But in this case, because we have the blueprint, what we're basically sharing is a blueprint and a way to execute the blueprint. So we can take even a very complex application, because we're not actually shipping the binaries — we're shipping the definition of how to run that application. I can use any sharing mechanism I have, whether it's Twitter or whatever: embed it in a blog and say, here's a new feature, here are new versions, try it out. And people can actually run it outside of that portal — they don't need to go to cloudifysource.org; they can run it directly from the context of the description of the application. And I can also share it on Twitter or by email with other users, just by providing the link to that blueprint. Again, the thing I'm sharing is how to run that PetClinic application. I'm not sharing the instances that I'm currently running; I can just use a browser link, and that's it. And others can use that same link to spawn their own version on their own environment. The last bit that I think is important is that one of the things we didn't want to create is a black-box model. A black-box model means I have this thing running and it's all nice, but how do I move it to production? I need a lot of customization from that definition. So the thing that we did is create a very smooth migration from that simple trial to production.
So for example, if I provide my cloud credentials here, I can use the same tool, but in this case it will run under my cloud account, spawning the machines and installing the software there. So I have full control over where it's going to run and how it's going to run. If that's not enough, I can go and customize the recipes — change them to the size and type of orchestration that I want — and then publish it into that catalog. I can decide whether it's going to be in the public catalog or a private catalog, because the catalog is basically a GitHub repository. So I have all the version control that I get with GitHub: I can do pull requests, I can change things, I can clone things, I can run it on my own environment, and I can do with that recipe everything I can do with code. So what I'm getting is the ability to try out products in a very simple way — just click, hassle free; I don't need to log in, register, do anything — and later on run it under my own account, if I want, in a POC; and later on download the entire thing and run it on my own environment under full control. All using the same tool. So I don't need to rip and replace the environment every time I go through each of those steps. And with that, I think we're out of time, so I'll finish up and go back to the slides here. The thing that I wanted to end with is that we started with a vision that says we can manage and deploy complex applications in a very simple way. And I think what I showed you is that we can actually do that in a much simpler way than even the simplest application on Heroku, or even an app store, can be done. It can be as simple as playing a video on YouTube. All of that is open source — all of it is something you can download and run in your own OpenStack environment. Obviously, it's already running in an OpenStack environment right now, with HP Cloud Services.
There is a new version coming out which takes that even a few steps further. But that's kind of the end. So thank you very much.