Hi, my name is Tom Bressy, and I manage the PaaS product management team at HP Cloud. What I'm going to talk to you about today is all the different managed service offerings we've recently brought to market. Before we get started, why don't we go ahead and talk about some of the common patterns we've seen and what brought us to develop these services.

A really common deployment architecture in public cloud obviously always begins with an application server. There's something that your developers are doing, or something that you're doing, that brings unique value to the market. That's the reason you got into the business. Generally, if you're going to deploy those applications in the cloud, you want to have a backend data store. It makes a lot of sense that those backend data stores are going to have some level of high availability or some level of mirroring. You want a really consistent experience from the data backend, and you want to make sure there's some redundancy built in there as well.

If you're successful, you'll attract more and more users, which means you're going to have to scale out a little bit to meet all that load, which means you're going to want to put a load balancer in front of it. And since your customers are generally not going to want to interact with you via an IP address over the web, you're going to have some kind of snazzy marketing name that goes along with your website, so you're going to have a DNS layer as well.

Remember, what we started with here was just this one application server. That's where your unique value is. But look at all the window dressing that we put around it. I'm guessing that your developers are generally not interested in being IT ops managers.
They don't want to be the kind of people who are spinning up instances manually, SSHing into them, running their apt-gets, being experts on load balancer software, or being business development people going out and securing relationships with DNS providers. This is IT overhead that you just don't need.

So given all that, we've seen some use cases emerge that we think make a lot of sense for a public cloud provider to meet. Given that all you're really trying to do is deploy some unique value in your application layer, we want to enable you to focus on that. Rather than you having to worry about all the window dressing that you're required to deploy in the public cloud, we want to alleviate as much of that burden as possible and allow you to focus on your unique differentiation.

We also want to make it easy for you to do. All the developers that we interact with are very comfortable working with APIs and CLIs, but some services, like DNS, lend themselves very well to a console. A DNS experience can be really easy in a console, whereas you would prefer to interact with the database programmatically. So we want to give you interface flexibility as well.

The third primary thing is that there are some features that are going to be difficult to develop and that are going to require a lot of unique expertise, like implementing HA with databases. We don't think developers should have to come up with that expertise on their own; we can provide it to you in a very easy-to-deal-with way.

In response to all these things, HP Cloud has built out and just recently brought to private beta five managed services that we believe greatly increase the ease of deployment when it comes to cloud applications: load balancing, a relational database based on MySQL, DNS, a monitoring service, and, announced just today, a messaging service.
Across these five services, at the end of making seven API calls you can have a functional load balancer, a functional database, DNS with an associated A record, a metric stream via AMQP, and a messaging queue: all that window dressing around your application that lets you showcase your application's unique value, in seven API calls. That's the kind of ease of user experience we're trying to deliver to you. Now, these programs are all in private beta.

What we're talking about here is that these are managed services. In the demo portion of this, I'll exercise the API for the load balancer, and I'll exercise the API a little bit for the messaging service that we just announced today. But the thing I want to emphasize is that we're transferring the burden of having to install, configure, and manage these services from the user back onto our service. That's the primary value that we're providing here. In addition to that, we're making it very easy to interact with. We always start launching services with an API tier, so what I'll show you today is the API.

Fast time to service-ready was also a key use case. For the load balancer in particular, what we maintain in the background is a pool of provisioned load balancers that have the software installed, configured, and ready to go, but are not assigned to a tenant. So you don't have to pay the penalty of a Nova spin-up to get an active load balancer. You make an API call, a load balancer is pulled from our pool, assigned to your tenant, and then, with the configuration parameters that you pass through your API call, it's pressed into service within your tenant. I'll show you how that works in just a moment.

And then the very last bullet: we want to encourage you to make use of these services during the private beta period, so we're making them free of charge. Please, come in, kick the tires.
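As a rough sketch of that seven-call deployment sequence, here is one way it could be laid out. Note that every endpoint path below is an illustrative assumption on my part, not the documented private-beta API:

```python
# Hypothetical sketch of the seven-call deployment sequence described above.
# Endpoint paths are illustrative assumptions, not the actual HP Cloud
# private-beta API paths.

SEVEN_CALLS = [
    ("POST", "/v1.0/loadbalancers"),             # 1. create the load balancer
    ("POST", "/v1.0/databases/instances"),       # 2. create the MySQL instance
    ("POST", "/v1.0/domains"),                   # 3. create the DNS domain
    ("POST", "/v1.0/domains/{id}/records"),      # 4. add the A record
    ("POST", "/v1.0/monitoring/subscriptions"),  # 5. subscribe to the AMQP metric stream
    ("POST", "/v1.0/queues"),                    # 6. create the message queue
    ("POST", "/v1.0/queues/{name}/messages"),    # 7. post the first message
]

for step, (method, path) in enumerate(SEVEN_CALLS, start=1):
    print(f"{step}. {method} {path}")
```

Seven requests, one per line of that list, and all five services are stood up.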
We've provided feedback channels. We want to know what you think of these services so that we can make sure we're meeting your use cases.

With that, I'll stop going through PowerPoint, since I think a lot of us get death by PowerPoint in our normal jobs anyway, and we'll look at some demo content. Nearest and dearest to my heart is this load balancer program. What I'm using here is a REST client called Postman. I've already gotten my authentication token and the service catalog listing, and I've already wired them into this demo environment. So what you're going to see here is: if I ask the API to tell me what kind of load balancers I have provisioned within my tenant already, here's what we get returned.

I'll walk through some of the capabilities of the private beta program just by showing you what's already provisioned. We have the ability to balance traffic on HTTP and HTTPS protocols, ports 80 and 443, as you might expect. During private beta, we're doing HTTPS via TCP; we're looking at doing SSL termination in a future release, but right now we provide those two traffic balancing capabilities. We're also providing two different routing algorithms, round robin and least connections. And then, obviously, your backend nodes.

So let's press down a little deeper and look at this load balancer in more detail. Oh, pardon me. It's good to have that out of the way. There we go. I'll take that particular load balancer ID and put it into the URL, and we'll look at this load balancer in a bit more detail. The top section of this response is similar to what we just showed, but now what we're seeing is the details for the nodes themselves.
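In code, the list call I'm making through Postman might look roughly like this. The endpoint URL and token are placeholders (in practice they come from your auth token and service catalog), and the exact path is an assumption:

```python
import urllib.request

# Placeholder values -- in practice these come from your authentication
# token and the service catalog entry for the load balancer service.
ENDPOINT = "https://lbaas.example.hpcloud.net/v1.0"
AUTH_TOKEN = "my-auth-token"

def list_loadbalancers_request():
    """Build (but don't send) the GET request that lists the tenant's
    load balancers, authenticated with the X-Auth-Token header."""
    return urllib.request.Request(
        ENDPOINT + "/loadbalancers",
        headers={"X-Auth-Token": AUTH_TOKEN, "Accept": "application/json"},
        method="GET",
    )

req = list_loadbalancers_request()
print(req.get_method(), req.full_url)
```

Sending that request (with a real endpoint and token) returns the JSON listing of provisioned load balancers shown in the demo.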
And a little deeper down into the deployment, you're seeing the IP address and also the type of IP routing being used: IPv4 now, with IPv6 planned for a future release. This load balancer is actually active right now, so feel free to ping away on that IP. I planned to have a three-tier architecture set up for this demo and actually encountered a slight challenge beforehand, but I'll have it up and running tomorrow, so please feel free to come back again.

A couple of other things I'll show you around this. During the private beta period, if you run the list limits API call, what you'll see is that each tenant is allowed to have up to 20 load balancers, and those load balancers are each allowed to have up to 50 backend nodes. And remember, this service is free of charge during the beta period, so please come use it, kick the tires, run whatever you'd like against it, and send us your feedback. We want to make sure that we understand what you're seeing, and we want to make sure that your use cases are met.

I talked a bit about routing algorithms; notice we support round robin and least connections. All I'm doing here is exercising the API backend for the service. All these calls are being made in real time. And if I want to kick off a load balancer: I said a moment ago that all it takes is seven API calls to get all five of those services up and running, so here's an example of that. With one API call, I'm specifying the name of the load balancer, the port, the routing algorithm, the type of traffic, and all my backend nodes. If I hit go, what's returned in the back half of this call is that same output of the list load balancers detail command, except this load balancer is now active and serving traffic. So if you want to ping this IP right now, you'll find that it's already actively serving traffic to these backend nodes. We already had a load balancer in a hot standby pool.
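A sketch of the body for that single create call, with field names and values guessed from what I just described (name, port, routing algorithm, traffic type, backend nodes) rather than taken from published API documentation:

```python
import json

def build_create_lb_payload(name, port, algorithm, protocol, nodes):
    """Assemble the JSON body for a hypothetical 'create load balancer'
    call: one request carries the name, port, routing algorithm,
    traffic type, and every backend node. Field names are assumptions."""
    return json.dumps({
        "name": name,
        "port": port,
        "algorithm": algorithm,  # e.g. "ROUND_ROBIN" or "LEAST_CONNECTIONS"
        "protocol": protocol,    # "HTTP" on port 80, or TCP for HTTPS on 443
        "nodes": [{"address": ip, "port": p} for ip, p in nodes],
    })

body = build_create_lb_payload(
    "demo-lb", 80, "ROUND_ROBIN", "HTTP",
    [("10.0.0.4", 80), ("10.0.0.5", 80)],
)
print(body)
```

One POST with that body, and the response is the detailed load balancer record, already active.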
All that happened when I hit go on that API command was that the load balancer that was idling was assigned to my tenant, all the configuration parameters were laid on top of it, and it was pressed into service.

So let's shift gears a little bit and talk about messaging. What we've provided here is a queuing service. Currently I have one queue created, called demo topic. Let me go ahead and delete that so we can start fresh, and we'll create a topic. As you can see in the API call above, I'm basically specifying the name: I'd like to have a topic called demo topic created. Running the list command, you can see it was successfully created. Let me go ahead and push a message into it: "This is my sample topic message." Very creative, isn't it? What I get in return is the id of the message. Now let's go ahead and get those messages out of the queue; that returns the id and the body of the message.

This service is implemented slightly differently from the load balancing service. The load balancing service is implemented in a single-tenant manner, where I have hot standby load balancers that are just waiting to be pressed into service and have all my API arguments overlaid on top of them, so they're configured in real time. The messaging service is deployed multi-tenant, so the service is always active. And per the latest benchmarking we have, there's obviously a variable amount depending on the message size, but we can handle up to 100,000 messages per minute. There's an inverse relationship between the size of the message and the amount of throughput we can push through the queues themselves: the smaller the message, the higher the number.

So that's the very quick tour through what we're doing with HP Cloud managed services. Please come to HP Cloud, get an account, and check out the services. Very simple and straightforward.
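The message flow walked through a moment ago (create a topic, push a message, read it back with its id) can be mimicked with a tiny in-memory sketch. The real service does this over REST; the class here is purely illustrative of the call sequence:

```python
import itertools

class DemoQueueService:
    """In-memory stand-in for the queuing service's API flow:
    create a topic, push a message, then read messages back.
    Not the real service -- just an illustration of the sequence."""

    def __init__(self):
        self.topics = {}
        self._ids = itertools.count(1)  # monotonically increasing message ids

    def create_topic(self, name):
        self.topics[name] = []

    def push(self, topic, body):
        msg_id = next(self._ids)
        self.topics[topic].append({"id": msg_id, "body": body})
        return msg_id  # like the real API, the push returns the message id

    def get_messages(self, topic):
        # Returns each message's id and body, as the demo's get call does.
        return self.topics[topic]

svc = DemoQueueService()
svc.create_topic("demo_topic")
mid = svc.push("demo_topic", "This is my sample topic message.")
print(mid, svc.get_messages("demo_topic"))
```

Same three steps as the demo: create, push (id comes back), get (id and body come back).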
Go to Products and Services, then Platform Services, and use the request forms for access to the private beta. All of our services are free during this period, primarily to encourage more utilization and to get your feedback. It looks like we've got about four minutes left, so I'm happy to take any questions you might have. Or if you don't want to ask me questions here, we can talk afterwards, or please come by and visit us at the booth. I'd be happy to field any questions you have about managed services, and we'll take it from there. Thanks a lot for your time.