Do we need to turn it on? OK, it's on. Hello. This is bright. Hello, everyone. Thanks for attending. Today my colleague Sergey and I will talk about how you can accelerate the path to cloud-native application development. Wow, these lights are really bright. We want this to be interactive, so please feel free to interrupt us if you have any questions. We will try hard to see you as you raise your hand. If not, please come forward and we can talk. You want to drive that, or I can drive it for you? I'll do it, so please.

So let's talk about how you can start coding in two easy steps using Project Caspian. Now, before any of you raise your hands, let me ask the question: what is Project Caspian? Project Caspian is the code name for a product that we're launching at EMC World next week. It's an OpenStack and Pivotal Cloud Foundry based cloud platform that you can just roll into your data center and start using. It comes with all the hardware and software components: the servers, the storage, the network, and all the other software pieces like load balancers, Ansible for configuration management, HAProxy, NGINX, etcd, the whole deal, so that you can get up and running and start coding really quickly.

Now, setting up OpenStack is painful. Setting up Pivotal Cloud Foundry is painful. But setting up OpenStack and Pivotal Cloud Foundry in a manner that is production ready, that can be upgraded maybe once or twice a year, that can be supported, that you can maintain and troubleshoot so that it stays available and stable, is much, much harder. And that's the problem we're trying to solve. Quite frankly, the deployment problem of OpenStack is mostly solved; there are tons of tools out there that already do that. This is more than deployment. This is how you can support it, operate it, upgrade it, and troubleshoot it in an easy fashion.

So to summarize, Project Caspian comes with hardware and software, servers, switches, racks, everything all rolled into one. It's supposed to just work out of the box. And if you think of solutions on a scale from simplicity at one end to flexibility at the other, Project Caspian gives you a whole lot of simplicity at the cost of some amount of flexibility, somewhat like an iPhone: really, really easy to use, but not that many configuration options. Now, the hardware is EMC-provided, and quite frankly, the hardware is the least interesting part of the solution; it is commodity hardware, commodity servers, switches, direct-attached storage. We require EMC-provided hardware because if we own the full stack, we can provide the best possible customer experience, right from deployment through operations, support, and upgrades. And really, what we've tried to do here is take those parts of OpenStack and Pivotal Cloud Foundry that are really hard to get working and package all these tools together so that it's just very easy to use.

So, two easy steps: deploy OpenStack, deploy Cloud Foundry, and you're ready to go. Once you've done these two steps, you can start doing your cf push and start scaling your applications, and you're all set. I'll talk about the OpenStack deployment side, and my friend and colleague Sergey will talk about the Cloud Foundry deployment side. The OpenStack deployment really has three subparts, but the first two are professional-services enabled, so as a customer you never have to deal with those first two steps.
The first is the hardware configuration, where our professional services come in; it takes about one calendar day to perform all three steps. In the hardware configuration we set up things like the host IP addresses, we hook up our switches to the customer's switches, we decide whether to use BGP or static routes for networking, and we point our servers at the DNS and NTP servers in your environment, just getting it set up. After that, we have an infrastructure installation step where we set up a management complex, a management plane that hosts the management components for Project Caspian: the UI software, our load balancers, and all the tools that go into getting that cloud platform ready. And finally, we have the OpenStack installation, which is something the customer will do. That's the first step where the customer actually deals with Project Caspian. So after one calendar day, the customer is ready to deploy OpenStack, and then, after deploying Cloud Foundry on top of it, the customer has a really cool cloud-native platform. They can use OpenStack directly to run applications on it, or they can use Pivotal Cloud Foundry in conjunction with it to host applications, and that's a really powerful cloud platform.

So here is the first screen a customer gets to see with Project Caspian. The user in this case is a cloud operator who is setting up the cloud so that it can be consumed by the end users, the application developers; think of business units, or maybe different companies if this is a service provider. At this point we have one user that has already been created in a local database, the cloud operator's username, which is what is used to log in. This is a single point of control for management, maintenance, upgrades, monitoring, and reporting, the back-end operations.

Now I should call out that the end-user, application-developer interfaces are 100% OpenStack and Pivotal Cloud Foundry compatible. The standard Horizon interface, the OpenStack clients, and the OpenStack SDK work as-is; they are 100% compatible with what's already out there, and we don't intend to mess with any of that. Our goal is to ensure that developers have an open-source, industry-standard interface for application development, so that if tomorrow they decide to move away from Project Caspian, any investment they've made in application development still holds. They could run those apps on another OpenStack cloud, or, if they're using libraries like Fog or jclouds, they could work with other public or private clouds as well. So the end-user application development interface is 100% open source, OpenStack and Pivotal Cloud Foundry compatible. What you're seeing here is the back-end operator interface: how you manage Project Caspian, how you upgrade it, how you monitor it, troubleshoot it, and report on it.

So after you log in, this is the first screen you get to see. As you can see, this is a 20-node system, and let me highlight some of these. Before that, on the left-hand side we have our pane for performing management operations: creating accounts, looking at the infrastructure, reporting on it, and changing the settings.
And here I'll highlight that we have a 20-node system, out of which three have been used as management nodes to host the management complex, and the remaining 17 nodes are already available. As soon as the system comes up, it has already discovered what's out there, so you get a view of your system. Now, the next step the cloud operator may want to take is to provision a subset of those 20 nodes for OpenStack, or what we call the Cloud Compute service. All the customer needs to do, literally, is select the nodes and click a button to deploy, and that's it. In 30 minutes, you have four Cloud Compute nodes ready to run your workloads.

So let's see how that works. Over here you get to see the list of all the nodes that have already been set up. You will find that three of them are the platform nodes that were set up when Project Caspian was installed, and the rest of the nodes are available. At this point I should call out that these three platform nodes are a very low management footprint for a production-ready, scalable OpenStack cloud. If you compare this with the management overhead of other products out there, you'll find they need about five or seven nodes, and we have three management nodes for two racks. That's what we've tested: two racks are about 84 nodes, and even if we scale beyond two racks, we won't need to expand the management plane; three nodes are enough to handle it.

Now, the main reason we can get away with three nodes is that we use ScaleIO for block storage. Since ScaleIO has very low overhead, it can run hyper-converged on the compute nodes; the ScaleIO components, the SDC and SDS, if you're familiar with them, run on the compute nodes, so you do not need a separate pool of nodes to provide block storage. That's the key benefit of using ScaleIO, in addition to other features like performance and all the other cool stuff, and that benefit allows us to have a very small management plane. And we use ScaleIO not just for Cinder persistent storage, which is the common case, but also for ephemeral storage. So we have one common pool of storage spread across all the compute nodes that can provide both ephemeral and Cinder persistent block storage, and that lets us keep the management plane small. Fewer management nodes is good, because that way you have more nodes to run actual workloads. The starter configuration is eight nodes, the minimum configuration, out of which three are management nodes and five are available to run workloads. There are three nodes because each service runs in active-active mode, front-ended by three load balancers. We have three copies of the Nova scheduler running and three copies of MySQL running so that we can tolerate node failures, and of course we have three load balancers in front as well that route the traffic across the nodes.

So let's get back to the installation part. The goal is to deploy the Cloud Compute service. You go to Manage, you click Add to Service, and now you see the nodes that are available. In this case I've selected the four nodes on which I want to deploy the Cloud Compute service, which is why they're highlighted here, and I hit the Deploy button, and that's it. We have a fully production-ready OpenStack environment ready for writing applications.
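(Editorial note: since the environment exposes standard OpenStack APIs, here is a minimal, hedged sketch of how an operator might confirm that the newly deployed compute nodes have registered. This is generic openstacksdk usage, not Caspian-specific tooling; the cloud name "caspian" is a placeholder for an entry in your own clouds.yaml.)

```python
# Minimal sketch: confirm the newly deployed compute nodes are registered.
# Assumes a clouds.yaml entry named "caspian" pointing at the Keystone endpoint;
# the name is a placeholder, not something Project Caspian defines.
import openstack

conn = openstack.connect(cloud="caspian")

# The hypervisor list should show the four Cloud Compute nodes once the deploy completes.
for hv in conn.compute.hypervisors():
    print(f"hypervisor {hv.name}: state={hv.state}, status={hv.status}")

# Compute services (nova-compute, nova-scheduler, nova-conductor, ...) should report "up".
for svc in conn.compute.services():
    print(f"{svc.binary} on {svc.host}: state={svc.state}")
```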
You can run Heat on it, you can spin up VMs, you can provision storage, you can take snapshots; you can do all the cool things you do with OpenStack. So in one calendar day, you've got a production-ready OpenStack environment, right from the time it arrived in your data center, because when it comes in, the servers, switches, and storage are all hooked up. And in one calendar day you have a production-ready OpenStack environment that can be maintained, that you can upgrade once or twice a year, that is easy to troubleshoot, and that is easy to support: you just call EMC and we'll take care of it. And that's a really, really cool thing.

Here I just want to highlight the standard OpenStack operations. There is nothing unique over here; we do not want this to be unique, we want it to be 100% compatible with OpenStack. This is just to show that you can do the basic things you do with OpenStack: you can spin up a VM, you can attach floating IP addresses. During our initial setup we took a floating IP block from the customer, a block that we could use. We create the routers and networks, attach the interfaces, create the subnets, and hook everything up. You can then associate a floating IP address and attach it to an instance, and it just works out of the box (there's a rough SDK sketch of this below).

Maybe this is a good time to talk a little bit about networking. All the value of Project Caspian is in software. We have software-based storage with ScaleIO, and we have software-defined networking: we use Open vSwitch with Neutron, which, as you know, is vendor-neutral, and we use VXLANs for the logical network. So when you, as an end user, go into a project and create a network and a subnet, we use VXLANs to provide logical networks, so you can have overlapping, Amazon VPC-style logical networks, and they give you maximum flexibility. Internally, in Neutrino, we use Layer 3 networking, so we get to use features like OSPF and ECMP. OSPF is a key component of Neutrino's availability and resiliency, because as nodes go down, OSPF allows the traffic to be routed to the right location. That's really important because failures do happen; this is commodity hardware, and the application has to be written with the assumption that nodes are going to go down. But at the back end we make sure we detect that, and the traffic is routed dynamically to the right place.

And then we have storage, where we talked about ScaleIO; we use ScaleIO internally. One benefit of using ScaleIO for ephemeral storage, for ephemeral volumes, is that it makes recovery from node failures and live migration really easy: if a node goes down, all you need to do is spin up a VM on another physical node, that node can access the shared volume, and you're ready to go.

And here's an example of how we can launch an application using Heat. This is, again, the standard interface to set up a Heat stack. The reason we're showing it here is that we believe if you're designing a cloud-native application, you'll split that big application into small microservices. Some microservices are better suited to run directly in an infrastructure-as-a-service environment, maybe in containers using Kubernetes or Docker Swarm directly, because you want more customization, and some microservices are well suited to run in a PaaS environment. With Project Caspian you can have both: an infrastructure-as-a-service environment, which is OpenStack, and a PaaS environment, which Sergey will talk about. And you can have applications in both of these environments communicating with each other, which is a really common use case.
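(Editorial note: to make the basic operations mentioned above concrete, here is a hedged sketch using the openstacksdk cloud layer to create a tenant network, boot a VM, and attach a floating IP. It is generic OpenStack usage, not the speaker's tooling; the cloud name, image, flavor, and external network names are placeholders you would replace with whatever exists in your environment.)

```python
# Generic OpenStack sketch (not Caspian-specific): create a tenant network and
# router, boot a VM, and let the SDK allocate and attach a floating IP.
# The cloud name, image, flavor, and external network name are placeholders.
import openstack

conn = openstack.connect(cloud="caspian")          # entry in clouds.yaml (placeholder)

net = conn.create_network("app-net")
subnet = conn.create_subnet("app-net", cidr="10.10.0.0/24",
                            subnet_name="app-subnet", enable_dhcp=True)

# Route the tenant network to the external network so floating IPs can reach it.
ext = conn.get_network("external")                 # placeholder external network name
router = conn.create_router("app-router", ext_gateway_net_id=ext.id)
conn.add_router_interface(router, subnet_id=subnet.id)

server = conn.create_server(
    "web-01",
    image="ubuntu-16.04",                          # placeholder image name
    flavor="m1.small",                             # placeholder flavor name
    network="app-net",
    auto_ip=True,                                  # allocate and attach a floating IP
    wait=True,
)
print(server.name, server.status, server.public_v4)
```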
Now I'll focus a little bit on the operations side of things. Here you can see all the nodes, so let's look at some common operations. Say a node goes down and you have to replace it. All you need to do is click the Actions button for that node, and you have options to suspend the node, reboot the node, or reset the node. But there's one operation I really want to talk about, which is Transfer Node, and there are two common use cases.

Let's first say a compute node goes down; it's going to happen. That's not a big deal, because your application is cloud native and it's supposed to deal with failure, either using Heat, or using some kind of auto scaling, or handling it itself. Our control plane is always available, so it can start up a new VM, no big deal. The node still has to be replaced: you suspend the services on the node, we replace it, and it gets automatically discovered. Great.

But what if one node of the management plane goes down? That's a bigger problem. Neutrino still continues to run, Project Caspian is still running, but now you have just two management nodes, and soon you need to spin up another management node. All you need to do as a user is click the Transfer button over here, select an available node, and that's it; you have a new management node ready. Think how hard this would be if it were done manually: you would have to provision a new node, set it up, make sure it advertises its IP address, and manually move the services over. That could be very error prone and very time consuming. Here, with the press of a button and with no disruption to end-user workloads, you have a brand new management node available. Let me show you the workflow: you select the destination node, where you want to create the new management node, and you hit Transfer. That's it.

Another very common use case is when you scale Project Caspian. Say you start off with a single rack, maybe a half-rack configuration. In that case, your three management nodes are in that half rack; there's only one rack, so that's the only place they can go. Now let's say you scale up by adding another rack. You want to redistribute your management plane across the two racks for high availability, in case one rack goes down. So you need to move your management nodes so they're spread across rack one and rack two, and that's where you can use Transfer Node again: you just select a node from the other rack, click a button, and you're done. You get HA because your management nodes are now redistributed across the racks. Think how painful this would be to do manually without any disruption to end-user workloads; really, really hard to do. And this goes back to what I said earlier: setting up OpenStack is one part of the deal, and it's really not the most important thing, because there are many tools to do that. The hard part is setting up OpenStack so that you can continuously maintain it,
so that you can perform these kinds of operations as you scale: expanding your management plane, replacing management nodes. These are the operations that really take up a lot of time, these are the operations that cause application disruption, and that's what we've tried to solve with Project Caspian.

Here I just want to highlight the different components that go into Project Caspian; you can see the full list over here. There are two types of components: the platform components, and the Cloud Compute or service-specific components. Project Caspian was designed to be multi-service, so all the components that apply to every service live in the platform layer: Ansible for configuration management, the ELK stack for log analysis, and our load balancers, HAProxy and NGINX. These are the common components shown over here, and you can see we have three copies of them running in active-active mode. We have three copies of HAProxy, which we use for TCP-based load balancing, and NGINX for HTTP and HTTPS load balancing. As another example, we run MySQL with Galera, three copies, all active-active, so that if one node goes down, the system continues to run. The next example is for Cloud Compute: as you can see, we have our Nova controller running in active-active mode, and the same goes for the other services as well.

What I want to highlight here is another key value proposition: we use containers internally to deploy all the components, not just the OpenStack services but every component in Project Caspian. The reason we do this is that all these components, HAProxy, NGINX, etcd, the OpenStack services, Ansible, have very different operating environments, their own dependencies and runtime libraries, their own quirks. Deploying and running them in containers makes it really easy to isolate them, maintain them, and support them. And if you want to expand a service or add new services, all you need to do is wipe out the existing containers and replace them with new ones. So as you can see, Project Caspian was designed with containers in mind from the ground up, and containers play a key role in making it stable, reliable, and easy to operate.

Now that you have a production-grade OpenStack cloud environment ready, I'll hand it over to my colleague Sergey, who will talk about deploying Cloud Foundry on top of it. Thank you, Ajay, I appreciate it. Yes, please? Sure, let me repeat the question. The question is whether I can share what hardware configuration options are available. We have different hardware configuration options from a server standpoint: they range from 12-core to 16-core to 20-core servers, with 400-gigabyte to 800-gigabyte SSD drives on each server. So depending on your use case, you can select the type of hardware configuration you need, and with each release we add additional hardware configurations, so you have a menu to choose from. The way you really work that out is that you know how many VMs you want to run and what oversubscription ratio you want to handle based on your workloads; you back-calculate that, and you figure out how many physical servers you need. If you're coming at it from the Pivotal Cloud Foundry side, you start with the number of application instances and backtrack to the number of physical nodes.
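(Editorial note: the back-calculation just described is simple arithmetic. The sketch below uses made-up numbers purely to illustrate both directions; real sizing would use your own VM counts, oversubscription ratio, and the cell sizes chosen for your deployment.)

```python
# Illustrative back-of-the-envelope sizing with made-up numbers.
# Direction 1: from VM count and CPU oversubscription to physical servers.
vm_count         = 400          # VMs you expect to run
vcpus_per_vm     = 2
oversubscription = 4.0          # vCPU : pCPU ratio you are comfortable with
cores_per_server = 20           # e.g. the 20-core server option

physical_cores_needed = vm_count * vcpus_per_vm / oversubscription
servers_for_vms = -(-physical_cores_needed // cores_per_server)   # ceiling division
print(f"~{int(servers_for_vms)} compute servers for {vm_count} VMs")

# Direction 2: from Cloud Foundry application instances back to nodes.
app_instances    = 600
mem_per_instance = 1            # GB per application instance (illustrative)
mem_per_cell     = 32           # GB usable per Diego cell VM (illustrative)
cells_per_server = 4            # cell VMs placed on one physical node

cells = -(-app_instances * mem_per_instance // mem_per_cell)
servers_for_cf = -(-cells // cells_per_server)
print(f"~{int(cells)} Diego cells, ~{int(servers_for_cf)} physical nodes for Cloud Foundry")
```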
Thank you, Ajay, I appreciate it. So the idea of the native hybrid cloud solution was to use Project Caspian as, in essence, a black box that delivers cloud-native infrastructure, and then to build on top of that to enable Pivotal Cloud Foundry deployment in a matter of hours, probably within one day, and to quickly enable developers and DevOps practitioners to start coding in a matter of days. As a solution, native hybrid cloud treats Project Caspian as a black box. However, in talking to our Pivotal colleagues and to customers such as yourselves, we realized that the majority of the time spent bringing up Pivotal Cloud Foundry on a typical IT infrastructure is really spent analyzing the network and configuration settings required to correctly provision Cloud Foundry: identifying the integration endpoints with identity providers such as LDAP and Active Directory, the DNS and network settings, and so on. Building on top of Project Caspian enabled us to retrieve all of those settings and configurations automatically, significantly speeding up the deployment process.

So the modern developer solution really builds on top of Project Caspian by introducing Pivotal Cloud Foundry as a cloud-native application development platform. On top of that, a multitude of different services can be used to build a modern DevOps ecosystem. I'm not sure how well you can see it on the screen, but as you know, Pivotal offers a number of marketplace services for integration into the DevOps process, such as Jenkins for CI/CD, different tools for application performance management, and tools for monitoring and reporting, and we've integrated some of these tools into our solution right off the bat.

To spend just a few seconds on what Pivotal Cloud Foundry is: there are two different approaches to building a modern DevOps infrastructure or platform, an unstructured approach and a structured approach. The unstructured approach is illustrated by taking different components of the infrastructure, adding different development tools on top, and integrating it all at the customer site. Typically that is a very time-consuming and costly approach, especially for customers for whom the platform is not necessarily their business. In contrast, the structured approach that Pivotal Cloud Foundry takes brings together all the components required for quick deployment of a platform as a service and enables developers to start coding pretty much right away. So in essence, Pivotal Cloud Foundry offers a native platform for cloud-native applications and immediately delivers a rich operator experience for developers as well as for development managers and platform admins. Cloud Foundry itself is an open-source project; Pivotal has taken this open-source project and built an ecosystem around it, with support services, deployment services, and deployment tools that enable developers to start coding very quickly. The majority of the operations that Cloud Foundry allows are done through a set of RESTful APIs, and the applications themselves run in containers that are presented to the developers simply as an organizational workspace.
And then, of course, it allows seamless scaling up or scaling out of applications, either by increasing an application's footprint in terms of the memory or disk space required, or by increasing the number of running instances of the application. That takes care not only of scalability but also of the resiliency of the application deployment: as you run multiple instances of the same application across the entire platform, across the Project Caspian infrastructure, your application is protected from downtime by the number of instances of that application running in different containers, in different VMs, on different nodes, across different racks. So by using the Pivotal Cloud Foundry resilience approach, we solve the downtime problem and get pretty much 100% application uptime. It also enables seamless upgrades of application code: developers can push new code without introducing downtime in the application's availability to their end users and customers.

Let me go through some of the animations here. The native hybrid cloud tools that we've developed on top of Project Caspian are designed to be installed at the customer site by professional services, specifically EMC professional services, which also facilitates the handoff to EMC support and enables you to simply pick up the phone and call one number for all associated issues, whether the issue is related to Cloud Foundry itself, to the networking, to the hardware, or to the underlying Project Caspian infrastructure. What we've done is develop the components that a Pivotal Cloud Foundry deployment requires for successful operation. Specifically, as Ajay mentioned earlier, ScaleIO is the underlying layer of persistent storage in Project Caspian, and it acts as both ephemeral and Cinder block storage. We've taken that a step further: we developed an S3-compatible blob store, backed by a Swift cluster that runs on top of Project Caspian and ScaleIO storage, and it acts as a resilient and highly available blob store for Pivotal Cloud Foundry. That's where all the application data is stored: the buildpacks, the application code, the droplets, et cetera.

We've also developed an installation tool that is delivered to our professional services as a virtual machine image. It simply gets imported into Glance and spun up as a VM instance on top of Project Caspian in one of the tenants, and from there it runs through a series of prechecks. These prechecks reach out to the underlying infrastructure, retrieve the associated networking information, the NTP, DNS, and load balancer configuration, and enable the configuration of the subsequent Pivotal Cloud Foundry deployment (there's a rough sketch below of what this kind of precheck looks like). The three main components of Pivotal Cloud Foundry that are part of the solution, and that enable the further development, are the Pivotal Apps Manager, which then spins up the Elastic Runtime, which is in fact the open-source Cloud Foundry itself, and, on top of that, another component called Pivotal Ops Metrics, which is used for real-time and historical data collection, monitoring, and reporting on the state of health of Cloud Foundry itself.
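(Editorial note: as a loose illustration of the kind of precheck just described, and not the actual Caspian installer code, here is a small Python sketch that verifies DNS resolution and TCP reachability for a few integration endpoints. The hostnames and ports are placeholders; a real deployment would take them from the site configuration.)

```python
# Illustrative precheck sketch (not the actual installer): verify that the
# integration endpoints the deployment depends on resolve in DNS and accept
# TCP connections. Hostnames and ports below are placeholders.
import socket

ENDPOINTS = {
    "keystone":      ("keystone.example.local", 5000),
    "ldap":          ("ldap.example.local", 389),
    "load-balancer": ("lb.example.local", 443),
    "syslog":        ("logs.example.local", 514),
}

def check(name, host, port, timeout=3.0):
    try:
        addr = socket.gethostbyname(host)                      # DNS resolution
        with socket.create_connection((addr, port), timeout):  # TCP reachability
            return f"OK    {name}: {host} ({addr}) port {port} reachable"
    except OSError as exc:
        return f"FAIL  {name}: {host}:{port} -> {exc}"

for name, (host, port) in ENDPOINTS.items():
    print(check(name, host, port))
```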
That component is also used to collect logging data: we retrieve the logs not only from Pivotal Cloud Foundry but also from the blob store itself and from all the associated miscellaneous components, so that as a platform admin you have access to the logs over time and can analyze them both historically and in real time.

The installation itself is pretty straightforward. Once the Project Caspian system is deployed on site, the same deployment engineer retrieves the image of the native hybrid cloud deployment VM and from there proceeds to verify the LDAP configuration, the connectivity to the external log system, the firewall and load balancer configuration, the NTP server configuration, et cetera, to ensure that once Cloud Foundry is up and running, all of these components are accessible. We also look at the footprint of the tenant: whether there are enough resources available for the Cloud Foundry deployment, given the sizing information collected from the customer. The sizing calculations work back from the number of application instances and the size of the applications to the number of execution agents, the Diego cells, and ultimately down to the number of VMs and the CPU core, memory, and disk footprint. All of these calculations are done during deployment by the tool and compared to the resources available in the tenant. If any of the resources fall short, the tool tells the installing engineer to go and adjust the appropriate quotas, either through Horizon or through the Project Caspian API.

The native hybrid cloud itself lives in a tenant namespace on top of Project Caspian. It uses standard OpenStack APIs: it authenticates with Keystone and interacts with Nova, Cinder, and Neutron to create VM instances, assign private and floating IP addresses to those instances, create and attach volumes to the VMs, and create snapshots and convert them into volumes for backup and data protection. At the same time, the installation tool collects all the OpenStack-related information, such as the network instance IDs and the API endpoints for Keystone, Cinder, Neutron, and Nova, and imports that information into a configuration file that is used by the Pivotal Cloud Foundry deployment.

Once the deployment is complete, we protect the installed configuration by immediately taking snapshots of all the configurations. We back up the native hybrid cloud components, such as the Swift cluster, the installation VM, and the monitoring and reporting VM, as well as some of the installation settings from Cloud Foundry itself, by exporting them into a manifest file that is stored locally. And after the backup, we continue monitoring both Cloud Foundry itself and the underlying components in real time, as well as collecting this data historically. The data is presented in a series of reports with associated alerts, for example on threshold violations or on any operational threshold breaches from Cloud Foundry. We look not only at the state of health of Pivotal Cloud Foundry but also at the operational components and operational metrics, such as the number of Diego cells available or the number of containers available for application deployment. In terms of data protection, I already mentioned the initial backup of both the Pivotal Cloud Foundry and the native hybrid cloud components, but at the same time we continue taking snapshots over time on a user-defined schedule.
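(Editorial note: the snapshot-based protection described above relies on ordinary Cinder operations. As a hedged illustration of that primitive, and not the native hybrid cloud tooling itself, here is a minimal openstacksdk sketch that snapshots a volume and then creates a fresh volume from that snapshot, which could then be attached to a replacement VM. The cloud entry and volume names are placeholders.)

```python
# Illustration of the Cinder primitive behind the backups described above:
# snapshot an existing volume, then create a new volume from that snapshot.
# "caspian" and the volume names are placeholders.
import openstack

conn = openstack.connect(cloud="caspian")

vol = conn.block_storage.find_volume("nhc-blobstore-data")   # placeholder volume name
snap = conn.block_storage.create_snapshot(
    volume_id=vol.id,
    name="nhc-blobstore-data-snap",
    force=True,            # allow snapshotting an in-use volume
)
conn.block_storage.wait_for_status(snap, status="available")

# Later, restore by creating a new volume from the snapshot; attaching it to a
# replacement VM then goes through the compute API.
restored = conn.block_storage.create_volume(
    size=vol.size,
    snapshot_id=snap.id,
    name="nhc-blobstore-data-restored",
)
conn.block_storage.wait_for_status(restored, status="available")
print("restored volume:", restored.id)
```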
During the deployment process we actually offer the flexibility to specify the backup schedule, whether that's every 6, 12, or 24 hours, and these snapshots are taken over time and stored in the ScaleIO persistent data layer. If a failure happens, the customer has the ability to restore either the S3-compatible blob store or any of the Pivotal Cloud Foundry components from the backup, with what is today a manual procedure; we're looking to automate it in the next release. However, all the procedures are well documented and pretty straightforward: it really consists of spinning up another instance of the VM from a snapshot, from the backup, and then reattaching the existing volume, since the volume is protected by the ScaleIO persistent storage layer.

The monitoring and reporting component of native hybrid cloud is really a single place for data collection, reporting, monitoring, and alerting. It provides not only runtime reports on the state of health of the individual virtual machines comprising Cloud Foundry, but also historical information that covers the Cloud Foundry components themselves as well as the application developers' activity in Cloud Foundry. So we monitor the applications and application instances as they're published in Cloud Foundry, we collect their footprint information over time, we record the number of instances of each application and the number of service instances, and from there we generate the reports. These reports are available either through the user interface or can be exported as a web page, a PDF, Excel files, et cetera. In addition to the operational reporting, we provide financial insight that looks at the showback information available from Cloud Foundry: how many usage hours are generated by each application and each service, for a given organization and space. We use a couple of mechanisms to collect this data from Cloud Foundry: the JMX provider, which is a native component of Pivotal Ops Metrics, as well as a set of API calls made directly to Cloud Foundry to collect data from the application layer.

From a logging standpoint, we've built in an open-source ELK stack component that pulls the logs from both the native hybrid cloud components and the Cloud Foundry components into a log analysis platform. That log analysis platform is available as part of the solution, but at the same time it offers customers the ability to export the logs in real time to external log analysis tools such as Splunk and others. In addition to the logging itself, we provide a set of dashboards that display at a glance the performance of native hybrid cloud and the Pivotal Cloud Foundry components, as well as the alerts that exist in the system. Each of these dashboards supports drill-down into more detailed information for administration, support, and troubleshooting purposes. And as I mentioned, Chargeback or Showback, depending on the terminology used in your organization, gives business units the ability to understand in much greater detail exactly how they're using the underlying infrastructure. We actually use that for some of the service provider functions as well, because this information can easily be filtered and extracted for a given tenant or a given development department and provided to managers or financial administrators, so they can plan their development capacity going forward.
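(Editorial note: as a rough idea of what the "API calls directly to Cloud Foundry" part of that data collection might look like, here is an illustrative sketch against the public Cloud Foundry v2 API, not the actual native hybrid cloud collector. The API URL and token are placeholders; a real collector would obtain and refresh the token through UAA and walk the paginated results.)

```python
# Illustrative sketch of collecting per-application footprint data from the
# public Cloud Foundry v2 API (not the actual NHC collector). The endpoint
# and OAuth token are placeholders.
import requests

CF_API = "https://api.cf.example.local"     # placeholder Cloud Controller endpoint
TOKEN  = "REPLACE_WITH_BEARER_TOKEN"        # placeholder OAuth token from UAA

resp = requests.get(
    f"{CF_API}/v2/apps",
    headers={"Authorization": f"bearer {TOKEN}"},
    verify=False,                           # lab setting only; use real certs in production
)
resp.raise_for_status()

total_mb = 0
for app in resp.json()["resources"]:
    entity = app["entity"]
    reserved = entity["memory"] * entity["instances"]   # MB reserved across instances
    total_mb += reserved
    print(f'{entity["name"]}: {entity["instances"]} x {entity["memory"]} MB = {reserved} MB')

print(f"total reserved memory: {total_mb} MB")
```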
So with that, it's time for questions. Okay, if there are no more questions, then we'll proceed with the raffle. All right, so we're raffling off an Amazon Echo. I'm sure there's going to be a lucky person here in the room. And the lucky winner is 970-377. There we go, we have a winner.