Hi, everyone. Welcome to our session on Cloudify and NFV on OpenStack. I'm Arthur Bresen, director of product management for Cloudify, and I have Adam with me, one of our core developers. We're going to be doing a live demo today, so that should be interesting. To set the stage, let me give you a little background on Cloudify: what it is and how it operates. Cloudify is a cloud orchestration platform. If you look at the ETSI NFV architecture diagram, Cloudify fills the NFV orchestrator box and also acts as a generic VNF manager. Cloudify is an open source project, and we offer a commercial product based on it. It is TOSCA-based, and I'll talk a bit about TOSCA: what it is and what you can do with it. The thing that makes Cloudify really unique in this space is that it's highly pluggable, which means you can plug in almost any framework that has an API and use it with Cloudify to orchestrate resources on that platform. So if you look at the many spaces we operate in, Cloudify is essentially a pure orchestration framework based on TOSCA, and it integrates with many components. Obviously, any workload usually runs on some sort of infrastructure, typically an infrastructure-as-a-service component. In our context today, with OpenStack, we orchestrate workloads on top of OpenStack. But because Cloudify takes a pluggable approach, we can run workloads on OpenStack, VMware, AWS, or any IaaS out there. We can also integrate with other components, for example configuration management tools, to provision the application itself on top of the VM.
We can integrate containers into that picture, and obviously a bunch of network services that are deployed inside virtual machines installed alongside the OpenStack deployment, providing networking services to the OpenStack infrastructure, and so on. So Cloudify plays along with all of these components, and as I mentioned, it is based on the TOSCA standard. What do I mean by that? TOSCA stands for Topology and Orchestration Specification for Cloud Applications. Essentially, TOSCA is a spec for describing applications. You describe the topology of your application, which looks something like this little example diagram. You describe the different components: you can say, I have an OpenStack machine; on top of that machine I have a container, and an SDN controller installed within that container. This description is written in a YAML file, so it's very simple to put together, but also very easy to read and change if you need to make changes to the architecture. It's declarative: I declare all the components I use, and I can reuse the components that construct my application. For example, I can say: this is my VNF, this is my application, it consists of several tiers, and I can reuse those components in other VNFs, mix and match them, and do composition between multiple blueprints, which essentially allows service function chaining and interoperation between multiple network components. The second part of TOSCA is workflows, which are what make the whole thing actually work. Once I've built my application topology, Cloudify provides workflows built into the product itself, and these workflows can access the topology and run logic based on the topology I described in my application blueprint.
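A topology like the one just described might be sketched in a Cloudify TOSCA blueprint along these lines; note that the import URLs, versions, image and node names here are illustrative assumptions, not taken from the actual demo:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  # Cloudify's built-in types and the OpenStack plugin (versions are assumed)
  - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
  - http://www.getcloudify.org/spec/openstack-plugin/1.4/plugin.yaml

node_templates:
  # An OpenStack VM, provisioned via Nova
  host:
    type: cloudify.openstack.nodes.Server
    properties:
      image: ubuntu-14.04      # assumed image name
      flavor: m1.medium        # assumed flavor

  # An application component hosted on that VM
  sdn_controller:
    type: cloudify.nodes.ApplicationModule
    relationships:
      # "contained_in" expresses the hosting relationship from the diagram
      - type: cloudify.relationships.contained_in
        target: host
```

The declarative reuse mentioned above comes from this structure: `sdn_controller` can be pointed at a different `host`, or the whole template can be composed into another blueprint.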
Obviously, the most common default workflows are install and uninstall. Everyone wants to install an application and be able to uninstall it, although not many people actually uninstall applications. And you have very common workflows such as scale and heal. For example, I see a lot of traffic on my VNF, say my virtual router, and I need to scale it: introduce another virtual router and, as part of the scale logic, also steer part of the traffic to the new router. That falls under the scale and heal workflows. Now, these workflows are declarative. It's not that I write a script that tells the orchestrator exactly what to do; rather, the workflow runs through the topology, and based on that topology it dynamically produces and executes the steps. In addition to those declarative workflows, I can also create imperative workflows when I need them. For example, I have some custom operation to do maintenance on my system, or to update an internal mechanism to let it know, say, that I've registered a new network service. That falls under imperative workflows, which I can write myself, and there too I have full access to the topology. So, for example, if I need to update my inventory mechanism to record that I've onboarded a new network service, but I need some parameters or credentials from the newly deployed environment to pass along to that inventory mechanism, I can do that with an imperative workflow. The third part of TOSCA is the ability to execute policies. For example, I realize that the network traffic to my router is reaching 80% of capacity, and now I need to scale. Policies are where I define what should happen on a certain event.
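As a rough sketch of the imperative-workflow idea, a custom workflow can be declared in the blueprint and mapped to code of your own; the plugin module path and parameter names below are hypothetical, invented for illustration:

```yaml
workflows:
  # Custom day-two workflow for the inventory-update example;
  # "inventory_plugin.workflows.register_service" is an assumed module path
  register_service:
    mapping: inventory_plugin.workflows.register_service
    parameters:
      service_name:
        description: Name of the newly onboarded network service
      inventory_url:
        description: Endpoint of the inventory mechanism to notify
```

Because the workflow runs on the manager with access to the deployment's topology, the mapped code can read runtime properties (such as credentials of newly deployed nodes) and pass them along to the inventory endpoint.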
To implement that, we have a policy engine as part of the product, based on an open source project called Riemann (riemann.io), which is a built-in event processing mechanism. Using that, you can essentially plug the monitoring metrics that come from the application itself into triggers for the workflows you have in the system. So I write this TOSCA blueprint, and Cloudify reads and interprets the blueprint for my specific application. Cloudify then works with multiple plugins to execute that blueprint. We have different types of plugins. Our plugins are independent, which means I can create a new plugin for almost anything out there with an API, and that plugin essentially contributes new node types that I can use within my blueprint. When Cloudify reads the blueprint, it triggers the correct implementation, which is part of the plugin, which in turn calls the API endpoints. So in the earlier example, where I had an application running on OpenStack, what Cloudify would do is make API calls to OpenStack to create a new virtual machine, configure Neutron, install the virtual router on top of that, and so on. That's enough talking; it's demo time. Let's say a little prayer to the demo gods to make sure things work correctly. Just to set the stage for Adam here: in the demo, we're going to have Cloudify managing an OpenStack controller. We also have a RUD controller and RUD hardware installed in New York, running an Atex device with compute nodes, and we control that environment. As part of the blueprint, we provision a new VM for each of two different corporations that operate in that building, and then Cloudify provisions the VNFs on those machines. In this demo, we're going to focus on FortiGate.
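As an illustrative sketch of how the policy engine ties metrics to workflows (the group name, service name, and threshold value are all assumptions, not from the demo), a blueprint can wire a metric threshold to a workflow trigger roughly like this:

```yaml
groups:
  router_group:
    members: [virtual_router_vm]    # assumed node name from the topology
    policies:
      high_traffic_policy:
        # Built-in threshold policy type evaluated by the Riemann-based engine
        type: cloudify.policies.types.threshold
        properties:
          service: network.throughput  # assumed metric name
          threshold: 80                # the "80% of capacity" example
        triggers:
          scale_out:
            # When the threshold is crossed, execute a workflow
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: scale
```

The monitoring metrics flow into the manager, the policy evaluates them, and the trigger starts the named workflow, which is exactly the scale scenario described above.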
But you can essentially trigger and provision any type of application workload that you'd like. And just to make things more interesting, we have another site running in London. So Adam, the stage is yours. All right, guys. Hi. So we've reached the live demo part of this session, and hopefully, if everything goes well, we're going to see some pretty cool stuff. For this demo, assume we have two OpenStack compute nodes in two separate parts of the world, and one OpenStack controller that connects to both of these compute nodes. What we've actually done for this demo is take the application topology that Arthur just displayed and model it as a TOSCA-compliant Cloudify blueprint. That means we now have a bunch of nodes defined in my TOSCA blueprint, some of them infrastructure-level nodes and some of them application-level nodes. What I'm going to do is take that blueprint and upload it to our Cloudify Manager. The Cloudify Manager is something I set up earlier; it sits on my OpenStack environment and is in charge of actually executing our blueprint. So we're going to start by running a simple Cloudify command. One second. Great. What it's actually doing right now, and we're going to see it in just a few minutes, is taking this blueprint and uploading it to the manager that resides in my OpenStack environment. As we can see, the blueprint has been uploaded to the manager, and now a new deployment environment is being created. The deployment environment is essentially an environment where we install all the blueprint's dependencies so we can actually provision all the resources that we need.
So right now what's happening is that we're taking our FortiGate application and starting new VMs: one VM for the FortiGate to sit on, and another VM for the SNMP proxy service that we'll get into in just a few seconds. Let's skip right over to our Cloudify Manager. The Cloudify Manager also has a web UI, so we can get a better view of what's actually going on with my topology. Great. So, not so great; could you make it smaller? Okay, as we can see, hopefully, wait, I'll just make it a bit larger. All right. So the workflow isn't done yet, but it will finish in just a few seconds. Let's have a quick look at the application topology we just set up. We have two buildings, one in New York and one in London. We've taken the building in London and started a new VM in it, and on that VM we've installed the FortiGate. We've provisioned another VM, the SNMP proxy VM, whose job is to collect metrics from our FortiGate service and pass them along to our Cloudify Manager. The Cloudify Manager can then decide what to do according to the policies we've defined. As we can see, we've also provisioned three networks, which are right over there in the top layer. What this means is that we can now start a VM in each one of our buildings and have it connect to any of those three networks. One network is connected to the Internet, another is a private local network, and the third is a network we've generated for our Cloudify Manager. So we have three networks, and we can start VMs on both sides of the world that can connect and talk to each other over each one of those networks. And in our application topology, we can see our FortiGate, and we can see a router that's also been provisioned.
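The three networks just described could be modeled with the OpenStack plugin's network node types, roughly like this (all of the network names here are illustrative assumptions):

```yaml
node_templates:
  # Internet-facing network: reuse a pre-existing OpenStack network
  external_network:
    type: cloudify.openstack.nodes.Network
    properties:
      resource_id: internet-net   # assumed network name
      use_external_resource: true

  # Private local network shared between the two sites
  private_network:
    type: cloudify.openstack.nodes.Network
    properties:
      resource_id: private-net

  # Network generated for the Cloudify Manager's own traffic
  management_network:
    type: cloudify.openstack.nodes.Network
    properties:
      resource_id: cloudify-mgmt-net
```

A VM node template then attaches to any of these networks via a relationship, which is what lets VMs in both buildings talk to each other over the chosen network.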
That's generally it. Now, that's nice: we've seen a deployment, and it worked as expected. But what about day-two operations? Day-two operations are things you would normally have to do on a regular basis, and we're going to see a demo of exactly that. Okay, so we're going to take the FortiGate IP just to make sure that it's running, and we can see that everything is up. You don't really have to know FortiGate to follow this demo; we're just going to take a quick look at what we've provisioned for this FortiGate. Go under networks and interfaces, and here we can see the three ports we've provisioned for our three networks. Port number one, on the first network, allows access protocols such as HTTP, SSH, SNMP, HTTPS, and FMG access. For this example, I've chosen to execute a workflow that will update one of these interfaces with new protocols that we may want to add later on, after the deployment stage. To automate that, we've created a custom workflow. One second, I'm going to need this string from the deployment. What I'm going to do is run my custom workflow, which will add an access configuration to port two's system interface. This is the actual configuration we're going to use to execute the script on the remote machine: the property name is allowaccess, and we'll add SSH and HTTP. We hit the confirm button and go back to our FortiGate client. As we can see, port two doesn't allow any protocols for now. And hopefully, once I hit, one second, we'll see the workflow just finished; we'll see the state of the workflow. Great, it finished successfully. Now we hit F5, and hopefully everything worked as planned. Great. We can see that these properties have been updated. Wait, that's not all of it. We still have another part to this presentation.
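The parameters passed to this custom workflow would look roughly like the following; the workflow name and parameter keys are assumptions reconstructed from what's shown on screen, not the demo's actual identifiers:

```yaml
# Hypothetical execution parameters for the interface-update workflow
workflow: update_interface
parameters:
  config_path: system interface   # FortiGate configuration section to edit
  port: port2                     # interface to modify
  property_name: allowaccess      # property being updated
  values: [ssh, http]             # protocols to allow on the port
```

The workflow then runs a script against the FortiGate's remote API with these values, which is why port two goes from allowing no protocols to allowing SSH and HTTP after the refresh.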
And I have three minutes, so I'm going to make it quick. Let's run back to our topology. As I mentioned before, but haven't really gone into in detail, we have provisioned an SNMP proxy. This is one of our many solutions for monitoring an application. In general, FortiGate runs on its own operating system, so we have a hard time installing our agent on it, because it's a very custom operating system. For this solution, we chose to use an SNMP proxy instead. What we actually do is run a machine with that SNMP proxy, and the SNMP proxy connects to the FortiGate API and retrieves metrics from it. We can then plug those metrics into our Cloudify Manager and get actual live monitoring of our FortiGate instance. So if we go to monitoring, we can see that currently nothing is being monitored. But we have defined our monitoring rules in the blueprint, and therefore we should be able to connect to our SNMP proxy and retrieve these metrics. So, right, edit, and under the series, I'm going to... FortiGate exposes a lot of metrics; we've chosen just a few for the sake of this example. I'm going to choose the CPU metric, edit it, save it, and go back to monitoring. Sorry about this. Back to the dashboard. Great. We've set up a view of the metrics we're getting live from FortiGate, and we're out of time, so that's exactly on time, because we're done with this demo. For any questions, Arthur and I will be here after this presentation, so you can come and ask anything you'd like. Thanks. Thank you.