Hi, everyone. So we're just getting started. Great. Our session today is about orchestrating hybrid microservices with Kubernetes on OpenStack. We're from GigaSpaces, the company behind Cloudify. I'm Sivan, and this is Ron, and we'll just get started.

We heard a lot this morning about trends in the cloud environment and about how microservices are picking up, with Kubernetes often used to manage them. We're talking about the Linux container orchestrator authored by Google: open source and quite popular. And it has quite a few cool features. You can do load balancing with it, auto-healing, scaling; it can really orchestrate big parts of the flow.

But what if you wanted to run Kubernetes in a hybrid environment? What if your microservices were not all managed by Kubernetes? Or if your cloud was hybrid and built from different technologies? What if you also wanted to add a few additional lifecycle stages, like monitoring, like auto-scaling? And what if you wanted a single pane of glass to manage your entire environment? This is what we'll be talking about today, with a demo.

So if those are of interest, Cloudify can come to the rescue. Cloudify is an open-source product by GigaSpaces. It's an orchestrator based on TOSCA: we use TOSCA, an emerging cloud standard, to model workloads. Cloudify is technology agnostic. As you can see in this diagram on the right, it can work on top of many different technologies. Whether it is cloud or bare-metal virtualization, whether we're talking about cloud-native applications like Google does today with containers, or just VMs, Cloudify can manage that, while integrating with the entire DevOps toolchain. You can use Cloudify to orchestrate the deployment, but then add to it additional lifecycle-management stages, like monitoring, scaling, healing, and add custom workflows as you go. The architecture is very plug-and-play.
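To make the TOSCA modeling concrete, here is a minimal sketch of what a Cloudify blueprint looks like. The import URLs, version numbers, type names, and property values are illustrative placeholders, not taken from the demo:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  # Spec and plugin versions are illustrative.
  - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
  - http://www.getcloudify.org/spec/openstack-plugin/1.4/plugin.yaml

node_templates:
  app_host:
    # A plain OpenStack VM whose lifecycle Cloudify manages directly.
    type: cloudify.openstack.nodes.Server
    properties:
      image: <image-id>     # placeholder
      flavor: <flavor-id>   # placeholder

  mongodb:
    # A service installed on that VM; Cloudify runs its lifecycle
    # operations (create, configure, start) in order.
    type: cloudify.nodes.DBMS
    relationships:
      - type: cloudify.relationships.contained_in
        target: app_host
```

Each node template declares a type and its relationships; Cloudify derives the deployment order from the resulting relationship graph.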
And as we'll demonstrate, it was very easy to build an additional plugin for Kubernetes to manage hybrid workloads. So without further ado, we'll move to a demo.

What we'll talk about is how to manage an application comprised of two services: one is MongoDB, one is Node.js. We call it hybrid management on OpenStack because one of them is managed by Kubernetes and the other isn't. And we'll see how Cloudify can help in automating the flow of deployment and configuration, including setting up everything needed for Kubernetes and Docker, master and minion included.

Hi. So it seems like the Wi-Fi here is kind of flaky, so we actually prepared in advance. This is the post-deployment stage. We wanted to do this live, but this is how it would look in the Cloudify UI post-deployment. The application we're talking about is NodeCellar: a simple web app comprised of two services, a MongoDB backend and a Node.js web server. It's pretty much a wine-cellar sort of application. It took about ten minutes to run. These are the logs of the deployment execution, so we wouldn't have had time to run it right now anyway.

What we can see here, basically, is that you have MongoDB as a separate host, actually an entire separate tier, for that matter. And that tier is managed completely by Cloudify: Cloudify raised this host and the MongoDB inside of it. Then there is the other tier, which is everything Kubernetes-related, which in this case contains two hosts. One is the master node, the other is a minion. And the master node also runs the actual NodeCellar application.

So how does this translate into the blueprint? First of all, as we mentioned, the plugin for Kubernetes is not built into Cloudify. Cloudify is a pure-play orchestrator; it doesn't direct you to a specific technology or tool. So we had to actually write a plugin for Kubernetes.
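A plugin like this mostly boils down to declaring new node types in Cloudify YAML. The type names below sketch the idea (master, minion, microservice); the actual names and base types in the plugin may differ:

```yaml
node_types:
  # The Kubernetes control-plane (master) host.
  cloudify.kubernetes.Master:
    derived_from: cloudify.nodes.SoftwareComponent

  # A worker (minion) that joins the master.
  cloudify.kubernetes.Node:
    derived_from: cloudify.nodes.SoftwareComponent

  # A workload (pod/service) scheduled onto the cluster.
  cloudify.kubernetes.Microservice:
    derived_from: cloudify.nodes.ApplicationModule
```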
The work on the plugin was relatively short. Just a moment, I'm going to show it elsewhere, because I can't make the Wi-Fi work. So this is the actual definition of the plugin; this is Cloudify YAML. Basically, the plugin gives you three types of building blocks. One is the master node. The other is a minion, or just a regular node. And the third one is a microservice. All three map directly to Kubernetes types and terms.

Once you have this, writing the actual blueprint, which I've just moved to, becomes relatively easy, because most of the work is already done for you. So again, sorry for viewing it here. But I'm going over the way we map the nodes for Cloudify to orchestrate. What we have here is a master host, which is what we saw before, basically. And all of the mappings are done inside the blueprint. The part relevant to Kubernetes is the NodeCellar node, which is defined as a microservice. You can either define the actual Kubernetes configuration inline in the Cloudify blueprint, or you can use an external YAML file.

Once you have this, the important part of what the plugin gives you is the ability to integrate Cloudify with Kubernetes, in the sense that we can map parameters from other services Cloudify is managing directly into Kubernetes. In this case, we have the Node.js service, which is run by Kubernetes, and we want it to use MongoDB, which is another service, run by Cloudify. For the Node.js service to be able to communicate with the Mongo one, we have to override some configuration parameters. These two lines are basically what the plugin knows to translate: it takes, in this case, the MongoDB IP and port, and provides them to Kubernetes. So once it brings up Node.js, it will know to connect to MongoDB.

Let's see if the Wi-Fi happens to be back. Basically, I want to show the actual app.
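The parameter mapping described above might look roughly like this in the blueprint. The property and node names here (`config`, `env`, `mongodb_host`, `MONGO_HOST`) are hypothetical; the point is that `get_attribute` pulls runtime values from the Cloudify-managed MongoDB node into the Kubernetes spec:

```yaml
node_templates:
  nodecellar:
    type: cloudify.kubernetes.Microservice
    properties:
      # Either inline the Kubernetes configuration here,
      # or reference an external Kubernetes YAML file.
      config:
        env:
          # These two lines are what the plugin translates:
          # runtime attributes of the Cloudify-managed MongoDB
          # are injected into the pod's environment.
          MONGO_HOST: { get_attribute: [ mongodb_host, ip ] }
          MONGO_PORT: { get_attribute: [ mongodb, port ] }
```

Because the values are resolved at deployment time, the Node.js pod comes up already pointed at wherever Cloudify placed MongoDB.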
Let's see if it works. Right. Nothing. OK. So this is the way we can see the hosts on OpenStack: we have the MongoDB host, the master host, and the minion host. And the master host also has the public IP. If I were able to connect to it right now, I'd show you that Kubernetes is running over there. We could see that both the master node and the minion node are online, and that the actual pod for the NodeCellar app runs on the master host, in this case, because that's the way it was mapped in the blueprint. So that was our demo. Any questions?

Great. So what we just saw is how to manage containers in a hybrid environment, partly by Kubernetes, partly not. What we can also do is include in the blueprint that Ron showed additional lifecycle stages: for example, the monitoring metrics that need to be collected. Cloudify can then collect them and automatically react to events happening at runtime, scaling and healing based on those. So what you really get is full lifecycle-management automation for each and every stage of the application.

And that's it. So be sure to visit us at our booth, booth 48, where you can enter to win a drone. Yeah, that's it. Thanks. Thank you.
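The monitoring and healing stages mentioned at the end are expressed in Cloudify through groups and policies. The sketch below illustrates the mechanism rather than the exact demo blueprint; the group and node names are hypothetical, and the policy/trigger type names follow the general Cloudify pattern of watching a host's metrics and invoking the built-in `heal` workflow when it stops reporting:

```yaml
groups:
  db_tier:
    members: [ mongod_host ]   # hypothetical node name
    policies:
      failure_detection:
        # Fires when the host's monitoring stream goes silent.
        type: cloudify.policies.types.host_failure
        triggers:
          on_failure:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: heal
              workflow_parameters:
                node_instance_id: { get_property: [ SELF, node_id ] }
```

The same group mechanism can attach a scale trigger instead of heal, which is how metric-driven auto-scaling is wired up.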