Hello, everyone. I'm Chandan from Juniper Networks. I work on OpenStack as a firewall service developer, and today I'm going to talk about OpenStack as an application platform. We all know that OpenStack has created this great platform for infrastructure as a service. If you ask any user or administrator of OpenStack, they will tell you how great OpenStack is at providing all the cool features in the infrastructure domain. It has really helped users move to a paradigm where they can look at infrastructure as APIs, rather than thinking about acquiring physical resources like new servers or storage. They can simply make API calls, and that problem has been solved. So from OpenStack's point of view, it has already provided the end user with a great infrastructure-as-a-service platform. If you compare OpenStack with a conventional platform, like a hardware platform, the most striking difference OpenStack brings is the elasticity of your infrastructure. As an end user, you can ask for additional resources when your demands grow, and when your demands come down, you can ask for the resources to be taken back. This is a great and powerful feature that OpenStack provides. But the problem with OpenStack has been that it has focused too much on the infrastructure. So what I'm going to try to show today is how all these powerful features of OpenStack can be used to the advantage of an application developer. First of all, let's look at how the application paradigm is different from the infrastructure paradigm. Applications are tuned towards thinking about what kind of resources they will have. They think in terms of memory, CPU usage, storage, and network bandwidth allocation. These are the things that affect the running of an application, yet all of them are actually controlled by the infrastructure provider.
OpenStack can provide you APIs for controlling all of this infrastructure. So we see there is a gap between what the application actually wants to use and what OpenStack is providing. Both sides have the capability: one side owns the infrastructure and is able to provide it based on API calls, and the application wants to use those infrastructure pieces, but they are not actually communicating with each other. So the focus of my talk today is: how do I make my application take advantage of these infrastructure pieces dynamically? To give you an example, let's say we have some application that we want to run on OpenStack. That application is able to dynamically ask for resources, or it is able to provide statistics of its usage, or the kind of bottleneck it is facing, and submit that information directly to OpenStack. And from OpenStack's point of view, it can look at the application's usage of the resources and come up with a better strategy for deploying those applications. So it's a win-win for both sides. Let's move on to the next part. Here I'm talking about thinking from an application developer's point of view: move to more application-centric thinking, and let the application itself request more resources, or request the resources it finds lacking, instead of a third party monitoring the application as a process and asking for those resources on its behalf. Another thing the infrastructure can help the application developer with is the discovery of applications. Most applications in today's world are becoming more and more complicated. They have multiple processes running, and most of the time these processes need to discover each other.
The way the infrastructure provider can help with this is if the infrastructure itself keeps track of what kind of applications it is running. When a new application comes to this infrastructure, it can query this registry of information and get to know all the other applications that are running on, say, OpenStack. Another situation may be the creation of clusters. There are a lot of situations where an application can have replicated instances, or the application itself is made out of smaller components. All these components can be part of a group of applications, and that group can behave like a single entity. The infrastructure itself can help in creating that group. And in such a complicated application environment, it is always a problem to configure the application properly. In this complex interconnectivity of applications, the infrastructure is the one piece that can see all these applications and provide a seed to do the configuration correctly. To accomplish all of this, the first step is to look at the application as a resource itself. Today in OpenStack, we look at a VM or a container as a resource. To help the application developer, and to understand the requirements of the application, we first have to model the application as an OpenStack resource. The definition of an application itself can be very fluid, because an application can be one instance of a process running on one server, or it can be multiple processes which communicate with each other, with the whole group of processes forming an application. The other requirement is that these applications be configurable through some kind of API. Now, OpenStack is well known for providing REST-based APIs, and we can think of the application APIs as REST-based endpoints as well. And OpenStack can obviously keep track of the applications that are running, as a registry.
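To make the idea of the application as a first-class resource concrete, here is a minimal Python sketch of what such a resource record and registry might look like. All names and fields here are illustrative assumptions matching the demo shown later (name as identifier, blank stats, quota, resources); this is not an existing OpenStack API.

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    """An application modeled as an infrastructure resource (illustrative)."""
    name: str                                      # the demo uses the name as identifier
    stats: dict = field(default_factory=dict)      # self-monitoring data pushed by the app
    quota: dict = field(default_factory=dict)      # admin-set limits on dynamic requests
    resources: dict = field(default_factory=dict)  # resources allocated so far

# The registry the infrastructure would keep: name -> Application.
# A new tenant application could query this to discover its peers.
registry: dict[str, Application] = {}

def register(name: str) -> Application:
    """Create a blank application entry, as in the first demo call."""
    app = Application(name=name)
    registry[name] = app
    return app
```

The point of the sketch is only that the registry makes applications queryable, the same way Nova makes instances queryable.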
From the OpenStack point of view, up to today, the maximum granularity it can look at is maybe a VM or a container. But going forward, if you make the application a resource, then the application developer will be able to provide information about what kind of applications are running on OpenStack. And from the cloud provider's point of view, you have better control and understanding of what kind of workloads are being run on your platform. So let's look at the use case of dynamic resource allocation. As I said, one of the major advantages of using a cloud-based platform as your infrastructure is its elasticity. As demand increases, you get to acquire more resources, and when demand goes down, you can give them back. So how do I, as an application developer, take advantage of this? If we have an API which is able to track this application, that API can also take input from the application and understand its needs in terms of, say, extra storage, bandwidth, or security policies to be applied. Then there can be two-way communication between the infrastructure and the application. Contrast that with today's situation: applications are started with a static environment, and then they are entirely at the mercy of that environment as to what resources are available. And if you want to scale up the application, you have to depend on an external agent which has limited insight into the kind of activity the application itself is doing. Compare that with a situation where the application developer has built this intelligence into the application itself, so it provides fine-grained data about its resource utilization. One of the core principles of getting to this kind of situation is self-monitoring by the application.
This self-monitoring data will form the basis of all of it: dynamic resource allocation, monitoring, and triggering of scale-up or scale-down of the application. Dynamic scale-down is actually a bit of a tricky problem, because you have to look at the data over a period of time to understand when you expect the demand to come down. Unlike scaling up, where you see demand going up and can immediately trigger a scale-up, there is no direct way of saying that just because I saw demand dip a little at this point in time, I should scale down my infrastructure. Instead, you have to look at a lot of historical data and come up with trends: during this period of the day, or this time of the week, we will see a little less resource consumption or a little less demand for my service. Depending on the trend, I can scale down my infrastructure. The application infrastructure can help in generating those trends by looking at the self-monitoring data that the application has provided. Now, I have talked about dynamic allocation of resources directly by the application, and there are two sides to it, because as an application developer you cannot just ask for any amount of resources. There has to be some limit. One way to restrict the amount of resources an application can ask for is by writing a policy, or giving a certain quota to the application. Within that quota, you can ask for a certain amount of resources. But if your resource requirements go beyond that quota, maybe the best way to deal with it is to create a log entry in the application's self-monitoring data.
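The scale-down logic described above, reacting to a sustained trend rather than a single dip, can be sketched in a few lines. The threshold and window values are hypothetical parameters, not anything prescribed in the talk.

```python
def should_scale_down(samples: list, threshold: float, window: int) -> bool:
    """Scale down only when demand stays below a threshold over a window.

    Unlike scale-up, which can trigger on a single spike, scale-down
    averages a run of recent self-monitoring samples so that one brief
    dip in demand does not shrink the infrastructure.
    """
    if len(samples) < window:
        return False  # not enough history yet to see a trend
    recent = samples[-window:]
    return sum(recent) / window < threshold
```

A real deployment would likely replace the plain moving average with per-day or per-week seasonality, as the talk suggests, but the shape of the decision is the same.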
That log can act as a means of tuning the quota in future allocations. So, based on what we have talked about until now — the application being a resource, self-monitoring statistics from the application, and the requirement for dynamic allocation of resources directly by the application — we can look at some simple APIs that would be required to get to that point. I have put down certain APIs here. We have application instance creation, which is the first API. Then we have the stats API: stats is the self-monitoring data that the application can gather and push to OpenStack. Then we have dynamic resource allocation, another REST API that can be used to ask for more resources. And finally, we have a quota API, which will be admin only; the admin can decide what kind of quota to provide to the application. Now, one of the interesting things about these APIs is how we make them accessible. We don't want to put any special requirement on the application. First of all, it has to work for any and all kinds of applications. Secondly, it should work in ideally all kinds of environments. And the third thing to keep in mind is that these application-specific APIs are going to be used by the applications, not by the end user, so access will mostly be restricted to the VM. Based on these requirements, I tried to put together a small proof-of-concept demo. To restrict these APIs to the VM itself, I have extended the metadata agent to allow these application APIs to be called. The additional benefit of doing it over the metadata agent is, first of all, that the metadata endpoint is available to all applications running within any instance. We don't need any external network connected to the VM; the REST calls can be made directly over the well-known metadata endpoint at 169.254.169.254.
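As a sketch of how an application inside an instance might call such an extended metadata service, the snippet below only prepares the HTTP requests without sending them. The link-local address is the standard metadata endpoint, but the paths under it and the payload fields are hypothetical, matching the four APIs listed above.

```python
import json
import urllib.request

# The metadata endpoint is link-local and reachable from inside any
# instance with no external network. The path suffix is an assumption.
BASE = "http://169.254.169.254/openstack/application"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Prepare (but do not send) a POST against the extended metadata API."""
    data = json.dumps(payload).encode()
    return urllib.request.Request(
        BASE + path,
        data=data,
        method="POST",
        headers={"Content-Type": "application/json"},
    )

# Create a blank application entry (first API in the list above).
create_req = build_request("", {"name": "web-frontend"})

# Push self-monitoring stats (second API); the metric names are made up.
stats_req = build_request("/web-frontend/stats",
                          {"cpu": 0.7, "queue_depth": 42})
```

Sending each request would be a single `urllib.request.urlopen(create_req)` call from inside the instance.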
And it naturally restricts the usage of this API to the VM instances. I have a small demo here. The demo starts with creating a simple instance. This is well known to everyone; it's just to show that we don't have any special requirements. The API calls are plain REST calls, and the metadata endpoint should be available to virtually all instances on OpenStack. Although I have enabled SSH, that is just for the purpose of the demo, to show you that the REST APIs can be called. So this is the API that I was talking about for creating the application. If you look at it, you'll see that I am using the endpoint that is used by the metadata agent, and the metadata service has been extended to understand these application APIs. To start with, we have a blank application. We have no stats. Currently, I'm using the name of the application as the identifier, and we have some quota config and resources, which can be tuned later on. One thing I would like to point out here is that the definition of an application is actually very dynamic. You can have one application spread across multiple instances, or multiple applications on a single instance. In the later slides, we'll look at how to manage the identity of an application, and how that identity can survive beyond a reboot or similar events. So here you have OpenStack managing a list of applications, and you can obviously think of it as a registry. This kind of registry can always help the tenant discover other services that might be running on the OpenStack platform. And this is the API that we are using to ask for dynamic resources for an application. We are basically doing a POST request on this endpoint to let the infrastructure know that this application is asking for a given resource with a certain value.
The whole purpose of this demo was to demonstrate that an internal API, based on an extension of the metadata service, can work as a conduit for the application to work along with OpenStack, take advantage of the elasticity of an OpenStack-like platform, and derive the maximum advantage for the application developer. We can also see the other use case, where the application interacts with other components and shares configuration across a cluster or group of applications. Coming back to the presentation: if you watched that demo, you can see that this is the way the application communicates with the infrastructure pieces. It uses the metadata API endpoint, which has been extended to handle application infra requests. Things like submission of self-monitoring statistics, or the application asking for extra resources, are handled by this app infra handler. Depending on the kind of request, it can either go into a database, which helps OpenStack get a better view of what kind of workloads are being run on the infrastructure, or it can be an infrastructure API call itself, proxied by this app infra handler. If you look at this REST call a bit more closely, you will see that there are a few headers that were part of it. Two of them are well known; they are already part of the request that a metadata client, like cloud-init, makes. So it has the tenant ID and the instance ID of the instance where this application will be running. But the third one is something that we added. This is to make sure that we identify which of the applications on an instance is actually making this request. The password can be a kind of shared secret between the infrastructure and the application.
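The three headers just described might look like the following. `X-Tenant-ID` and `X-Instance-ID` are the headers a Neutron metadata proxy typically injects when forwarding a request; the secret header's name is a hypothetical stand-in for the addition the talk describes.

```python
def identity_headers(tenant_id: str, instance_id: str, app_secret: str) -> dict:
    """Headers that let the app infra handler identify the caller.

    The first two are injected by the metadata proxy on the way through;
    the third (name is illustrative) identifies which application on the
    instance is making the request, via a shared secret.
    """
    return {
        "X-Tenant-ID": tenant_id,            # owner of the instance
        "X-Instance-ID": instance_id,        # which instance the call came from
        "X-Application-Secret": app_secret,  # which app on that instance
    }
```

The handler side would compare the secret against what was handed to the instance, so two applications on the same VM cannot impersonate each other.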
The way to pass this information to the application can be through the metadata agent itself. It can be one of the key-value pairs that gets passed on to the instance, and the application can pick up this shared secret and start establishing its own identity. The second example here is about resource allocation, although I only showed the POST method on this endpoint; I haven't actually worked on the handler. What ideally should happen is that once you ask for a resource from the application infra, the infra has to validate that the request is well within your quota, and then use this request to make a call, using something like Heat or another client, to make those resources available to the instance where the application is running. Then it's up to the application to expand its usage of the resource. Now, one of the problems we saw with multiple applications on an instance, or one application spread across multiple instances, is: how do we identify whether a single application or multiple applications are running on a single server? There are a couple of things that can help in determining the application identity. One is the location of the application — the owner of the instance where the application is running. And of course, we have the shared secret between the infrastructure and the application, passed over the metadata agent, which can help in creating a unique identity for the application. A few things to keep in mind: application identity can be challenged when you have replication of instances for applications. Suppose you have a VM which you cloned and started another copy of; then you have the same application running on both instances.
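The handler that the talk describes but did not implement might look roughly like this: validate the request against the application's quota, then delegate to an orchestration call. The data shapes are assumed, and `allocate` stands in for a real call through something like a Heat client.

```python
def handle_resource_request(app: dict, resource: str, amount: int,
                            quota: dict, allocate) -> bool:
    """Validate a dynamic resource request against quota, then allocate.

    `app` is the stored application record, `quota` the admin-set limits,
    and `allocate(resource, amount)` the infrastructure-side call that
    would actually provision the resource (e.g. via Heat).
    """
    used = app.setdefault("allocated", {}).get(resource, 0)
    limit = quota.get(resource, 0)
    if used + amount > limit:
        # Over-quota requests are logged: the talk suggests this log is
        # what feeds future quota tuning by the admin.
        app.setdefault("log", []).append((resource, amount, "denied"))
        return False
    allocate(resource, amount)
    app["allocated"][resource] = used + amount
    return True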
Or you try to move your application from one VM to another, or you have a reboot situation where the instance has been rebooted and the application has to reclaim its identity. All of these are challenges that need to be solved. The implication of not providing a strict application identity is mainly a security concern, because you have an application tied to a list of dynamic resources. If an impersonator is able to get access to your application identity, he can use those resources. And similarly, if you're using this application infra to push configuration and configure other peers of the application, an impersonator would be able to influence the configuration of other instances of the application. So these are the base use cases that I see for creating an application infrastructure, or application API, for OpenStack. Beyond these basic use cases, we can look at some advanced ones. Configuring an application is one of those cases where an infrastructure that provides a repository of configuration, shared by multiple instances, can be useful. It can be the single source of truth for a list of applications. An extended use case is changing the configuration of your application across multiple nodes: the infrastructure can give a notification that a certain configuration needs to change for this list of applications, and the applications then handle that notification and reload themselves. Another use case is a multi-node application. A multi-node application will mostly be some kind of cluster, and clusters themselves can be of two types. One is multiple instances of the same application running on different nodes for load sharing. The other is a role-based cluster, where you have multiple instances of the application and each of them serves a different purpose.
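The configuration-repository use case — applications watching for a change notification and reloading themselves — could be sketched as a simple version check. The `fetch` callable and the version scheme are assumptions standing in for whatever notification mechanism the infra would provide.

```python
def watch_config(fetch, current_version: int, reload_app) -> int:
    """Poll the shared config store; reload the application on change.

    `fetch()` returns (version, config) from the infra's configuration
    repository, the single source of truth for the application group;
    `reload_app(config)` is the application's own reload hook.
    """
    version, config = fetch()
    if version != current_version:
        reload_app(config)   # pick up the new shared configuration
        return version       # remember what we are now running
    return current_version   # nothing changed; keep the old version
```

A push-based notification would replace the polling loop, but the reload contract on the application side stays the same.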
Both of these situations can be helped if we support something like an application group. An application group is another construct that can be part of this application infra. The way we do it — at least the way I plan to do it — is to create a group within the application infra API, and then various applications can join this group and share configuration with each other. Again, joining this group needs to be authorized and authenticated. There has to be a group secret shared among the group members, and that secret gives access to both the configuration and the resource allocation for the group. For a load-balanced application, application self-monitoring can provide a lot of insight into the resource utilization or demand the application is currently facing. Over time, a load balancer can look at the data provided by the application and do things like pulling an application instance out of a group of load-balanced nodes. We can also think of using this application data to schedule the applications or instances in a way that adheres to high-availability policies — say, not starting all instances of an application on a single node or in a single region. Here are some APIs that can help with these extended use cases. The first one is for storing the config of an application. The second, as I mentioned, is about creating a group and then participating in it. To create a group, we create it with a group ID and a shared secret that gives instances access to become part of the group. The group can have special configuration which will be shared by group members, and in a clustering environment this group can be a representation of a cluster. So, in summary, what I'm proposing is an application-specific API.
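The group creation and joining flow just described might be sketched like this, with the group secret gating access to the shared configuration. All names are illustrative, not an existing API.

```python
import hmac

# In-memory stand-in for the group registry the application infra would keep.
groups: dict = {}

def create_group(group_id: str, secret: str, config: dict) -> None:
    """Create a group with a shared secret and group-wide configuration."""
    groups[group_id] = {"secret": secret, "config": config, "members": []}

def join_group(group_id: str, secret: str, instance: str):
    """Authenticate with the group secret; on success, register the member
    and return the configuration shared by all group members."""
    group = groups.get(group_id)
    # Constant-time comparison avoids leaking the secret via timing.
    if group is None or not hmac.compare_digest(group["secret"], secret):
        return None  # unauthorized: unknown group or wrong secret
    group["members"].append(instance)
    return group["config"]
```

In a role-based cluster, the returned config could additionally carry the role assigned to the joining member.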
The advantage of this API is that it opens up the infrastructure's capabilities — especially the dynamic scaling and elasticity that a cloud-based environment can provide — to the application developer. The developer can decide on the key points to monitor for a particular type of application, because if you look at the current situation, the way applications get monitored is very generic: you look at the attributes of a process and come to a conclusion about the application. It is always going to be a better proposal to let the application come up with certain attributes that give a better indication of how loaded or how healthy it is. We also talked about application groups for creating clusters, and how these application APIs can finally provide OpenStack with a fine-grained understanding of what is going on in the infrastructure. Instead of just looking at the VM level or the container level to understand the load, an OpenStack-like infrastructure can now understand the workload at the application layer. So that's all I wanted to present in today's talk. If you have any questions, I can take them now. [Audience question, partly inaudible, about Kubernetes — some sessions at the summit are exclusively devoted to it:] What are the advantages of the approach to application management that you are proposing versus an open-source orchestrator like Kubernetes, or anything else like Swarm or Mesos? It goes back to the same difference. If you want to monitor an application from an external point of view, an external agent can work for a lot of applications. It is very generic: it can look at the attributes of a process and come up with some value for how loaded that application is, or what its resource utilization is.
But if you compare that to the very specific kind of metrics that the application designer has decided will influence the performance of the application, those are definitely going to be better metrics to look at when you want to judge the performance of the application and how healthy it is in an environment. You can look at it from a different angle, too. One way of looking at it is: yes, if you are using an agent, you are handing it off to a third party who might be specialized in monitoring and who knows a lot about monitoring applications. But if you want to squeeze the maximum out of your platform and use an elastic platform to your advantage, then giving the application developer the opportunity to communicate directly with the infrastructure gives them a lot of power. Any other questions? Okay, so unless anyone has any questions, I am mostly done. Actually, I was supposed to present this along with two of my co-presenters. Unfortunately, they could not make it to the summit. They are Sridham and Sharath, and I am Chandan. We are all from Juniper Networks, and currently I am involved in the Firewall as a Service project in OpenStack Neutron. Thank you.