Hello, good morning. My name is Sharath Kumar, and I'm currently working as a cloud engineer at Intel India. My colleague Poornima Vayan is co-presenting with me. Today's topic is an introduction to FaaS (Function as a Service) in the StarlingX platform. Let's get into the presentation.

We are covering a few points in the agenda. First is the StarlingX introduction: what StarlingX is, the services offered by StarlingX, and the architecture of StarlingX. Then Function as a Service: what it is, how it could be important in StarlingX, and what the need is for a FaaS service in the StarlingX platform. After that we will look into some of the FaaS projects available right now in the open-source world and their respective features. And finally Qinling, which is the FaaS service currently available in the OpenStack platform: we will see how easily it can be integrated, and why it is the candidate I am suggesting for StarlingX.

So StarlingX is an open-source community project: a ready-to-deploy, scalable, and highly reliable edge infrastructure platform. Companies like Wind River and Intel are actively participating in and contributing to this project.

What exactly does it offer? A few points I have listed out. The first is easy deployment. It is a microservices-based platform using Kubernetes underneath, along with open-source tools like OpenStack, and users often find it difficult to install such platforms on their own premises. In StarlingX, however, it is easy to deploy using Ansible scripts, with very low-touch manageability: during installation the interaction required from users or admins is minimal. And once it is up and running as an application, events and responses are quite fast.
After deployment, when users are actually running it at production level, if something goes wrong at the back end, the auto-recovery and fault-tolerance mechanisms kick in and the recovery rate is quite fast. These are a few of the added advantages of StarlingX that really meet the requirements of the Internet of Things (IoT) world.

Let's get into some of the highlighted features StarlingX offers to the community.

The first one is configuration management, and we can discuss how it is useful for end users. As the name suggests, once the service is deployed on StarlingX infrastructure, whenever a new node or machine is added as part of scaling out, it can be auto-discovered. It also supports bulk upload of nodes: for example, if you have many nodes you want to bring in as part of StarlingX, you can describe them in a simple XML file.

Then there is host management, which entirely takes care of the back-end host infrastructure: if something goes wrong, it automatically detects and understands the failure and tries to recover within a minimal timeline.

Under service management, it offers high availability and keeps monitoring the health of the back-end infrastructure.

Under software management, if you want to upgrade from an old version to a new one, the admin does not need to worry: it automatically updates from one version to the next and makes sure all the individual components underneath are upgraded and up to date. So there is no human interaction, or only very minimal manageability, during an update or upgrade.
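As a sketch of the bulk-upload idea, the XML file describing nodes might look roughly like the fragment below. This is illustrative only: the element names (`hostname`, `personality`, `mgmt_mac`) and the `system host-bulk-add` command are from my recollection of the StarlingX documentation and may differ by release.

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<!-- Illustrative hosts file for bulk node upload, consumed by
     something like `system host-bulk-add hosts.xml`.
     Element names are assumptions, not an exact schema. -->
<hosts>
  <host>
    <hostname>worker-0</hostname>
    <personality>worker</personality>
    <mgmt_mac>08:00:27:aa:bb:01</mgmt_mac>
  </host>
  <host>
    <hostname>worker-1</hostname>
    <personality>worker</personality>
    <mgmt_mac>08:00:27:aa:bb:02</mgmt_mac>
  </host>
</hosts>
```

The point is simply that many nodes can be declared in one file instead of being registered one by one.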
Under fault management, it collects active alarms and events and ensures those health issues or infrastructure issues are surfaced on the dashboard, so that admins and users can easily understand what is going on at the back end. These are the few highlighted capabilities that StarlingX offers to end users.

In terms of architecture, we have an underlying operating system, preferably Ubuntu, and StarlingX is deployed on top of it. In the first layer above the OS, most of the components are open-source components from the OpenStack world: Horizon, Ceph, Keystone, etcd, and so on, all participating as back-end infrastructure for StarlingX. Calico is the Kubernetes networking driver, and we know Kubernetes is the default orchestrator used in StarlingX. PostgreSQL and MySQL are the databases used at the back end. On top of that, StarlingX provides the five services we discussed in the earlier slide. Once StarlingX is up and running, we can see how flexible it is for hosting IoT use cases: low-footprint applications and workloads can easily be deployed on the StarlingX platform. That is the overall architecture of StarlingX.

Now let's move on to understanding FaaS. We know StarlingX is an edge computing platform; how does FaaS, Function as a Service, really add any advantage or extra benefit for end users of the StarlingX platform? Let's first understand what FaaS is. FaaS is the concept of running your functions on top of a serverless architecture, which aims at abstracting away server manageability, with zero administration from the developer's point of view. As a developer, if you want to
run any code, you definitely need a server. In this case, though, the developer does not need to provision or manage any servers: he simply submits his code as a function, it runs, and it returns the output right away. So it is an easy option for any developer who wants to quickly run a program of any kind: it could be Python, it could be Go, it could be .NET, since all the common runtimes are available in FaaS platforms.

This can easily be explained with the diagram below, where a developer or set of developers execute their code over HTTP through the front end, which submits it to the function service. The orchestrator running at the back end takes the user's input and executes it in one of the containers, because underneath, FaaS also runs on the Kubernetes platform, where a number of containers are up and running and any user input can execute at any time as an individual function. This diagram gives a clear high-level overview of Function as a Service.

Now for some problem statements, the reasons why FaaS would be a good candidate for StarlingX infrastructure. One of them: StarlingX provides container-based infrastructure for edge implementations as a scalable solution. In that case, FaaS really adds a low-latency platform for end-user use cases like IoT, because StarlingX is majorly targeted at IoT and other low-footprint applications on the edge, and FaaS provides minimal latency for hosting applications on top of StarlingX. Also, StarlingX is a platform with multiple messaging paths for back-end communication, so that active or passive alarms are detected in minimal time. In this scenario, FaaS would really be a good
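To make this concrete, a function on a FaaS platform is typically just a small handler with no server code around it. The sketch below is generic and illustrative; the `main(**kwargs)` entry-point shape is loosely modeled on Qinling's Python runtime convention, not an exact API.

```python
# A minimal FaaS-style function: no server setup, no framework,
# just the handler the platform invokes with the request payload.
# (The entry-point signature is an illustrative assumption.)
def main(name="world", **kwargs):
    return {"message": "Hello, %s!" % name}

# Locally we can simulate what the platform does on an HTTP trigger:
if __name__ == "__main__":
    print(main(name="StarlingX"))  # {'message': 'Hello, StarlingX!'}
```

The developer writes only this handler; provisioning, routing the HTTP request to a container, and scaling are all the platform's job.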
application or module for setting up the different communications between the various functions running on top of the StarlingX platform.

StarlingX also serves developers, whether they work as application developers or as back-end StarlingX developers, who can quickly run their code and build and ship any application cloud-natively with a much faster response time. So that is another advantage: as developers, they can use FaaS for development or for quick testing.

In StarlingX, Function as a Service is also efficient in providing a seamless experience across edge devices. Once StarlingX is deployed, many small IoT devices contribute to and participate in that infrastructure, and they can communicate with each other at the application level by using the FaaS service.

Last but not least, StarlingX supports anywhere from thousands of servers down to a minimal number of servers: it can support large-scale infrastructure like a data center, or a handful of servers for an industry or IoT use case. There, resources matter a lot, because in a constrained environment, making effective use of minimal resources like RAM and memory is a primary focus. In that case, FaaS can communicate with multiple edge locations so that data and events are processed within a minimal timeline. That is one of the areas where FaaS can really contribute to StarlingX use cases. I have tried to highlight a few of them according to my knowledge; the use cases can definitely be more than this, but these are the highlighted ones from my point of view.

Now, the open-source FaaS projects currently active. One of them is Qinling, which is already actively contributing in OpenStack and
it is available right now for users. Apache OpenWhisk, Fission, and Kubeless are also competitive FaaS services available in the open-source world.

So what exactly can we achieve by using these FaaS services? First and foremost, you don't need any servers: it is serverless, so to run any kind of function you don't depend on any server. Second, you run only code, not an entire application: FaaS executes module by module, not end to end. Third, it is scalable, which is the most important thing when you are running heavily loaded workloads: the back-end infrastructure must scale so that processing completes and the results come back. It is also stateless: it holds no state, it executes in the moment and returns the values, so it is a completely stateless platform. It is lightweight, because it runs only the respective code, one shot: you run it and get the results back. And you can use functions as event-based triggers: examples include AWS Lambda, and Azure Functions on the Microsoft platform, both event-based, so the user can run a function on a given event trigger. That is one of the best use cases: for specific functionality you can use FaaS without standing up any platform for it, and with no servers to manage it saves cost as well. Finally, it supports multiple runtimes, so the programmer or developer need not bother about which back end his program has to run on; FaaS takes care of all the back-end infrastructure.

Let's quickly get into why I chose Qinling for FaaS on the StarlingX infrastructure. A few points to be highlighted here. Qinling is an open-source project available under OpenStack, and both Qinling and StarlingX use Kubernetes underneath as their back-end framework, so comparing the two platforms, the integration will be much easier: since Qinling uses Kubernetes orchestration at its back end and StarlingX also uses Kubernetes, integration becomes easiest, all the more so because Qinling has already proven itself on that platform. Qinling also has a rich set of API calls: developers can use them with any runtime and execute their functions through the multiple API calls supported at the back end. And, as I said, Qinling is already integrated with OpenStack, and it integrates easily with multiple other OpenStack components: Aodh for alarms, Zaqar for messaging, or Swift, the well-known object storage. If it already integrates with OpenStack, then implementing Qinling at the back end should be easiest for StarlingX as well. Qinling supports Docker images and the Docker runtime, which again suits StarlingX, because StarlingX also uses Kubernetes with Docker for its pods and Swift as the back-end storage for Glance, storing images and objects; so it is easy to call out to Qinling with already-implemented solutions. Another good part is synchronous and asynchronous execution of functions: the user can run a function and get the result immediately, or go with asynchronous mode instead; it is just a matter of how you invoke your function and collect the results. And one more good part is horizontal scale-up and scale-down based on load: if a function is a bigger piece of code that requires a lot of resources, then Qinling will take care that
the underlying hardware resources are scaled, ensuring that the execution of the function yields its results by scaling the back-end resources without the end user knowing. This scale-up and scale-down really performs well when you are running distributed applications in production.

Last but not least, Qinling supports the OpenStack command-line interface as well, so users have two modes of execution: the GUI level, or command-line execution. Since it supports both the GUI and the command line, users can always choose their applicable mode of execution. These are the few points I tried to highlight here that Qinling brings to the table.

In this entire presentation, I have tried to give an overview of StarlingX and of Function as a Service, which can really contribute a lot of additional advantages to the edge platform.
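To close, the synchronous-versus-asynchronous execution point above can be sketched in plain Python. This is a local illustration using the standard-library `concurrent.futures` module, not the Qinling client API: the future object plays the role of an execution handle you poll later.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Stand-in for a deployed function doing some real work.
    time.sleep(0.1)
    return x * x

# Synchronous execution: the caller blocks until the result is ready.
sync_result = square(6)

# Asynchronous execution: the caller gets a handle back immediately
# and collects the result later, much like polling an execution ID.
with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(square, 7)
    # ...the caller is free to do other work here...
    async_result = future.result()

print(sync_result, async_result)  # 36 49
```

Either way the function body is identical; only the invocation style, blocking or non-blocking, changes.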