Hi everybody, good day, and welcome to this StarlingX project overview. My name is Yang Hu, and I am from the StarlingX team at Intel. Well, let's get started.

Here is the agenda for today. First, we will take a quick glance at what StarlingX is about. Second, we will look a bit more in depth at the high-level architecture diagram. Third, we would like to call out some highlighted features of StarlingX. Then we will quickly go through the StarlingX project roadmap along our development journey over the past years. Last but not least, we will show you all the contributors in our community.

Okay, let's have a quick look at what StarlingX is. When you access the StarlingX website at starlingx.io, you will see an introduction like this. There are a lot of words, but from them we can capture a few keywords: cloud infrastructure, edge, low latency, and so on. In summary, we can describe StarlingX with the following five bullets. One, StarlingX is a top-level project approved by the OSF, the OpenStack Foundation, in June 2020. Two, StarlingX is a hardened Kubernetes implementation, certified by the CNCF, plus an OpenStack instance that is optional to deploy based on your needs. Three, StarlingX can manage containers and virtual machines for tenants and users. Four, StarlingX provides a set of DevOps tools to manage clustered hardware (mostly x86 servers), the operating system, and the cloud infrastructure. Five, StarlingX is a solution optimized for deterministic latency and performance for edge computing.

Keeping that general description in mind, let's move on and have a look at the high-level architecture on this page, walking through the software stack layer by layer, bottom up. On top of the x86 servers, we have a Linux distribution; more specifically, StarlingX integrates CentOS with either a real-time kernel or a standard kernel.
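As a small aside, whether a host is running the real-time or the standard kernel can usually be told from the kernel release string, since real-time (preempt-rt) builds typically carry an "rt" tag. Here is a minimal illustrative sketch; the classification rule is an assumption about common release-string conventions, not StarlingX tooling:

```python
# Classify a Linux kernel release string as real-time or standard.
# Assumption: preempt-rt builds tag the release string with an "rt"
# component, e.g. "4.18.0-147.3.1.rt24.96.el8_1.x86_64" or "...-rt-amd64".
import platform
import re

def kernel_flavor(release: str) -> str:
    """Return 'real-time' if the release string carries an rt tag."""
    return "real-time" if re.search(r"[.-]rt(\d+)?([.-]|$)", release) else "standard"

print(kernel_flavor("4.18.0-147.3.1.rt24.96.el8_1.x86_64"))  # real-time
print(kernel_flavor("5.10.0-28-amd64"))                      # standard
print(kernel_flavor(platform.release()))  # flavor of this machine's kernel
```

The word-boundary match (a dot or dash before "rt") avoids false positives on strings like "virt" that merely contain the letters "rt".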
Going up to the user space, we integrate a couple of dozen building blocks from upstream communities, shown in the orange blocks. These building blocks run on top of the CentOS host, in what we call the distribution layer. Essentially, they provide all kinds of services that lay the foundation for the cloud infrastructure. For example, IPMI works with the servers' BMC modules to provide very low-level hardware and software manageability. Ceph is deployed to serve as the storage backend for Kubernetes and OpenStack. Etcd is the cornerstone of the Kubernetes control plane. Docker is, no doubt, a fundamental part of any cloud-native infrastructure. All of the building blocks play a key role in StarlingX; above, we just named a few.

Moving on, you will notice a group of blocks in purple: host management, fault management, software management, configuration management, and service management. We call them the flock services, and they are core modules contributed directly by the StarlingX community, rather than utilized from other open-source projects. For the details of these flock services, you can check out the StarlingX online documentation. Also on this diagram, there are two other subsystems in purple, named infrastructure orchestration and distributed cloud. Infrastructure orchestration implements the ETSI NFV VIM interfaces by wrapping the OpenStack or Kubernetes client APIs. And distributed cloud is a solution to fulfill the needs of edge clusters that may be geographically distributed.

With the distribution layer and the flock services layer as the base, we are ready to bootstrap a Kubernetes cluster. So here we see the blocks in blue; the core part, of course, is Kubernetes, and there are also a few other components that are leveraged to facilitate container images and applications. Up to this stage, StarlingX has everything it needs to construct a cloud infrastructure.
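To make the "wrapping" idea behind infrastructure orchestration a bit more concrete, here is a rough sketch of a VIM-style adapter that exposes a backend-neutral call and translates it into a request on the wrapped cluster client. All class and method names here are hypothetical illustrations, not StarlingX's actual code; a fake client stands in for a real Kubernetes client.

```python
# Sketch of the adapter pattern used by a VIM-style orchestration layer.
# Hypothetical names throughout; a fake client replaces a real one.

class FakeKubernetesClient:
    """Stand-in for a real Kubernetes client; records requested objects."""
    def __init__(self):
        self.created = []

    def create_deployment(self, name, replicas):
        obj = {"kind": "Deployment", "name": name, "replicas": replicas}
        self.created.append(obj)
        return obj

class VimAdapter:
    """Exposes a small, backend-neutral interface (ETSI NFV VIM style)
    and translates each call into the wrapped client's API."""
    def __init__(self, client):
        self._client = client

    def instantiate_workload(self, name, instances=1):
        # For a Kubernetes backend, a "workload" maps to a Deployment.
        return self._client.create_deployment(name, replicas=instances)

client = FakeKubernetesClient()
vim = VimAdapter(client)
obj = vim.instantiate_workload("demo-app", instances=3)
print(obj["kind"], obj["replicas"])  # Deployment 3
```

The same `VimAdapter` surface could, under this design, wrap an OpenStack client instead, mapping `instantiate_workload` to a server-creation call, which is what lets the orchestration layer stay neutral to the backend.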
As said, if there is a need, OpenStack services can be deployed with StarlingX by means of containerization. In other words, from the Kubernetes point of view, the OpenStack services are just a collection of deployments, pods, services, and so on. In summary, this diagram presents how StarlingX is integrated, layer by layer, conceptually; I hope it is helpful in giving you a whole picture of the StarlingX architecture.

On this page, we list a few highlights, or features, of the StarlingX project. First of all, StarlingX is a validated infrastructure. In StarlingX, both containers and virtual machines are supported. From the storage perspective, a Ceph cluster is deployed as the backend for Kubernetes persistent volumes and for OpenStack services such as Glance, Cinder, and Swift. From the hardware perspective, we have Intel 1-gigabit and 10-gigabit NICs, TSN NICs, and several other NICs integrated and validated. Intel FPGA and QAT cards are also integrated as part of the package. Secondly, for edge computing use cases with low-latency requirements, StarlingX takes advantage of the preempt-rt Linux kernel, KVM, and end-to-end tuning for the edge, even including TSN integration. Moreover, StarlingX has high availability beyond Kubernetes and OpenStack, thanks to features like active/standby controllers, state synchronized across multiple layers (for example by DRBD), service management and process monitoring, and a few others such as in-service patching, software upgrade, and backup and restore. All of these make StarlingX a more reliable, scalable, and highly available infrastructure.

From the previous pages, we already know what StarlingX is and what its architecture looks like. Now, let's see how it is deployed in practice. With StarlingX, we have three deployment options to fit different requirements in terms of resource demand, capacity, scalability, and high availability. The first choice is simplex AIO (all-in-one) on a single server, on which there are all three logical roles, which we also call personalities: control, storage, and compute (worker).
This deployment is the simplest way to try StarlingX and learn how it works and what it can do, but it lacks the capacity for scaling and high availability. The second deployment option is called duplex AIO: two servers with an active/standby redundant design. We gain the merits of high availability, while the actual computing capacity is still very limited, because there are only two servers and on each server all three roles are up and running. The last one is the standard solution: multiple servers are deployed, each playing a different, dedicated role of control, storage, or compute (worker). In such a deployment, we have two controllers in an active/standby pair by design, a couple of storage nodes, and up to around 100 compute (worker) nodes. This multi-node deployment is preferable in a real production environment because it has advantages from all aspects. So these are the options we have, and you can choose one of them based on your actual needs.

On this page, I will talk about our journey with the StarlingX project. Using the usual metaphor, we put this project into three phases: crawl, walk, and run. First of all, similar to other projects at the OSF, we follow the rhythm of releasing two versions every year. We kicked off this project early in 2018 and had it join the OpenStack Foundation at the May Vancouver Summit. Later in 2018, in October, the community made release 1.0. Moving forward to 2019, we made a leap in the architecture: we reconstructed StarlingX from a design with OpenStack installed directly on CentOS to a design built around Kubernetes. With such a huge change, we released 2.0 in September and 3.0 in December with further polish. In 2020, we made 4.0 with Kata Containers integrated, keeping up with OpenStack Ussuri, plus a few other industrial IoT features such as TSN (time-sensitive networking) support, as I mentioned previously.
And going ahead, this year we will have another release, 5.0, with a containerized Ceph cluster managed by the Rook operator, as well as a new personality for the edge worker node, which can run a previously installed Linux distribution OS. Having gone through these three years, we are ready to step up to the run stage in the following years, where we are going to aim at ecosystem development with our partners across all edge computing segments.

Because we are part of the Open Infrastructure family, the StarlingX community always promotes the Four Opens. First of all, all projects in StarlingX are fully open source. Secondly, StarlingX follows open design, as we described on the previous architecture page; you can also get more information from the online documentation. Then, the development process is hosted on Gerrit and open for everybody to review and comment on. All in all, StarlingX is an open community, and we welcome all kinds of contributions by all means.

On this page are all the community partners, whether contributors or donors. Here, on behalf of the community, I would like to say thank you for everything you have done for these projects. Last but not least, again: welcome, and join us. Now, let's move to the Q&A session. Thank you.