Hello everyone, welcome to OpenInfra Live. This is a brand new series brought to you by the Open Infrastructure Foundation. We are coming to you live every Thursday at 1400 UTC. The one-hour episodes feature production case studies, demos, and conversations with industry experts about the latest and greatest hot topics and updates from the global Open Infrastructure community. And if you would like to be featured in one of these episodes, please throw your ideas into ideas.openinfra.live. My name is Ildiko Vancsa, I'm with the Open Infrastructure Foundation, and I will be your host for today's show. Let me start with a quick reminder: these shows are not just live, they are also interactive, so we need you to participate. You can throw your thoughts and questions into the chat on any platform where you are following us, like YouTube, LinkedIn, or Facebook, and we will do our best to get to your questions at the end of the episode, where we have some time reserved for Q&A. We will also reflect on your thoughts, so give us any feedback you have during the show. Make sure you don't forget to participate and be part of our show today. Before we deep dive into our topic for this episode, let me shout out to last week's one, because we are going along those lines a tiny little bit. Last week we had industry experts Martin Casado, Bruce Davie, and Amar Padmanabhan joining Jonathan Bryce and Mark Collier to talk about how to provide connectivity all around the globe and what the challenges and opportunities are there, including open source technologies such as Magma, O-RAN (Open Radio Access Network, if you're not a big fan of acronyms), and OpenStack. If you happened to miss last week's episode, not to worry, we record all of these.
So you can go and check out all past episodes on our YouTube channel, and you can also get back to this one if you want to re-watch it later. So, connectivity. We will not talk about connectivity today, but we will look into what happens when you do have it available. Drumroll and all excitement: we will talk about Edge, but not just the Edge itself. Edge computing has been a really hot topic for at least the past three or four years, and at least in my experience it is still a bit of a confusing and challenging space, because we are still only scratching the surface. When it comes to challenges, the whole thing starts with the definition of Edge itself, because there is still no one definition that rules them all. Why might that be? If you look into Edge computing use cases, there are a few that are running in production already, but the number of potential use cases is growing rapidly, and the new use cases are also bringing in new industries. While it started with telecommunications, now we have automotive, healthcare, industrial, agriculture, and really anything you can imagine. Some of the ones at the end of that list, like agriculture, you wouldn't really have connected to IT and technology just a few years, maybe even just a few months, back. This shows the diversity of use cases in Edge computing, and with that diversity also comes diversity in requirements. When I say requirements, you probably think of things like latency or security, then remote management, and, bringing that to the next level, zero-touch management. But even these requirements that pop into most people's minds are just different enough for each use case that you can't really say the edges in those use cases are the same and have the same requirements, even if the words themselves match.
Because of this, when you talk about Edge, you cannot assume that the other person has the same Edge in mind as you do, especially when you're talking to a new person in your company or a new group of people. You always need to define your Edge and give context, and then also listen to other people's perspectives, just to make sure that everyone knows what you're talking about when you're talking about Edge. And that was just the definition, focusing strictly on the Edge. But Edge, if you look at the word itself, is the edge of something. The Edge is the device or the small or micro data center, but it is not alone, not just in itself; it is always connected to your regional data center, your central data center, and in some use cases to other edges. This results in a spider web of massively distributed systems. It brings back something that we knew already, distributed systems, but elevates it to a whole new level with the massive scale that many Edge computing use cases have. Because of that, we have new challenges in the areas of testing, deployment, management, and orchestration, and some challenges and requirements, like interoperability, which were sort of solved already, are back in the spotlight. So as you can see, there are challenges, conundrums, confusions, and misconceptions in this space. How do we untangle the Edge? This is something that the OpenInfra Edge Computing Group is looking into. In the next segment today, with my co-speaker, Gergely Csatari from Nokia, who is also my co-conspirator in the Edge Computing Group, we will talk about the group, what it is, and what it does. With that, we can go to the next slide to jump quickly into the introduction. This group is a top-level working group that is supported by the Open Infrastructure Foundation.
Being top-level also means that we are not exclusive to any technology when it comes to Edge; we are looking into these massively distributed systems and the infrastructure to build your Edge and your end-to-end Edge solution with, not just the Edge or Far Edge or very, very Far Edge itself. Our group is looking into solutions and software components in the open-source ecosystem for the infrastructure-as-a-service layer. To better understand the requirements, we are collecting use cases from all the industry segments I mentioned already and more, really trying to understand the requirements of the Edge computing space and what demands it puts on infrastructure. With the use cases and requirements that we have collected, we are building reference architecture models and sharing them with the broad industry, not just the open-source ecosystem, but everyone who is interested in this information. We are collaborating with adjacent groups and communities and also trying to bring these infrastructure models to life, implementing them with projects and services from open-source projects like OpenStack and Kubernetes, really trying to figure out how you can create the best infrastructure for your workloads. You can find out more about the group on our wiki page. We have also published two white papers that talk about our view on Edge, our work in the reference architecture space, and all the relevant testing and other activities. And with that, I will hand over to Gergely, and he will guide you through the architectures and some of our activities. Thank you, Ildiko. Let's go to the next slide to see the architectures we are discussing in the Edge Computing Group. If you can... Oh, thank you very much.
In this group we realized that there is no one-size-fits-all solution for Edge computing, but we still recognized some patterns in how you can architect your Edge computing infrastructure, and we created two, let's say, ideal architecture models: the centralized control plane and the distributed control plane. I will discuss them a bit later. As Ildiko described previously, every use case basically requires a different Edge cloud infrastructure, because the different workloads run in different environments, and the different use cases require different behavior from the cloud in case of, for example, connectivity loss or any other unwanted event. We are also looking into infrastructures which grow organically, so we are looking into scalable infrastructures, because Edge is basically a system built from cloud infrastructure, a distributed architecture. We think that connectivity should be our focus, and we distinguish the behavior of these Edge cloud infrastructures based on how they behave when the connectivity of an Edge cloud instance to the other parts of the infrastructure is lost. In the case of the centralized control plane, we have all the control functions of the infrastructure in one location: the Edge data centers run only those components which are needed to run the workloads, while all control functions run in one central data center. This solution is not capable of managing the lifecycle of workloads in case of connectivity loss between the Edge data centers and the central data center, but on the other hand there is no need to synchronize any metadata between the Edge and the central data centers, because the central control plane ensures that all the data required to manage the whole infrastructure is consistent and in one place.
On the other side, the distributed control plane runs all the control plane functions in every Edge data center, which means there is control plane overhead in every data center, but the benefit of this architecture is that in case of connectivity loss, the Edge data center is still able to run all the management functions and handle all the lifecycle management events of the workloads. In this sense, these Edge data centers are autonomous and carry the whole functionality of an Edge data center on their own. The problem here is that to provide a consistent view of the state of the whole system, there is a need for some kind of central management and data consolidation functionality, and some kind of federation which provides a single point of access to the whole infrastructure. We know that these are very idealistic architectures; this is why we call them reference architectures, and real solutions mix them somehow: some components implement the centralized control plane approach and some other components implement the distributed control plane. For example, if we look around the OpenStack ecosystem, TripleO implements the centralized control plane model, while StarlingX, which will be covered later in this session, implements the distributed control plane approach of the reference architectures. If you would like to read more about these reference architectures, and if you would like to see one example implementation of them on paper using different OpenStack components, then please look at the URL shared on the slide, where you can see a detailed comparison between the two architectures. Okay, let's move to the next slide, where I will discuss a bit of the other things we are doing in the Edge Computing Group.
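To make the trade-off between the two reference architectures concrete, here is a tiny, hypothetical Python sketch. It is not taken from the group's white papers or any real orchestration code; it simply models how each model behaves when an edge site loses its link to the central data center:

```python
# Toy model (illustrative only) of the two reference architectures:
# centralized control plane vs. distributed control plane.

class EdgeSite:
    def __init__(self, name, local_control_plane):
        self.name = name
        # True for the distributed model: control functions run locally.
        self.local_control_plane = local_control_plane
        self.connected = True

    def can_manage_workload_lifecycle(self):
        # Centralized model: lifecycle management needs the central site.
        # Distributed model: the autonomous site can manage itself.
        return self.local_control_plane or self.connected


def consistent_central_view(sites):
    # The centralized model keeps state in one place by construction;
    # the distributed model needs every site reachable (via federation
    # and synchronization) to present a single consistent view.
    return all(s.connected or not s.local_control_plane for s in sites)


centralized = EdgeSite("edge-1", local_control_plane=False)
distributed = EdgeSite("edge-2", local_control_plane=True)

for site in (centralized, distributed):
    site.connected = False  # simulate losing the link to the core

print(centralized.can_manage_workload_lifecycle())  # False: stranded
print(distributed.can_manage_workload_lifecycle())  # True: autonomous
print(consistent_central_view([centralized, distributed]))  # False
```

The sketch captures exactly the trade-off described above: autonomy during connectivity loss costs you a free consistent central view, and vice versa.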
So basically, what we think the role of the group is, is to connect the ecosystem. We run this group as an open collaboration forum for all groups and people who are interested in Edge; in the Edge Computing Group sessions, anybody can bring their topic, and we are happy to discuss it and share our experiences from our companies and industries. We have seen very interesting discussions where representatives from different industries were basically sharing the same problems. Currently, what we are trying to do is untangle the challenges of the Edge computing ecosystem, in what we plan to be a set of blog posts that we draft together, figuring out the topics together. Currently we have topics around the "no one size fits all" definitions, the exact issue I just described on the previous slide, and we are planning discussions and blog posts around how 5G and Edge are not the same thing. We are also trying to cover network transport for Edge, and we are looking for other topics. As I mentioned, this is an open discussion forum where we discover problems and solutions for Edge and try to distill the conclusions into these blog posts. We are very excited to work on the first blog post; we are currently collecting blog post ideas in an Etherpad and working on the separate posts in different Etherpads. What we hope from this is that we accelerate the collaboration between the different groups around Edge, and that we can consolidate the different ideas and projects around Edge by opening up a discussion forum and providing a public platform for sharing problems and solutions of Edge. So if you are interested in any of this, then please join us: we are meeting every Monday at 6 a.m.
PDT. You can find the wiki pages of the Edge Computing Group; on the main wiki page you can see all the resources and all the information about the group. We have a mailing list, and of course we maintain an IRC channel, which is #edge-computing-group on Freenode. I don't know if I missed anything. No, I think it was a great overview, thanks Gergely. I would like to emphasize participation and collaboration: as Gergely said, please join us in the weekly meetings or on our mailing list, and bring your use cases, architecture models, and your feedback, whether you agree with the models that we have already or you have one that doesn't fit into any of them. And when it comes to participation, let me remind you that this is an interactive session, so if you joined us live today, please throw your thoughts and questions into the chat so we can get to them and answer at the end of this episode. We are ready for our next segment today, which will be about StarlingX. This is an open-source cloud platform fine-tuned and optimized for Edge and IoT use cases, and StarlingX implements the distributed control plane model, so now you can see how this actually works and what it looks like in practice. We have invited contributors from the community to introduce the project to you, guide you through the latest features of the 5.0 release that the community is putting out as we speak, and give you a bit of a sneak peek into the project roadmap after 5.0. So we have Greg Waines and Matt Peters from Wind River and Mingyuan Qi from Intel. With that, Matt, if you are ready, the floor is yours. All right, thank you very much, Ildiko. Hello everyone, my name is Matt Peters and I'm a software architect with Wind River and a contributing member of StarlingX. I'd like to start us off today, if I can, with just an overview of StarlingX. I know not everyone may be aware of everything that StarlingX is providing to the OpenStack — sorry, to the
OpenInfra Foundation. StarlingX itself is an open-source integration project under the Foundation. It offers a fully integrated software stack built from a collection of open-source projects, mostly targeting the Kubernetes platform, with support for containerized workloads and support for virtual machines, which is enabled through the deployment of OpenStack. It's specifically tailored to the edge use case, and we target a lot of the requirements of that use case. We really have an emphasis on reliability, a focus on five 9s (99.999%) or better, and we do that across all of the platform infrastructure management, monitoring, and lifecycle operations; later in the presentation I'll talk a little bit about how we do that across the distributed cloud and the orchestration of those operations. We have a specific emphasis on scalability, and this actually goes in both directions: we look at how we can reduce the footprint of a StarlingX deployment to fit on lower-cost edge server hardware, but we also have the ability to scale up to a multi-node system and even a distributed system. So we really emphasize making sure that our distributed control plane has the smallest footprint possible while still meeting all the requirements of these edge use cases and applications. Because we deploy at the edge, we have an emphasis on security: all the communications between any of the decentralized components and the distributed systems are encrypted, and we have full certificate management across the system, enabling both the lifecycle management of those certificates and the awareness that they are present, so an end user can manage them. We are a platform, so we also open it up to integrating with other PKI solutions. And finally, but not least, is the ultra-low latency: we have a specific profile that's tailored to real-time applications, and
we really want to make sure that we meet the demands for that. Specifically, you'll see that in some of the 5G use cases, for something like the virtual RAN, where they have strict requirements for the timing infrastructure as well as real-time scheduling. StarlingX itself can deploy standalone, so that would be a standalone cluster, or as a set of distributed clusters, and I'll take you through more detail about that later. Next up, I'd like to go over some of the use cases we're seeing in the industry. As you can see, there's a whole landscape of different industries contributing both applications and services to the overall solution for the edge, and most of these are being enabled by the introduction of 5G. It's not strictly geared toward that, but a lot of it is enabled by the flexibility and availability you get for network connectivity. The NFV edge infrastructure is really where the 5G landscape fits in, and we're seeing the key infrastructure for that, at least in early deployments, on the vRAN side, but this is expanding rapidly and we're seeing adoption across many industries, supporting many tier-1 providers as well. On autonomous devices, we definitely see a demand and an expectation from users and industries for autonomous vehicles and autonomous devices such as drones, both for industrial and for military use cases, so there's definitely an expectation and a trend in the industry to deploy and manage those systems at the edge. There's also been a lot of talk around immersive experiences: having the connectivity and the low latency at the edge really brings in the compute power that you need to drive things like augmented reality and virtual reality, and those immersive experiences are really geared toward entertainment, commerce, and industrial use cases. And finally, but not least, there's
definitely been an explosion in industrial IoT and analytics, and we're seeing a huge demand both in the scale of the number of devices being connected and in the bandwidth requirements of those solutions. There's a vast number of devices collecting data, and a need and desire to monitor and manage all of that data, both as feedback for real-time operations and interactions, and to monitor the operations and health of those systems. I'd like to take you through a couple of use cases more specifically, with a little bit more of a deep dive. The robotics use case is definitely one that we're seeing in the manufacturing space, and the key driver here is really two things. One is bringing connectivity to these devices: most of the existing ones are isolated or standalone control systems for the robots, not necessarily connected into overall operations and management, so the desire is to bring that in and get visibility into these systems. In some cases they do have existing infrastructure with hardwired connections to these systems, and it's very costly for them: if they want to reconfigure the robots into a different assembly, or they have new components coming online that they need to monitor, it can be extremely expensive. With the flexibility of introducing 5G and some of the wireless and secure capabilities of the edge, we enable reconfiguration at a much lower cost, giving the flexibility and agility to reconfigure without needing to deploy new physical cabling across those systems. Further than that, the vast amount of data generated by these systems can be coordinated and managed at the edge, where there's much lower latency, to do real-time analysis and operations across those systems. So as you
move more and more of the intelligence to the edge, you're able to raise the intelligence of those components and monitor the health of those systems, really getting into predictive analytics to look for failures and analyze operations for performance, as well as collaborating between the other systems within the overall control path, looking at more of a converged control plane where all of them work together rather than operating and being monitored independently. The next use case I'd like to talk about is smart buildings, or the connected building. With the introduction of smart technologies into these systems, we're seeing an expansion across all the different components: energy management, communications, security, infrastructure management, all of these needing to be managed and coordinated across the entire building. With the size of these systems, they generate vast amounts of data, and to provide the best experience for the residents, or for the specific service they're providing, they're really looking at bringing intelligence into the monitoring system and the overall operations. Again, it has very similar requirements to some of the other use cases we're seeing for the intelligent edge: you have real-time analysis, you need low-latency communication between the different building control systems, and they need to bring in some artificial intelligence to make real-time decisions about what's happening with the different components. If you make fine-tuning adjustments to some of the energy consumption, you can save energy across the entire building; you can coordinate events during security situations; and they're really looking to bring additional awareness into the overall building and coordination across it. So those are the two key use cases where we're seeing an
explosion of edge adoption, really bringing forward the advancement of the 5G use case. What I'd like to do now, before we go into some of the feature details of what StarlingX has most recently developed and what we have on our roadmap, is to go over one of the key capabilities of StarlingX, which is the distributed cloud. This is what allows us to deploy the distributed control plane, so I'd like to take you through the architecture of how we manage that across the distributed deployment, and how we manage it from an operations perspective. The distributed cloud today is comprised of a central system component, typically deployed at a regional data center, and, subtending from that across the geo-distributed architecture, the sub-clouds. We refer to these edge systems as managed sub-clouds; they have independent control planes and worker functions to be able to manage workloads at any geographic location. Specifically, the communication path between the central control plane and the edge is an L3 routed path, which allows a larger gap between the central control plane and the distributed systems. Since each operates autonomously at the edge, we're able to deploy them with individual components communicating and managing the synchronization between the central system controller and the sub-clouds from an operations and management perspective, and we don't need to bring the data infrastructure across that path for the management. It's not an extension of the control plane, which means we can tolerate longer outages and disruptions of the network, or even longer latencies. This gives us flexibility in the distance at which we can deploy those geographically dispersed systems, and also in the scale at which we can operate. If you look at the centralized components, there are a number of things called out, but most of it is around the
orchestration and lifecycle management of the entire distributed cloud. We want to be able to manage these systems, since they are discrete systems across the different network components. We have the ability to do software management, which includes software updates — that's live software upgrades and live software updates across the entire distributed cloud, orchestrated across the entire system — managing both the local software within the system controllers and the software across all of the remote sub-clouds. We're able to apply patches for the software updates, restart any services that are required, and coordinate that activity across a multi-node function. Whether it's a standalone simplex system — and typically they're deployed in a cluster of them, to be able to continue to provide geographic coverage — or a redundant pair or multiple components, we're able to coordinate the activity of applying those updates across all the nodes to prevent outages. For the sub-clouds, we also have the ability to group them into sub-cloud groups, so that individual components and individual sub-clouds can be coordinated not just at the level of each node within a sub-cloud, but in groups across those systems. That means you can coordinate to prevent outages: if you have overlapping regions for some of the 5G network, for example, you can make sure that the upgrade path, and the disruption to services for software updates, is coordinated across those different systems so that you don't lose wireless connectivity in those regions. In order to manage the system — if we go to the next slide, I'd like to talk about deployment — StarlingX implements the ability to deploy the remote systems using Redfish, and specifically we use a bootstrap image that allows us to bring up a remote system from a powered-off state, so basically bare metal,
bringing it all the way up to running a fully managed Kubernetes platform or OpenStack platform to run your workloads. From the starting point of an initial media install, direct from a unified installation media or ISO, we bring up the centralized system, and then we push that same media out to all the individual sub-clouds through Redfish virtual media. This allows us to distribute the software without physically being on site, and to coordinate the update activities and bootstrap of those systems as they're being deployed. If you look at this particular architecture, the initial bootstrap of the system is done with Ansible: we first deploy the systems with the virtual media to install the initial software, then we bootstrap the system using a remote Ansible playbook to deploy the Kubernetes platform, and on top of that we deploy the workloads. If, for example, OpenStack is required, we deploy OpenStack as an application, which makes the system ready for virtualized workloads. So out of the box we get the fully distributed management and control plane for the distributed cloud, all done through a common set of media for the initial installation. Further to that, to support containerized workloads we have a central image registry. This is a container registry, and it allows us to do a full air-gapped installation: we can do a deployment from that initial offline media, distribute it out to all the sub-clouds, and deploy across the entire distributed infrastructure without going back to a public internet site or anything like that, so we can have a fully contained and secure environment. This is especially useful both for tier-1 operators and for industrial use cases where you have critical infrastructure that you want to keep as a closed system. If we go to the next slide, I'll talk about how we manage this from an operations
perspective. As you can imagine, having each independent site brings some additional challenges, so we address that by having a central single point of management across the distributed cloud. This supports both the day-one and day-two operations. The central system controllers are responsible for the aggregation of the state, alarms, and events coming in from the remote systems, which lets us look at the state of the individual sub-clouds but also as an aggregated view, so you can monitor the overall health of your system. This is the same interface where we do deployments for software updates and upgrades and their overall orchestration, so we have the ability to distribute new software and software updates across the entire distributed cloud from that central location. We also support drilling down into an individual site, if you need to look at the health of an individual sub-cloud, debug a particular issue, or just do operations localized to that system, and we also support connecting directly to a sub-cloud and managing it locally. So that's the overall distributed cloud architecture and management, and now I'd like to hand it off to Mingyuan to go through some of the features that we introduced in StarlingX release 5.0. Mingyuan, Greg, and I will each present a few of these features, and then we'll wrap up with what's coming in StarlingX 6.0. Mingyuan? Thank you, Matt. Hi, my name is Mingyuan Qi, I'm with Intel, and I'm also a TSC member of the StarlingX community. I will introduce three new features of the incoming 5.0 release. The first is Rook Ceph. Rook is a storage operator set that manages multiple storage backends, and Rook Ceph is one of its operators. In 5.0 we introduced Rook Ceph as the containerized Ceph solution: as a user, you can now install the StarlingX Rook Ceph application and deploy multiple Ceph clusters
with CRDs. The Ceph daemons run as containers, orchestrated by Kubernetes. If a user has an existing 4.0 system, after the 5.0 upgrade it's optional to migrate the Ceph cluster from the native Ceph to Rook Ceph; we have a complete guide to help people do so, but it's not mandatory, and you can keep the Ceph cluster running as it was. In a fresh 5.0 deployment, both native Ceph and Rook Ceph are supported; they can be deployed together, or you can use just one. Rook Ceph has the advantage that the OSDs are no longer tied to controller nodes or storage nodes: the hard disks can be placed in any node. With Rook Ceph, the storage management in StarlingX 5.0 is more flexible, so it's another choice for enabling a Ceph cluster. Next slide, please. The Edge Worker feature is an experimental feature in 5.0 that comes from our users in the industrial domain. The idea here is to add nodes which have the Ubuntu OS installed; these nodes may not meet the minimum hardware requirements of a standard worker node, but they are still eligible to become a StarlingX node. The new personality is called Edge Worker, and Edge Worker nodes are displayed along with the other nodes in the same host list. An Edge Worker node is not only a StarlingX node but also a Kubernetes node, which means StarlingX applications can be applied to Edge Worker nodes, including Rook Ceph. The Edge Worker feature is now in phase 1. Next slide, please. SDO, or FDO: this feature is called Secure Device Onboard, a technology that provides a faster and more secure way to onboard devices onto cloud or on-premise platforms. SDO has now been adopted by the FIDO Alliance as FIDO Device Onboard (FDO), so the name may change to FDO. Let me introduce the flow first: when an IoT device is manufactured with SDO, it gets an ownership voucher to identify itself. When a user powers it on and connects it to the internet, it connects to the Rendezvous server with its ownership voucher to get itself authenticated.
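As a rough illustration of this onboarding flow (the Rendezvous server next redirects the authenticated device to its owner's management platform for provisioning), here is a toy Python sketch. It is not the real FDO protocol — the actual protocol uses cryptographically signed vouchers and secure channels — and the voucher IDs and URLs below are invented:

```python
# Toy sketch of the Secure Device Onboard / FDO flow (illustrative
# only; not real FDO code). The Rendezvous directory maps a device's
# ownership voucher to the management platform its owner registered.

RENDEZVOUS_DIRECTORY = {
    "voucher-123": "https://edge-mgmt.example.com",  # hypothetical URL
}

def rendezvous_lookup(ownership_voucher):
    """Authenticate the voucher and return the owner's platform."""
    platform = RENDEZVOUS_DIRECTORY.get(ownership_voucher)
    if platform is None:
        raise PermissionError("unknown ownership voucher")
    return platform

def onboard(device_voucher):
    # 1. Device powers on and contacts the Rendezvous server
    #    presenting its ownership voucher.
    platform = rendezvous_lookup(device_voucher)
    # 2. Rendezvous redirects the device to the management platform,
    #    which then provisions it over a secure tunnel.
    return f"provisioned by {platform}"

print(onboard("voucher-123"))
```

The key point the sketch makes is that the device never needs to be pre-configured with its final destination; only the voucher and the Rendezvous server are known at manufacturing time.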
and then the Rendezvous server will automatically direct the IoT device to the target device-management platform for provisioning over a secure tunnel. So in 5.0 we integrated the FDO Rendezvous service as an application. This main component of FDO gives StarlingX the ability to authenticate IoT devices and onboard them to any platform, including StarlingX itself. OK, for the rest of the features I'll turn it over to Greg. Alright, thanks. Yeah, so the next feature I was going to talk about was the integration of Vault for secret management. This is the open source Vault project, where HashiCorp is the main contributor. The StarlingX integration involves packaging Vault into our system application format, as well as handling the complexities of setting up the storage backend required for Vault, managing encryption keys, and managing sealing and unsealing operations: all stuff you have to deal with when configuring and installing Vault on Kubernetes. Vault actually provides a ton of different capabilities. In StarlingX we've initially validated it primarily as a secret manager, but I imagine in future releases we'll validate it for other capabilities; for example, Vault has a really good certificate authority that would be useful to run on StarlingX clusters. But as a secret manager, like I said, we validated the capabilities there. Vault has a pretty extensive secret management capability, providing encrypted storage, pretty flexible policy-based access control, a variety of secret types and secret storage engines depending on the particular use case, and also a variety of authentication methods, although the default authentication method that we document in StarlingX is to use Kubernetes service accounts for authentication. The basic idea here is to have Vault pre-integrated on StarlingX, because for end-user applications that already use Vault, which is a fairly popular service, it enables much easier porting to
StarlingX. Alright, next slide. The next thing, another integration activity we did in StarlingX R5, was to integrate the open source Portieris project. That's a project with contributions primarily from IBM, but it is open source. Portieris basically provides an admission controller to enforce image security policies, such that any images being pulled for Kubernetes pods can be checked against security policies to do things like only allowing images pulled from trusted registries; and if the registry the image is being pulled from has a Notary server, you can also do image signature validation against the trust data in that Notary server. One thing to note here is that as part of this integration activity we actually upstreamed a couple of enhancements into the Portieris project itself. The first one was that we made it work for generic Docker registries; it initially only worked with IBM Docker registries. And then we also added an optional capability to have the certificates used by the admission controller managed by cert-manager. Next slide. So we added SNMPv3 support in R5; we previously supported SNMPv2c. Just as a reminder, the SNMP functionality in StarlingX is focused only on fault management data, so any alarms raised by StarlingX components as FM alarms or logs get sent out as SNMP traps, and SNMP also provides access to StarlingX FM's active and historical alarm tables: the active table being the actively set alarms, whereas in the historical table you can see the sequencing of sets and clears of alarms and logs. So v3 basically introduces improved authentication and privacy over v2c, which uses just simple community strings. And actually, one thing to note: with the introduction of SNMPv3 we used it as an opportunity to rework the integration we had of our SNMP
solution, which is based on the open source Net-SNMP. We reworked it to be a containerized solution; we used to have it on the host, and containerizing components of StarlingX is the general long-term direction we want to go in, so moving it into a container made sense. It also helped with configurability: you can configure it with simple Helm chart overrides in the exact format that SNMP uses, so we didn't have to do any wrapping of configuration commands with StarlingX REST APIs. And that's it; go to the next slide, and it goes to Matt. Alright, thanks, Greg. The next feature I'd like to go over is a new framework added to StarlingX that allows applications to obtain the synchronization state of PTP, the Precision Time Protocol, from the host. The onboard NICs that have timing coming into them synchronize the clock on the host, but in many cases the application needs to know what that state is and whether it's healthy, so it can make decisions around the health of the PTP clock synchronization and coordinate some of its activities across different nodes based on that as well. This framework introduces both a registration function, so that applications can come up and register for specific events, and a tracking function, to track the state of the different PTP clock sources. In this case the slave state and the slave clock source would be sent to the application through the notification framework. To integrate with those containerized applications, a new reference sidecar is introduced to implement the notification API, and some of the standardization around this is being done in the O-RAN community; this is a software community defining the specifications for integrating some of the infrastructure management for the RAN use cases. The next feature I'd like to introduce from
the 5.0 release is support for the NVIDIA GPU Operator. This was key for enabling a number of different use cases, specifically around supporting augmented reality and digital image processing. The new operator allows applications to be deployed that consume GPU resources, and it controls the full life cycle: the introduction and management of the NVIDIA drivers, the introduction of the device plugin into Kubernetes, the allocation of GPUs to pods, and the overall runtime needed to support the deployment of services that interact with these devices. NVIDIA is definitely one of the early adopters of this operator framework. It looks like we have a quick technical glitch, so I wonder if we still have Greg to hop quickly over to 6.0. Yeah, I can. I think we just had one slide left before we open it up for questions and answers, and that last slide was really just to talk about what's next. We just finished the PTG meetings in StarlingX on R6 and upcoming features, which was the discussion item there. You can see the timelines: this would deliver at the end of the year. As far as features go, these are all to be confirmed, but one of the key items, as I probably indicated previously, is this: StarlingX is a top-to-bottom solution on dedicated physical servers, so we provide the kernel, and the recent changes with CentOS mean that we need to move from CentOS to something else. We picked Debian, and that is going to be a lot of work for us. And then the remaining key features: we're going to do some up-versioning of Kubernetes components, continue scaling distributed cloud, and continue doing some simplification and auto-renewal of certificate management. One thing I want to point out at the bottom is that the Armada subsystem that we used from Airship has been deprecated, so we need to find a replacement for it. We'll certainly be looking at what Airship
chose, but other than that, those are some of the key features being considered for StarlingX R6. Great, thank you all. I do get a little bit of echo, so if we can fix that, that would be great, but until then, thanks to all of our speakers today. I think we just delivered an amazing package of content, both about the Edge Computing group and about StarlingX: what the platform can do and highlights of the latest key features for edge. There's always a large focus on security, and you could see that there are also features, like the Precision Time Protocol support, for more mission-critical workloads as well. From the chat on the different platforms I could see that people really liked the highlights of the use cases and the session so far, and I did see one question that we also get a lot on other channels, which is: is StarlingX running in production? To answer that, I would like to give the floor to Matt. Thanks. So the answer, simply, is yes, it is in production. We have key deployments with Verizon for their 5G infrastructure, as well as with T-Systems; both are using StarlingX, or a commercialized version of StarlingX, you should say, through Wind River, for their 5G vRAN use cases as well as for industrial use cases. Amazing, I'm really excited to hear this. It also shows how the telecommunications sector is the pioneer in edge computing, as they are the ones providing that connectivity, making it possible to really take the cloud out of the data centers, out to the edge, and to provide compute and storage resources for critical workloads and new types of workloads there. So it is really exciting to hear that StarlingX is running at those two large operators in America and Europe, and I'm excited to see where else the platform will be deployed. And with that, we are at the top of the hour, so I sadly have to say that this was all we could fit into this hour for you. But not to worry, because we
will be back with the next live episode next Thursday; remember, 1400 UTC. Those of you who joined today successfully solved the little challenge of what time that is in your time zone, so please encourage your friends and colleagues to solve the exercise as well and join us next week. So what will we be talking about? Next week's episode will actually be part of a series within the series called Large Scale OpenStack, and it is brought to you by the Large Scale SIG, or Special Interest Group, within the OpenStack community. This group is looking into challenges, solutions, and techniques for running an open source cloud platform at massive scale; we are talking about hundreds of thousands and millions of cores of production workloads powered by OpenStack. Next week's episode will be about upgrades, and even if you are not running OpenStack, I'm pretty sure the headache of upgrades has already been part of your life, whether with OpenStack or with another tool or project. So I would really encourage you to join next week, because we have speakers and a couple of operators who will share their techniques and also create a lively discussion. To mention examples, we will have Blizzard Entertainment, Bloomberg, OVHcloud, Vexxhost, Workday, and CERN, because we cannot have a session like this without CERN. So it is really exciting; remember, Thursday, 1400 UTC. And if you are not able to join, you can always check out the episode on our YouTube channel afterwards. Two other reminders: one is to actually try to join live, so you can participate in the session and the discussion itself; and the other is that if you have topics for OpenInfra Live in general, please submit your ideas on ideas.openinfra.live. And with that, that was all we could bring to you today. We had speakers from North America, Europe, and China, so it really is a global event, and I'm hoping to see you back here next week. Thank you.
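As a postscript to the NVIDIA GPU Operator feature Matt described earlier: once the operator has installed the driver and device plugin on a node, a workload consumes a GPU through an ordinary Kubernetes resource request. Here is a minimal sketch of such a pod manifest, built as a Python dict; the pod and image names are illustrative, and it assumes the device plugin's standard `nvidia.com/gpu` extended resource name.

```python
# Minimal sketch of a pod manifest that consumes GPUs once the NVIDIA
# GPU Operator has set up the driver and device plugin on the node.
# Pod name, image, and GPU count are illustrative.

def gpu_pod_manifest(name: str, image: str, gpus: int = 1) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                # The device plugin advertises GPUs as the extended
                # resource "nvidia.com/gpu"; requesting it here is what
                # makes the scheduler place the pod on a GPU-capable node.
                "resources": {"limits": {"nvidia.com/gpu": gpus}},
            }],
            "restartPolicy": "Never",
        },
    }
```

Serializing this dict to YAML or JSON and applying it to the cluster is all an application needs to do; the operator handles the driver and runtime plumbing underneath.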