Good evening, everyone. Today we want to talk about the Akraino Edge Stack and its requirements to the OpenStack community. Thanks for coming in. I'm going to talk about the Akraino Edge Stack and the progress it has made in the community, and then these gentlemen will talk about the blueprints that are going to be supported in the Akraino Edge Stack. When we talk about edge computing, every community is doing something with respect to it, so what is Akraino really trying to do? That's what this slide covers. When we look at edge computing, there are many layers to it: there is a platform the applications run on, then an API layer that needs to be supported, then the applications on top. When we see all of these collectively, that's what we call a use case, which drives a specific business case. Akraino is like taking a lot of car parts and assembling everything into a full working solution that can satisfy the use case. That is the key focus of the Akraino Edge Stack community. When we look across open source, many projects are working on some part of the edge computing stack we talked about, but some community has to put everything together so that it is useful as an end-to-end stack for a user, and that is exactly what this community is focused on. It brings multiple open source projects together, including OpenStack, Kubernetes, and other components, along with the hardware that needs to be supported at the edge, because many types of hardware are needed depending on the edge location: the edge could be a cell tower, a customer location like a home, or a central office in a telco.
So depending on where it is going to be deployed, the profile for that particular location will be different. That is why we need a way to integrate everything together and test the solution from an end-to-end integration perspective, so that we can support it in production. This community will focus on that integration to drive a specific use case, and it will develop CI/CD infrastructure so that the production readiness of that end-to-end integration can be tested. It will also focus on features: it brings all the open source projects together into this end-to-end stack, but if functionality is missing in a specific open source component, this community will work with that component's community to address the need, and if something needs to be addressed at the end-to-end level, this community will develop and support that itself. You can see the member companies; this is the list of companies that have joined the Akraino community so far, and there are more in the pipeline. It brings together the users who are going to deploy this in a cloud data center or at an edge location, and also the people who are going to support this stack from a vendor perspective. The combination of all of this is what this community is working on and delivering, so that a useful edge solution can be deployed in production. The community is focusing on two key use cases at the moment, and you can think of each as an umbrella of use cases. The first is carrier use cases.
When I say carrier, you can think of a telecom. A telecom has multiple use cases for edge computing, and this community will focus on all of them, whether the deployment is in a central location, a customer location, or a cell tower; it will focus on all the requirements around those. It will also focus on enterprise and industrial IoT use cases, because the requirements are a little different for telecom versus enterprise IoT; there is some commonality, but there are also differences that need to be addressed. So this community will focus on the sub-use cases within these two major categories. The community uses a term called an Akraino blueprint, and it is very important to understand what a blueprint is. You can think of a blueprint as the assembled car: the car parts are the different open source components, and the blueprint brings everything together so that it is a useful end-to-end solution that can be deployed in a production environment. It is real-use-case driven, fully integrated CI/CD testing will be provided by the community, and the community will also look at the lifecycle support of all the components. Each open source community focuses on the lifecycle of its own component; for example, Kubernetes, OpenStack, or an operating system each handle their own lifecycle, but this community looks at the holistic picture for the use case: how do I provide lifecycle support for all the components together, and provide the CI/CD infrastructure, so that the whole stack can be deployed in production?
This community is very complementary to a lot of open source work that is already out there, which is why, when people ask, "every community is doing edge computing work, how is this different?", the answer is that it is complementary: it assembles all those pieces together, while each community supports edge computing requirements within its own scope, which is also very important. For example, OpenStack supports it, Airship supports it, and StarlingX supports it, each within its own project. So this community is not trying to duplicate that work; it is complementary to the other communities. The community has also put together a structure in which these projects are going to be assembled and supported. As I stated, the blueprint, the assembled edge stack, is the key for this community, and in order to deliver it, the community has articulated how it all comes together. There is going to be a set of feature projects the community will focus on. I'll give you an example: a blueprint needs to be end-to-end tested, and a blueprint needs to be end-to-end lifecycle managed. Those are feature projects not addressed in other communities, and this community will address them. The integration itself, which we call the blueprint and which brings everything together to support a particular use case, will also have a validation project: it takes the blueprint and runs it on the exact hardware the user would run in production, so that the declarative configuration of that particular blueprint can be validated. And as I stated, this is very complementary to the other open source projects, including OpenStack.
We will have people from this community engaged in other communities, so that there is mutual collaboration, and likewise we expect the other communities to collaborate with Akraino, so that the work comes together cohesively to support the edge use cases. The way we address different requirements, since use cases can have sub-use cases, is to collectively group blueprints into what we call a blueprint family. You can think of a blueprint family as a collection of blueprints that come together: they share the same principles and tools, and they can use the same CI infrastructure. That's what we call a family, and within a family you can have multiple blueprints. For example, if I take a telco use case and want to deploy a Network Cloud blueprint in a telco, that blueprint can support 5G core, RAN workloads, and other workloads as well, like voice-related applications. So a blueprint can support multiple types of workloads and multiple use cases, and we also have a way to support innovation by allowing a use case to be served by multiple blueprints. This community is really bringing all that innovation together. One key aspect to note, compared to other communities, is that the architecture of a blueprint is flexible in a way that satisfies the use case: we define the solution for the use case, instead of defining the solution first and then bending the use case to fit it, which is what most communities do; they bring the solution together and then say the use case is going to look like that.
But this community is really focusing on the use case itself, so that the business needs can be achieved. AT&T contributed one of the seed codes for this community, and that seed code is available on the Akraino website; if you go to the Akraino website, you can see it. We call it the Network Cloud blueprint. Again, this is one of the blueprints Akraino will support, but there are other blueprints currently in development and being reviewed by the TSC that will also be supported. This blueprint is a telecom-focused use case, supporting the 5G core and voice-related applications. Within this blueprint you can have a single-server install or a multi-server install, and the blueprint is based on Airship; I'll talk about that in the next slide. You can see this slide; the picture is also available on the Akraino website. We pulled this entire stack together using Airship. As you all know, and as we talked about in the morning keynote, Airship is an OpenStack Foundation project, and it is used to deploy the components of a cloud in a declarative way. In this stack you can see that Akraino brings additional components to drive the integration, including ONAP for VNF orchestration and other components for operational management. That is the benefit of an Akraino blueprint: it brings everything together to drive a specific deployment. There are additional blueprints we are working on as well. From the AT&T perspective, as I stated, there are many blueprint proposals; I would encourage you to go look at the Akraino website, where you can see all the different blueprints being proposed right now. We are working on a SEBA blueprint, which is SDN-enabled broadband access.
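To give a concrete feel for what "declarative" means here: Airship consumes YAML documents that describe the desired site rather than imperative install steps. The fragment below is a simplified, hypothetical document in the spirit of Airship's Deckhand document format; the schema name, site name, and component list are illustrative placeholders, not the actual Network Cloud manifests.

```yaml
# Hypothetical sketch of a declarative site document in the style Airship
# consumes. Real Network Cloud manifests are far larger and more detailed;
# every value here is illustrative only.
schema: example/SiteDefinition/v1
metadata:
  schema: metadata/Document/v1
  name: edge-site-01              # placeholder site name
  layeringDefinition:
    abstract: false
    layer: site                   # site-level override of global defaults
data:
  site_type: network-cloud
  server_count: 3                 # e.g. a small multi-server edge install
  components:
    - kubernetes                  # container orchestration layer
    - openstack-helm              # OpenStack deployed as Helm charts
    - ceph                        # storage backend
```

The point of this style is that the same document set can be validated in CI and then replayed on production hardware, which is what makes the blueprint validation project described above possible.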
This blueprint is a collaboration with the Open Networking Foundation, and it supports GPON access-related workloads. We are also working on serverless and real-time blueprints. We would also like to have third-party blueprints, where something like AWS or Azure gets deployed into a location and has interoperability with the blueprints Akraino is defining; we'd like to bring that innovation into the Akraino community as well. We're also looking at customer-premises edge deployment from a Network Cloud use case perspective. These are a sample of the other use cases and blueprints this community is working on, but the gentlemen here want to talk more about the blueprints they are working on. With that, Tapio? My name is Tapio, and I'm also a member of the Akraino TSC. The question was why Nokia participates in Akraino, but I formulated it a little differently: why you should participate in Akraino too. The first benefit is that inside an open project like Akraino you can collaborate with other member companies, which is obvious, but it also allows you to collaborate with upstream projects. From previous projects we have some experience with this: it's quite helpful to have a group of people with a similar or identical use case work together to figure out the requirements, and then, once they know internally what they want, go and talk to the upstream community and say: this is what we want, these are the requirements, and this is how we would like to do it. We also have our own cloud offerings, real-time and IT-type cloud offerings, and collaborating with other companies of course also helps improve those products.
An example is the blueprint we are going to present: we take the Airship-based Network Cloud blueprint and add to it, and that is something that can also benefit our own products. It also allows us to work on related blueprints with other community members. Another interest is that we are launching, inside the Open Compute Project, our open edge hardware, and we want to test that and run it as part of an Akraino blueprint, and also to create joint blueprints with other community members. One blueprint we are actually planning at the moment, which will be discussed in the TSC, is what we call the open edge cloud. It's a real-time cloud based on the Network Cloud blueprint, so it's Kubernetes-based, running OpenStack on top of Kubernetes, and what we want to add is real-time functionality, plus the ability to run it on the OCP open edge hardware. Another one, which we are collaborating on with Arm, is what we call MEC, or multi-access edge computing. The use case is a smart city: a platform for running applications on the ultra-far edge. This is the complete opposite extreme: a single-board Arm server, possibly with some accelerators for machine learning or similar. The idea is that it's a container-based application platform, and it will be open for third-party developers to develop their own applications. Okay. My name is Martin Beckstrom. I work with cloud and virtualization technologies within Ericsson, and I would like to start by explaining a little about how we see edge computing in general. For us, the edge is basically two things.
One is that we have a distribution of workloads that can happen in accordance with policies, for instance on low latency; there could be other policies too. That doesn't necessarily mean the data center is particularly small; that is a separate problem associated with edge computing. So one problem is the distribution: getting the correct allocation of a workload relative to where the data is consumed. That is one thing we are working on. The other is how we can build and maintain data centers that are very small, and by small here we mean ten compute blades or fewer. For this, we believe Airship is a very good solution. The reason is that the possibility of doing without OpenStack, of doing without a hypervisor, is small for many applications that need to be distributed, including our own applications such as virtualized RAN and parts of the packet core, and OpenStack does not really scale down to small deployments. With the help of Airship, we can slice up the compute blades into small portions and deploy OpenStack with very little overhead, and therefore make even sites with ten compute blades or fewer economically viable. We are therefore very interested in the Akraino development, and I sit on the Akraino board for Ericsson. Out of this, with orchestration as one key area of interest, we have said that we are very interested in the portions of Akraino that either help us take care of the small-scale problem or address orchestration and the distribution of software, so we get the right allocation of software and workloads out in the network. To this end, we have so far actually changed some product plans to take advantage of Airship, done lab and prototyping work, and we intend to be active in the upstreaming of both Airship and other parts of Akraino.
Thank you very much. My name is Sukhdev Kapur. I'm a distinguished engineer at Juniper Networks, and I am a TSC member for Akraino as well as for Tungsten Fabric. I'm here to talk about the integration of Tungsten Fabric, which is also a Linux Foundation project, with Akraino, as a joint integration. We are also working on the same foundation, the Network Cloud blueprint, and what we are looking at is how we can integrate Tungsten Fabric into it. There is an issue with the animation; this box should span a few more boxes. But regardless, just as the gentleman from Ericsson described, for Tungsten Fabric we are similarly replacing certain components in the blueprint and bringing Tungsten Fabric in. This is what Kandan mentioned earlier as well; that's what Akraino blueprints are all about: you can bring in different orchestration systems and different technologies within the same blueprint family. That's what we are trying to achieve, and in our model we have a central controller node and then edge sites. Within the Network Cloud blueprint, we are using Airship Armada to instantiate the Tungsten Fabric Helm charts. Tungsten Fabric itself can already be deployed using Helm charts, and we are integrating these pieces so that Airship, instead of installing the other networking components that are part of the Network Cloud blueprint, installs Tungsten Fabric. In fact, we already have basic demos working. What that does is: Tungsten Fabric replaces Calico as the CNI for Kubernetes, and it comes in as the networking layer for OpenStack.
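To give a flavor of how Armada drives Helm in this kind of setup, the fragment below is a hypothetical, trimmed-down Armada chart document. The chart name, repository location, and values are placeholders standing in for the real Tungsten Fabric manifests, not copies of them.

```yaml
# Hypothetical, trimmed-down Armada chart document showing how a Helm
# chart (here, a stand-in for Tungsten Fabric) can be declared so that
# Armada installs it as part of the site deployment. The source location
# and all names are placeholders.
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: tungsten-fabric
data:
  chart_name: tungsten-fabric
  release: tungsten-fabric
  namespace: tungsten-fabric
  wait:
    timeout: 900                # wait up to 15 minutes for pods to come up
  source:
    type: git
    location: https://example.org/tungsten-fabric-helm   # placeholder repo
    subpath: charts/networking
    reference: master
  values: {}                    # site-specific overrides would go here
```

Swapping the networking layer then amounts to pointing a document like this at a different chart, which is exactly the kind of component substitution within a blueprint family being described here.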
Therefore, it gives a single SDN solution within Akraino that can work with both orchestrators: Kubernetes, as its CNI, and OpenStack, through Neutron, for its networking. Thereby we can orchestrate pods and virtual machines. One thing I don't show here is that Tungsten Fabric can also orchestrate bare metal servers, so in the edge sites, the edge POPs, we can orchestrate all kinds of workloads and seamlessly bring them together. One of the biggest benefits of this integration is the middle box, which covers the networking features; one of the biggest strengths of Tungsten Fabric is that it provides very feature-rich, advanced networking, and what this integration does for users, customers, and providers is let them take advantage of all of these features as part of the Akraino integration. That's the biggest benefit. Overall, the edge looks like this: you have an orchestration and monitoring system with full monitoring capability for all remote sites, and it's fully secure end to end; we can have full IPsec security between the sites. A site can be a very simple, very thin edge site, or a very elaborate remote site. The smallest footprint we can support is a single core with 8 GB of memory; that's it. At the largest, you can have full racks of hardware, and we can orchestrate all of that through this integration. That's what the Tungsten Fabric and Akraino integration really brings. With that, I'm going to hand it over to Jim. Hello, my name is Jim Unerson. I work at Wind River, and I fully realize that I'm probably standing between you and some refreshing beers, so I respect that; I'll be brief, and I appreciate you hanging in there. I'm here to talk about StarlingX.
The first thing I want to start with is the relationship between StarlingX and Akraino, because that question seems to come up, especially today for some reason. First, StarlingX is a hardened, fully integrated cloud infrastructure, and it is indeed deployable standalone; some companies will choose to productize and deploy it standalone. I'll tell you without a shadow of a doubt that our day-one intention, and I was there on day one when we created StarlingX, was that it would be a fundamental component we would contribute into Akraino, to address certain blueprints that we felt are essential to the Akraino mission statement and that StarlingX is optimized to address. So I see no conflict at all between StarlingX and Akraino; I see StarlingX as an essential component of Akraino. Why did we want to do that? We want to contribute to a strong ecosystem so that we can collaborate. You've heard my colleagues here talk about building this strong ecosystem addressing the critical use cases for telecom and industry, and that's exactly where we want to be. We want to drive towards standardized APIs, standardized models, and standardized requirements, and in StarlingX we cannot do that on our own; we need a broader community with broader inputs. The outputs will be critical blueprints that we will define, which integrate StarlingX into the Akraino lineup and address these critical use cases. The blueprint we want to lead off with, which we're working on right now, is what we call the far-edge distributed cloud blueprint. We see this as essential for addressing use cases of geographically dispersed, high-density locations, be it factories, stadiums, or vRAN applications. These are characterized by a scalable but potentially very small physical footprint, and the assumption has to be low physical security constraints.
So we proposed this blueprint using StarlingX, where we would have a distributed far-edge cloud. It's essential to have a central presence, because for operational efficiency you want to be able to patch, manage, and monitor all sites centrally, but you need local control as well, because you need to provide a level of autonomy in case there are communication breakdowns or loss of connectivity. You don't want the services to be disrupted; you want local autonomy, and that's exactly what we see as the essence of this distributed far-edge cloud blueprint. The edge sites themselves will be scalable from as small as one single server up to hundreds or thousands of servers, all connected to a central authority that does the monitoring, management, and deployments according to, again, the Akraino requirements. In the local sites we provide a lightweight control plane with essential local functions, which interoperates with the central cloud control functions but also provides the autonomy you need to survive these kinds of outages. The last thing I'm going to show you is our common software stack. I'm not going to go through it, because it's been shown a couple of times already today, but it's on the Akraino site if you want to look through the blueprint proposal; you'll find it there. We're aligning with the Akraino principles that Kandan has already laid down: zero-touch provisioning, containers and VMs both supported, and, for us, a small infrastructure footprint for these distributed cloud edge sites. Thank you very much. Thank you, guys. We still have a few slides. So where does the Akraino and OpenStack community collaboration really need to happen?
As I mentioned, the Akraino community is very complementary to the edge computing effort going on in OpenStack, as well as in the OpenStack-hosted projects like Airship and StarlingX. Thank you to the OpenStack edge computing working group, which has been working on enhancing OpenStack to support some of the edge use cases as well. As I stated, OpenStack is one of the components within the blueprints currently used in Akraino; that work should continue, and the Akraino community will bring additional requirements to the edge working group. It is complementary because there is a lot of use case discussion currently going on in the Akraino community that will be fed back into the edge working group within the OpenStack community. Continuing the collaboration between the OpenStack community and the Linux Foundation Akraino community is very critical to success in edge computing overall. That's why we call this community's work complementary. We have two or three minutes, so if anybody has any questions for these gentlemen, or for me, we can take them. Anybody? Alright then; I know it's late evening, and we all understand that. Thank you. Thanks for coming in. Appreciate it.