Good morning, ladies and gentlemen. Welcome to this presentation about the magic one called ONAP. I'm Catherine Lefevre from AT&T, and this is Kandan Kathirvel, also from AT&T.

So the Open Network Automation Platform is an open source project that has been hosted by the Linux Foundation since April 5th; it was opened to early adopters during Mobile World Congress. ONAP is in fact the harmonization of two frameworks for real-time, policy-driven software automation of VNFs, which lets developers, network providers, and cloud providers create new services more easily and faster. It is a combination of the ECOMP platform and OPEN-O. The ECOMP platform was developed by AT&T over the last three years and is in production. It provides a comprehensive suite of applications focusing on design, orchestration, policy, control, and analytics capabilities. It is a model-based design that enables self-service capabilities for instantiation and closed-loop automation. OPEN-O is another open source project, released last year under the Linux Foundation umbrella, bringing a TOSCA model, an advanced DevOps toolchain, and easier onboarding of VNFs through an SDK. As you can see, the members are mainly cloud providers, carriers, and integrators, but more are to come.

If you look at the service provider stack, ONAP plays at the orchestration, management, policy, services, and control software layers. It is currently composed of eight plus three projects. The first eight projects represent the different applications that we are currently developing and enhancing, while the last projects cover the VNF SDK, the documentation, the modeling, and the virtual function controller.

Let's have a look at these famous eight components. You access the ONAP platform through the ONAP portal, and there you have the design functions. The Service Design and Creation (SDC) application provides visual modeling and design tools organized around four levels of assets. First are the resources, which come in three categories: infrastructure resources, such as the cloud or the storage; network resources, which is where you will have the VNFs; and application resources, with functions like load balancing. The second level of asset is the service, which is in fact a combination of different resources. The third level of asset is the product: different services bound together for commercial distribution, with ordering and billing. The last asset level is the offer, a bundle of products with a marketing dimension.

Then we have the policy creation framework. This is where we maintain, distribute, and operate the set of rules and policies that underlie the ONAP components across the control, orchestration, and management functions. Another design function is offered through the analytics application design framework, which is connected directly to the DCAE platform, standing for Data Collection, Analytics and Events. This is where we gather information from the ONAP components, but also from the different VNFs that we have onboarded on the platform. That information can include performance, usage, and configuration data, and you can then create analytics through the ONAP portal.
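To make the asset hierarchy concrete, here is a minimal sketch in Python; the class and field names simply mirror the four levels described above and are illustrative assumptions, not taken from the ONAP code base:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ResourceType(Enum):
    INFRASTRUCTURE = "infrastructure"  # e.g. cloud, storage
    NETWORK = "network"                # e.g. a VNF
    APPLICATION = "application"        # e.g. a load-balancing function

@dataclass
class Resource:
    name: str
    kind: ResourceType

@dataclass
class Service:
    """A combination of resources."""
    name: str
    resources: List[Resource] = field(default_factory=list)

@dataclass
class Product:
    """Services bound together for commercial distribution (ordering, billing)."""
    name: str
    services: List[Service] = field(default_factory=list)

@dataclass
class Offer:
    """A bundle of products with a marketing dimension."""
    name: str
    products: List[Product] = field(default_factory=list)
```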
DCAE is also important because it is the first element of the closed-loop system. We have virtual events streaming from the different components, and we have different collectors that try to identify, based on the various policies and rules, whether any threshold has been crossed. If it has, some action is taken automatically, such as spinning up a VM if we lack capacity on the load balancer, or changing the configuration of a VNF itself.

Let's move up to the next component, the Active and Available Inventory (A&AI). It gives you a real-time view of the resources and services that you have created through Service Design and Creation, but also of any services and resources running on the ONAP platform. It shows the relationships between these different components in the network infrastructure, giving you a network topology, and it actively performs audits to be sure that the view stays accurate. So you have different streaming sources giving you a logical and physical view of your resources and your services.

If you look at the Service Orchestrator, this is in some ways the heart of the platform. This is where we arrange, sequence, and implement the different tasks, based on the rules and the policies, to create, consolidate, or remove any logical or physical resource.

For the controllers, we have two types, both based on the OpenDaylight framework. The network controllers will instantiate a new VNF, as an example, and report the new status of that VNF through their channel back to A&AI, again to keep the view accurate. And finally, the application controllers are more related to the lifecycle of the VNF: for example, we start and stop the VNF, but we also have monitoring and repair functions, where an action is taken on the VNF based on health-check results.

When we look at the business value that the ONAP platform, this combination of the two frameworks, OPEN-O and ECOMP, can bring, I think it will bring a lot of value for the business, especially with the emerging technologies coming with 5G, where we can offer residential, enterprise, and even cloud solutions on top of it, with the automation of the network and the acceleration of service creation as well. The fact that we are combining all these technologies, and also defining VNF guidelines based on NETCONF, Heat templates, YANG models, and TOSCA models, gives you a kind of uniformization of the requirements, helping VNF vendors and integrators to test and certify their solutions through this ONAP platform. You can also see it as a proof-of-concept environment if you want to try new things related to software-defined networking.

So now I will give the floor to my colleague, who will speak about AIC.

We want to give you the context of where we use ONAP. Catherine talked about what ONAP is used for, and she is also going to show a demo of how ONAP can be integrated and used along with OpenStack. To give some context about the AT&T Integrated Cloud: it has been deployed in 100-plus locations, and it is growing both in size and in the number of locations. We host carrier-grade workloads. What do we mean by carrier-grade workloads? The network function virtualization that hosts all the telco applications for AT&T runs on the AT&T Integrated Cloud, and it is a complex deployment.
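As a rough illustration of that closed loop (the event fields, threshold value, and action names here are assumptions for the sketch, not the actual DCAE or controller interfaces):

```python
# Illustrative closed-loop check: a DCAE-style collector compares an
# incoming measurement against a policy threshold and, when the threshold
# is crossed, requests a corrective action from a controller.
THRESHOLD_PPS = 700  # hypothetical packets-per-second ceiling from policy

def apply_policy_action(vnf: str, action: str) -> None:
    # Placeholder for a call to the application controller or orchestrator.
    print(f"requesting {action} on {vnf}")

def on_measurement(event: dict) -> None:
    pps = event["packets_per_second"]
    if pps > THRESHOLD_PPS:
        # e.g. throttle the source, or spin up an extra VM for capacity
        apply_policy_action(vnf=event["vnf_id"], action="reduce_traffic")

on_measurement({"vnf_id": "vfw-1", "packets_per_second": 900})
```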
When we say complex deployment: there are a lot of VNFs being hosted, and ONAP provides the way to orchestrate across multiple locations. This is very key, because a small instance of OpenStack within a data center or within a lab is a very simple configuration. But at large scale, at 100-plus locations, and with the advent of all the discussion around the edge, we're planning to extend this to a very large number of locations. In that context, OpenStack does require something above its layer. That's where ONAP is integrated with the AT&T cloud. It provides the complex configuration needed by the VNFs at multiple layers, meaning first deploying a VNF, then configuring it with everything the VNF needs. So those concepts, the power of OpenStack together with ONAP, are integrated in the AT&T Integrated Cloud. This ranges from small to medium to edge sizes: it could be a couple of compute nodes, or it could be 500-plus compute nodes in a data center. And it provides this complex installation while managing all the VNFs.

It's really multi-tenant VNF hosting, even though we talk about just AT&T, and this applies to all telco providers. Even within a company, even within a single application, multi-tenancy is needed. Why? Because there are operational boundaries, meaning there may be a separate operations team managing the workloads and a separate application team managing the application. So even within a company, you need multi-tenancy to make sure that only the people who need access to a VNF or VM actually have access to it. So this gives the context of how a large-scale deployment works along with ONAP; we're going to see more detail in the demo and in the additional slides.

This also needs to be highly available. Why? Because critical applications, like whatever you are using on a cell phone from AT&T or any other telco provider, have to be highly available: they need to support critical services like 911 calls. So high availability is needed not at a single layer but starting from the hardware, through the software, up to the uber layer like ONAP. The overall stack of components has to be highly available.

This slide talks about how OpenStack integrates with ONAP, and where that relationship comes in. OpenStack is deployed in each location, and we know that OpenStack does not provide federation across multiple locations today. In this case, when we deploy in 100-plus locations, federation is really needed. Why do we need federation? For example, if a site fails, we need a way to move the workload, or to enable the workload in another location, because we do need high availability. When a location goes down, or anything happens to the VNF itself, even if a single rack or a single server goes down, the platform needs to be aware so it can transfer the application and make it work from another location, another server, or another rack. This needs policy-based orchestration, or policy-based enablement, and this is where ONAP comes in as the top layer.
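A minimal sketch of the kind of site-selection decision such an uber layer has to make across locations; the site data and field names are invented for the example:

```python
# Hypothetical site-selection step an orchestrator above OpenStack might
# perform: pick a healthy region with spare capacity, so a workload can be
# re-enabled elsewhere when a site, rack, or server fails.
sites = [
    {"name": "dc-east", "healthy": True,  "free_vcpus": 120},
    {"name": "dc-west", "healthy": False, "free_vcpus": 400},
    {"name": "dc-edge", "healthy": True,  "free_vcpus": 16},
]

def pick_site(required_vcpus: int):
    candidates = [s for s in sites
                  if s["healthy"] and s["free_vcpus"] >= required_vcpus]
    # Prefer the healthy site with the most headroom.
    return max(candidates, key=lambda s: s["free_vcpus"], default=None)

print(pick_site(32))  # -> dc-east (dc-west is down, dc-edge is too small)
```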
It provides the five key functionalities that Catherine talked about. Take design: you have a design studio where you go in and say, okay, I need this particular VNF, and it connects to this load balancer, and this connects to this firewall, and I want to service-chain them together. I can also say, I want to deploy this on this specific rack, on this specific server, in this particular data center. Then it goes and checks that the particular data center has enough capacity to deploy that compute. And as the application owner, I can say, I need this deployed in another region as well; it can detect whether I have capacity in whichever region I'm asking for, and it can place the workload there.

The way it works is that once the design is done, it is handed to the orchestration layer, which looks at the policy and then compiles the Heat template; Heat is the engine in OpenStack that takes that template and goes and creates the workload. That's why we call it magic: it dispatches to multiple locations by taking the design of what the application owner really needs and applying the policies, because there is a lot of corporate policy involved. The application team that wants to deploy has to obey all the policies. For example, in any enterprise, in any company, there may be a security policy: this particular VNF cannot be deployed in a European Union country; or the workload cannot be transferred to the US. There are a lot of laws involved, especially when we try to deploy across most of the world, along with the US and other countries. Those policies need to be taken care of, and different teams and different people in the company define them. Those policies are also considered when the Heat template is created. Then it is sent to AIC, it is sent to OpenStack, and it creates the workload needed to make the whole application work.

One aspect there is a lot of discussion about is bringing the other open source communities in and having a relationship with the OpenStack community. Why do we need that? More collaboration among the open source communities provides interoperability between the applications, and this is very, very critical. People talk about Kubernetes, people talk about Docker; there are so many open source communities, and ONAP is another one. What it provides, especially for telco companies but also for other enterprises, because this is not just a telco application, is that anyone who needs a way to orchestrate a large-scale deployment, or even a small-scale one but wants those five functions we talked about, can use ONAP along with OpenStack. And it is purely open source, with a huge community behind it.
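To make the policy step concrete, here is a minimal sketch of how placement rules might veto a deployment request before the compiled Heat template is handed to OpenStack; the rule set and field names are illustrative assumptions, not ONAP's actual policy engine:

```python
# Hypothetical policy check applied before dispatching a workload:
# corporate/legal rules can veto a placement.
policies = [
    # geo/security policy: the VNF must not land in a forbidden region
    lambda req: req["region"] not in req["vnf"]["forbidden_regions"],
    # capacity policy: the target site must have room for the workload
    lambda req: req["site_free_compute"] >= req["vnf"]["compute_needed"],
]

def placement_allowed(request: dict) -> bool:
    return all(rule(request) for rule in policies)

request = {
    "region": "eu-west",
    "site_free_compute": 64,
    "vnf": {"forbidden_regions": ["eu-west"], "compute_needed": 8},
}
print(placement_allowed(request))  # False: the geo policy vetoes this site
```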
So we would like to have the OpenStack community and the ONAP community talking together, and we would like to create an integration project. This is basically to demonstrate that ONAP and OpenStack can work together. We have demonstrated it in our environment, and Catherine is going to show a demo, but we would also like a large community of people participating in OpenStack to collaborate on ONAP and make use of the good work currently going on there. Discussion-wise, we are working with many people in the OpenStack community to get that going, to bring these two communities together and make this solution work for everyone.

Thank you, Kandan. So what do you need to run ONAP and all the demos? First of all, you need to join our community; then you will have access to our Gerrit source code repository, where you will find our Heat templates, the YAML files, and also the environment files. For the demonstration, so far we have certified on Rackspace, but I know other developers in the community are trying other cloud infrastructures. You get your account on Rackspace, you set up your credentials, and then you launch the Heat stack on Rackspace. It takes more or less 20 minutes to deploy all the components except DCAE, the one related to data collection, events, and alarms, which takes more time due to the storage. And then you can enjoy ONAP.

So let's have a look together. Imagine that you have your Heat templates and your account on Rackspace. What you need to do is create your OpenStack stack, called ONAPstack, and you will see that all the VMs, here is where we create the ONAPstack, will be instantiated automatically. We have about 33 Docker containers, I think, and it needs 14 VMs to deploy the complete platform. Different states will appear: you start with Ready to Deploy, which is gray; then you move to Configuring, which is orange; and finally, when a VM is complete, it moves to green. So it's very easy: two files, you kick off your stack, and then your ONAP is ready to go. Just waiting... yes, it's coming green. Here we go.

When your ONAP platform is deployed, it's not only the source code that we have delivered, and not only the Heat template to spin up your ONAP platform; we are also delivering two use cases to demonstrate the capabilities of the platform. The first use case is the vFirewall. How does it work? A traffic generator sends packets to the vFirewall. You will see there is a VES client, for VNF Event Streaming, which exchanges information with our DCAE platform, the data collection, analytics, and events platform. Different collectors are put in place, and the collectors compare the data with the policy rules that have been created in the policy engine. At one point we detect that the threshold has been crossed, and therefore we send an instruction to the application controller to say: hold on, we are getting too much traffic, we need to reduce the traffic on the traffic generator. That's the first demo. So again, a quick video about that: we have our traffic generator sending packets to our vFirewall, and our VES elements here are sending information to the ONAP platform.
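In practice, kicking off the stack can be as small as the following sketch, assuming an openstacksdk clouds.yaml entry and the two files fetched from the demo repository; the cloud name and file names are placeholders for your own setup:

```python
# A minimal sketch of launching the ONAP demo stack from the published
# Heat template and environment file, using the OpenStack SDK.
import openstack

conn = openstack.connect(cloud="rackspace")  # credentials from clouds.yaml

stack = conn.create_stack(
    name="ONAPstack",
    template_file="onap_openstack.yaml",       # the Heat template
    environment_files=["onap_openstack.env"],  # the environment file
    wait=True,                                  # block until CREATE_COMPLETE
)
print(stack.id)
```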
We aggregate all the information in one collector, which compares it with the policy rules. At one point we get a message telling us that we have crossed the threshold, and the application controller sends a notification to the traffic generator to reduce the traffic. You will see the traffic slow down, back to normal, as you can see on this graph. The opposite can happen too: the traffic has slowed down so much that the vFirewall could handle additional traffic. Again, our VES client sends the information to the ONAP platform, and the ONAP platform sends messages through the application controller telling the traffic generator: you can put more traffic on it. And we are back to normal.

The second demo is also very interesting, because instead of changing the configuration of one VNF, the traffic generator, here is what we illustrate: again, we send traffic at the load balancer, and at one point the collectors that compare against the rules and policies from the policy engine notice that for the vDNS set up here, there is really too much traffic. So when the threshold is crossed, we collect the vDNS information from A&AI; remember, this is the Active and Available Inventory. And MSO, the service orchestrator, will initiate, will spin up a new VM with a new VNF in order to load-balance the traffic across the two VNFs. So, a quick demo of that: the traffic generator is sending a lot of traffic to the vLoadBalancer. The collector is aggregating the data, checking against the policy rules in place. In a few moments, I think, we will reach the threshold. Yep, we reached it. Therefore we look in A&AI for all the information related to the vDNS service that was created through the Service Design and Creation application, and MSO, the service orchestrator, automatically scales up. You can have the same behavior to scale down, of course. So those are the two demo use cases that you can practice with on the ONAP platform.

I think we still have a couple of minutes, because what I would like to do with you, since everything is really real, it's not only demoware: I'm already in the ONAP portal, and I would like to share it with you. But we have lost the connection with the screen, so I'm not sure; I need to switch. So let's see. Is it back? No... yeah, it is back.

So I'm currently inside the ONAP portal. Remember, this is where we have the different design functions: the Service Design and Creation tools, the policy creation framework, and the opportunity to create some analytics. I was telling you about policies that check thresholds based on the collectors aggregating values at the DCAE level. Here in fact are the different rules that have been created in the context of this demo. We have the BRMS rules, which are used for runtime policy; you have a description and some information about each. Maybe I didn't say it before, but we have two rule engines: one, XACML, for stateless transactions; the other one, based on Drools rules, for stateful transactions. In the context of this particular policy it's Drools rules, and it's about the vFirewall. So we have the set of information and parameters that we need in order to run this policy at runtime. If you go to edit, it's also possible to view the policy. Oh, sorry, that did not happen before, but that's the purpose of a live demo.
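A sketch of that scale-out loop in Python; the endpoints, paths, and payloads below are hypothetical placeholders, not the real A&AI or MSO REST APIs:

```python
# Sketch of the vLoadBalancer/vDNS scale-out described above: query the
# inventory for the service topology, then ask the orchestrator for one
# more instance. All URLs and fields are invented for illustration.
import requests

AAI = "https://aai.example.com"  # Active and Available Inventory (placeholder)
MSO = "https://mso.example.com"  # Service Orchestrator (placeholder)

def scale_out(service_id: str) -> None:
    # 1. fetch the vDNS service topology recorded at design time
    inventory = requests.get(f"{AAI}/services/{service_id}").json()
    # 2. ask the orchestrator to instantiate one more vDNS instance
    requests.post(f"{MSO}/scale", json={
        "service": service_id,
        "vnf_type": inventory["vnf_type"],
        "delta": 1,
    })

# scale_out("vdns-demo")  # would call the placeholder endpoints above
```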
So you can change the different variables, but you also have access to the code, and I think I've lost the connection in the meantime. Anyway, you also have access to the source code of the rules, where you can change additional information as well. So let's see if I can show the two rules, the vFirewall rule and the load balancer rule, that we have implemented for the demo. Let me change my profile to see if the connectivity is better and reconnect to avoid this type of issue. I hope so.

So I'm connecting again to the ONAP portal, and now I'm going to this famous SDC, the Service Design and Creation application, where you have the different types of resources. You can have VNFs, with different categories of VNFs at the application level; there is also information about DCAE, this important platform for collecting the events and the alarms; you can have different kinds of network connectivity, covering layer 2 and layer 3; and you can also have services. Currently it's limited to these applications, but we can enhance the catalog. After you have created all these resources and services, you have a kind of catalog, and you can filter on the resource types: VF, the virtual functions; VFC, the virtual function components, like a load balancer; CP, the connection points; and VL, the virtual links. The vDNS and vLoadBalancer are in fact the two services that we have defined here. You can see there are different statuses as you create the environment. Of course, you can have resources, services, or products that are still under design, meaning they are not yet deployed on the platform itself; then ready for testing; there are certifications that happen as we create this environment; and when something moves to the distributed status, it means it's really active, like we saw in the demo.

If we still have a few minutes: I was telling you about the Heat templates after you have registered with the ONAP community. We deliver not only the source code; we also offer a continuous integration toolchain, with Jenkins jobs to generate your build automatically and to generate artifacts that are posted on Nexus. Then we have another Nexus that holds all the Docker containers as well. We have Sonar as part of the toolchain to monitor the quality of our source code and to get an idea of the test coverage. So where can you find the Heat templates to set up your own ONAP platform? You just go to Gerrit: gerrit.onap.org. And as you can imagine, there are also jenkins.onap.org and sonar.onap.org. You go to the list of projects and scroll down until you find a project called demo, which is right here. Then you click on gitweb and scroll down; there is a heat directory under the tree. You go to OpenECOMP, and this is where you have the two files, the YAML file and the environment file, which will help you create your stack, okay? And moving up, if you're interested, these are the two Heat templates related to the two demos, the vFirewall and the vLoadBalancer for the vDNS.

I don't know if I can show more. Yeah, I can show more. I want to go to the wiki for a few minutes, because I was telling you we started from the basis of the ECOMP platform, which has been open-sourced, but there is a lot of activity. Just last week we had a major event, the ONAP convention, organized in Middletown, and I really invite you, you don't need to register, it's open publicly.
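A small illustrative model of those lifecycle states; the real platform manages these internally, and the names below simply mirror the statuses mentioned above:

```python
from enum import Enum

class LifecycleState(Enum):
    UNDER_DESIGN = 1       # not yet deployed on the platform
    READY_FOR_TESTING = 2
    CERTIFIED = 3
    DISTRIBUTED = 4        # active, as seen in the demo

def promote(state: LifecycleState) -> LifecycleState:
    # Move an asset one step forward in its lifecycle, capped at the end.
    order = list(LifecycleState)
    i = order.index(state)
    return order[min(i + 1, len(order) - 1)]

print(promote(LifecycleState.CERTIFIED))  # LifecycleState.DISTRIBUTED
```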
I really invite you to go through the presentations if you're interested in how we will move forward merging ECOMP, now open source, and OPEN-O. I really invite you to look at the architecture evolution slides: you will see the different phases, and you will see how the two platforms are converging into one platform called ONAP. We are trying to answer all the questions that you might post, in particular on the wiki, but we also have distribution lists. So, going back here: if you have any question, you can always subscribe to any of the mailing lists, which are available again through the wiki, in particular the onap-discuss mailing list. If I go there quickly: there is a lot of discussion going on, because we have certified using Rackspace, which is our flavor, but we are also certifying on vanilla OpenStack internally, and there are a lot of questions about OpenStack because people are trying to validate on Kilo or Ocata. So you just need to subscribe, you post your question, and you will see it's not only the original founders of the community; a lot of people are really discussing and helping ensure that we move fast in the ONAP community.

Okay, thank you, Catherine. We can take a few questions. I would also like to mention that ONAP is not just about the private cloud; it can also support public clouds like Rackspace or AWS. The demo Catherine showed is actually hosted on Rackspace. And as I stated before, it is not just for telcos; it can be used by anyone else, because it's open source. Any enterprise, any data center where a cloud is being managed, it can be used to support any type of workload. Go ahead.

Thank you for the presentation. My question is from the VNF lifecycle management perspective, things like backup and restore, upgrades, and migrations. Does ONAP still depend on each VNF vendor to handle these through their EMS, or will ONAP skip the vendor EMS entirely and perform these by itself? Thank you.

Well, when you have a chance, I would like to invite you to look at the wiki, because we are currently submitting different projects that will be reviewed by the TSC and finalized by June 1st. One of these projects is about all the OAM aspects, and I expect that all these operational aspects will be covered, including backup and restore. That's an initiative that has been presented, and we hope it will be approved by the TSC. Thank you. Go ahead.

Can you hear me? Yeah. I'm Satin from Nokia, and I had a question. You said that it can also do IT. Do you have IT cloud management support as well in ONAP?

It is very generic, right? At the end of the day, the VNF itself is a VM. You could host a web application, a load balancer, or any type of IT-oriented application; it doesn't matter from the ONAP perspective. As long as you define the application, you can definitely deploy it through ONAP and OpenStack, or even with other clouds.

Let me qualify the question real quick. Do you do chargeback and multi-cloud management, public and private, through a single pane of glass? Such as, let's say, ManageIQ does today; it's another open source project.

So chargeback: I think, Catherine, I'm not sure whether chargeback is supported in ONAP as of today. No, it's not yet supported as is. But again, you are welcome to submit any type of project if you believe it could be of interest to the ONAP community. Go on the wiki; there is a section where you can submit your project. And why not?
You will be joining us soon. That's the power of open source, right? You can ask for something you'd like to see in the community. Go ahead.

You mentioned in the what's-next plans a working group for integrating OpenStack with ONAP. Where do you plan to do that: in OPNFV, in the OpenStack community itself? How can I participate?

We are looking at both the OpenStack community and OPNFV. We are also talking to the OPNFV team; we are part of OPNFV too. We are actually asking both communities to come together. From a testing perspective, I think OPNFV may be the right place to do the testing, but the exchange of ideas between these two open source communities definitely needs to happen directly between ONAP and the OpenStack community; the testing could definitely happen in OPNFV. Okay, thank you. Go ahead.

My question is about the actual resource management itself. Does ONAP plan to keep track of the actual hardware resources? The moment we talk about orchestration we get into a delicate area, especially with VNFs, where there are specific hardware requirements. How does ONAP plan to address that? Does it plan to maintain some sort of database, such that during the design phase, since one of the five things was design, the system can look into that database to see what hardware resources are available and decide how to place these?

It's really handled in two layers. One layer is the infrastructure layer: let's say we use OpenStack for the orchestration there, so OpenStack has to make a decision where to put the workload. But there is also an uber orchestration layer in ONAP that decides across multiple data centers. In OpenStack, we have enhanced some of the code, which we are working with the community to contribute back, and which ties into the inventory system you're talking about; it can do more than what Nova is doing today, and it can take a policy-based decision. You can say that this particular rack has some specialized hardware, and we would like to use it. There is actually a talk the AT&T team is giving about profiles and how they relate to hardware. So there is support that we have built, and we are still working with the community to put that back into the OpenStack community. Thank you.

A follow-up on that: it would also take care of my migrations, I would think, based on... Yes, that's the intention. Yeah, thank you.

Peter from Ghana. My question is how you position OpenStack compared to the ETSI MANO kind of architecture. I'm not sure I followed the question; can you please repeat it? Do you position OpenStack as a VIM, or is it positioned as a MANO orchestration layer?

They are two different platforms coming together. There is not really any shim layer between OpenStack and ONAP. We like to use the APIs natively exposed by OpenStack, whether the APIs directly or the Heat templates. So there is no shim layer: it's really using the APIs, whatever the community has defined, and ONAP talks through those APIs to interact with OpenStack. But if you have additional questions after the talk, we can discuss them in detail. And if you look at the controllers, they are based on OpenDaylight; that's also how we do the connectivity with OpenStack.
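As a toy illustration of that policy-based, hardware-aware placement, in the spirit of matching workload requirements against rack capabilities; the rack data and feature names are invented for the example:

```python
# Hypothetical placement check: pick a rack whose capabilities satisfy
# a VNF's hardware requirements (compare Nova host aggregates and
# flavor extra specs for the real OpenStack mechanisms).
racks = {
    "rack-a1": {"sriov": True,  "free_vcpus": 48},
    "rack-b2": {"sriov": False, "free_vcpus": 96},
}

def place(requirements: dict):
    for rack, caps in racks.items():
        if caps["free_vcpus"] >= requirements["vcpus"] and \
           all(caps.get(feature) for feature in requirements["features"]):
            return rack
    return None  # no rack satisfies the policy

print(place({"vcpus": 16, "features": ["sriov"]}))  # -> rack-a1
```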
That can be the network controllers or the application controllers; they rely on that framework.

And the follow-up question is: what is the status now between ONAP and OpenStack? I mean, what development is needed on top of that? The question is whether we have to do some enhancement in OpenStack to use ONAP. Currently we use it as is; there is no specific enhancement. For the demo Catherine was showing, no special enhancement is really needed. As I stated, it's all APIs, the native APIs, and if you're willing to use the Heat templates, they can be used as is. No enhancement is needed; it's two different platforms coming together using the native APIs. So you don't need any special enhancement. But if there are any additional requirements from your cloud perspective or your tenant perspective, that's something you would have to do yourself within that layer. ONAP and OpenStack themselves don't need anything extra to interact today, because that's already part of the solution. There can be some configuration changes, some parameter changes, depending on the OpenStack flavor, and that's why we're also looking to the community, because we could not validate every OpenStack version. We never know; we don't deny that some changes may have to be made, and of course we will consider them. But so far we have noticed it's mostly configuration changes, especially regarding the service orchestrator triggering the controllers. And as I told you, the controllers run on the OpenDaylight framework. Thank you. Thank you.