Thank you very much. So first, a little bit of an introduction of who we are. My name is Jonas, and I work for HPE as an architect. I'm also part of an open source project, ODIM, Open Distributed Infrastructure Management, that HPE is part of along with a few other companies and participants, and there I'm on the technical steering committee. Martin, do you want to say a few words?

Yeah, sure. Hi, everybody. I'm Martin Halstead. I'm part of the same organization as Jonas within HPE, focused on infrastructure for telco. Once Jonas has gone through what ODIM is, I'll talk a bit about how we plan to deploy and make use of that type of architecture as it relates to deep-edge vRAN-type implementations within telco networks.

Okay. And just to go easy on your eyes, we're going to stop the video here and have you focus on the slides instead. So let's do that. Part of this presentation is going to be the ODIM project, and I'm going to go through that a little bit before we dive into what Martin said: how to use this and other things in a different type of deployment. Some of you have probably heard about ODIM, but I'm still going to introduce it a little bit to start with. There is a certain amount of challenge in today's network deployments for 5G. We have focused this presentation on telcos, but ODIM is applicable in other parts of the industry as well. In telcos we can see, with 5G use cases, that there is a lot of push to the edge. There are a lot of new mini data centers. So you have this distributed type of architecture, and you need to manage all the resources in those different locations. You also have a different type of equipment in a data center than you have at the edge, so you have these heterogeneous platforms.
At the edge you might have different constraints like size, temperature, and other specifications like NEBS, which talks about seismic activity as well. And then you add to this that you have different vendors. If you are a telco and you're going to deploy all your data centers, you'll have different vendors and different types of equipment. So how do you talk to all this equipment? How do you manage it? The equipment, as you can see here, has different types of management APIs. Some of them conform to a standard like Redfish; some do not. And even if you conform to the standard, there are different implementations from different vendors. Talking to vendor A's Redfish implementation is not exactly the same as talking to vendor B's, and it's not because vendor A or B are in any way, shape, or form in violation of the specification. It's the fact that they have chosen to implement some properties that the other vendor didn't implement, or vice versa. So you have these challenges as well. And then a lot of vendors have a management solution on top that is a closed solution. If you think about the situation an operator will face, it's basically this challenge here: you have a lot of things that want to use this, and you have different vendor solutions. At the bottom, you can perhaps only manage your own vendor-specific resources, but an operator will need to manage it all. So you get this very complicated management solution that you need to put in place. This is what we're trying to address with ODIM.

So if you look at ODIM, just to introduce what it actually does, on a high level we have three things. It does abstraction and translation, meaning that these different nuances of Redfish, or different management protocols altogether, can be abstracted away from northbound clients.
The northbound client will know exactly what Redfish version and what properties in different objects to expect. That's part of what ODIM is doing: this abstraction, with an abstraction layer and with different adapters, which we call plugins. So that's one of the things. The other thing it does is aggregation. What does that mean? On a Redfish level, it basically means that we have an aggregation service. We put that in place in the DMTF when we started the work on ODIM, so that is a fairly new type of service. It allows us to add and remove the resources that an aggregator would manage: new servers, new switches, new storage, and these types of things. But it also means that a client doesn't need to know about the equipment in the data center. It can simply ask ODIM. It just authenticates itself with ODIM, and then it doesn't need to know the IP addresses of the resources or any of the credentials; ODIM will list all the available resources for the client. The other thing is that it allows a client to do bulk operations: let's go reset an entire rack, or let's upgrade an entire aisle in a data center to a new firmware level. These types of operations are also aggregation. Finally, there is proxying. ODIM can be multi-homed, and the client therefore doesn't need to be on the actual management network; it doesn't have to have connectivity to the management network. That allows more centralized functions like composition and monitoring to live northbound in a more centralized location. Just taking a step back and looking from a high level at what ODIM does: it simplifies the picture from before into a more simple way of managing things. If you start on the southbound side, you have all these different management APIs, and you have these plugins or adapters that can translate from them into the ODIM model. And the ODIM model is no special model.
It's actually DMTF Redfish, and on the northbound side we have DMTF Redfish APIs exposed. All the services running there are Redfish services and can be communicated with using a Redfish API. So that's basically the vision of ODIM. If we dive into the actual architecture, everything inside the dotted line here is ODIM. You can see that the services layer in the middle is standing up different services — account service, event service, aggregation service, and so on — and also hosts the model. On the northbound side, you have the API layer that I talked about, and on the southbound side, you have different types of adapters. I will go through some of the adapters that are available in our first release. One thing to remember is that there are open source adapters, and there are commercial adapters from different vendors as well. The whole ODIM project is licensed Apache 2.0 to enable commercial use and different commercial use cases. If you look at the community, we started fairly recently. It was actually in July that we formed the project, just as an unfunded Linux Foundation project, not part of any umbrella community. But now we have operated for a few months and we want to move into LFN, so we have some plans to do that, and hopefully we can get in there in March. There was the LFN developer and testing forum this week, and we had a track there; I think there was even a presentation today that you might have seen. We run different meetings: a technical steering committee meeting every Wednesday, and a more architectural meeting — a proposal meeting, we call it — on Tuesdays. That's where we discuss bigger contributions into ODIM and these types of things. And there are obviously some wiki pages and GitHub pages that you can see here.
Apart from that, we had our first release, which I'm going to dive into a little bit; it came out on Monday this week, actually, because January 31 was on a Sunday. Right. So how are the current contributions looking? HPE did a lot of seed contribution initially, but we have also seen a lot of contributions from Intel: there have been some plugins, including an unmanaged rack plugin that I will get into, and they're working on a BMC-type emulator. We also see a contribution from AMI coming in soon: a composition service. There are still some discussions going on in the DMTF about that composition service, so we don't know exactly when it's going to land, but we hope it will be there before the next release so it can be part of the August release. Right. So, going through the releases: we have just released 2101, which is named after the year and the month. I will dive into the release process a little later, but on a high level: when you clone and build ODIM, you're going to see that it builds in containers and produces a bunch of containers where it runs, all Docker based. We have started transitioning over to Kubernetes, but we didn't make it in time for this release, so the next release will see a Kubernetes-based release and build process. As far as plugin support goes, we have two different plugins. There is a generic Redfish plugin that can be used for different resources that speak Redfish, which I will dive into a little later, and there is the unmanaged rack plugin that I mentioned Intel is contributing. We also have a lot of Redfish APIs from the different Redfish services, which we will go through, and from the get-go, in the first release, you can do actions or bulk operations on collections of resources.
The services that are in 2101 are these, and briefly, here is what each service does. The aggregation service, like I mentioned, can be used to add and remove managers in the aggregator. If you have a new server, you can add it using POST operations on the aggregation service, and you also supply information about credentials and these types of things. After you've done that, the server will show up as a computer system, as well as a chassis, and so on and so forth. The aggregation service will also allow you to do bulk operations like resets and these types of things. We have another service, the update service, for doing bulk operations on firmware. There are other things in the aggregation service in the DMTF Redfish specification — defining your own aggregates, defining customized actions, and things of that nature — that have not yet been implemented in ODIM's version of the aggregation service, so that's something to look for in the future. Another service in Redfish that is implemented in ODIM as well will allow you to subscribe to events. Different types of events exist; it could be alerts and these types of things, so it's something that a monitoring system would be interested in. The beauty of this is that, instead of having to set up a subscription on each resource in the data center, you can just say to ODIM: hey, what resources do you have? Okay, take all these resources I have here in my collection and set up this type of event subscription to them, and, by the way, here is where you should send all the events. It's a single operation that sets up subscriptions for the infrastructure and resources you, or your monitoring system, are interested in. So that is quite powerful, actually.
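The add-a-server and bulk-subscribe flows just described can be sketched as Redfish-style request bodies. This is a hypothetical illustration: the URIs and field names follow common DMTF Redfish schema patterns (AggregationSource, EventDestination), not a definitive ODIM API reference, so check the APIs published with your release before relying on them.

```python
import json

# Illustrative URIs following the Redfish pattern; exact paths are an
# assumption, not taken from the ODIM documentation.
AGGREGATION_SOURCES_URI = "/redfish/v1/AggregationService/AggregationSources"
SUBSCRIPTIONS_URI = "/redfish/v1/EventService/Subscriptions"

def add_source_payload(host, username, password):
    """Body for a POST that registers a new server/BMC with the aggregator."""
    return {"HostName": host, "UserName": username, "Password": password}

def bulk_reset_payload(system_uris, reset_type="ForceRestart"):
    """Body for a bulk reset action over a whole collection of systems."""
    return {"ResetType": reset_type, "TargetURIs": list(system_uris)}

def subscription_payload(destination, origin_uris, event_types=("Alert",)):
    """Body for one subscription covering many resources in a single call."""
    return {
        "Destination": destination,  # where ODIM should deliver events
        "EventTypes": list(event_types),
        "OriginResources": [{"@odata.id": u} for u in origin_uris],
    }

body = subscription_payload(
    "https://monitor.example.com/events",
    ["/redfish/v1/Systems/1", "/redfish/v1/Chassis/rack1"],
)
print(json.dumps(body, indent=2))
```

The point of the sketch is the shape of the calls: one POST per new resource, and one POST that fans a subscription out over a list of origin resources instead of one subscription per device.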
And then, obviously, ODIM has a session service and an account service. The session service is there to let you log in, retrieve a token, and use that for operations, and the account service is there to set up different roles and accounts. Then we have a task management concept, which is also in Redfish, for long-running operations. Instead of waiting for a return, you will get a task back and a task monitor URI, which you can use to query what's going on with the operation you asked for and how much is done. You can even set up a subscription, so when the task changes state — perhaps it's done, or it failed — you will get an event. So that's also a possibility. Then we have the update service, and there we have only implemented a simple update so far. There are going to be other contributions from other companies to further enhance the update service. This is an area where you have very different solutions from different vendors, so it's a complicated space, and it's difficult to make anything that works across vendors, but we're going to get there for sure. And then we have registries, obviously — the regular Redfish registries that you can query — and you can find registries, for instance, for the alerts and events and things of that nature. All right. Moving on to one of the plugins: the generic Redfish plugin. That is a plugin that just speaks Redfish. One thing the ODIM project is doing as well is releasing Redfish profiles for ODIM. We have not done that as part of this release because they're not quite complete yet, even though you can find them in the source in a separate branch. Those Redfish profiles tell you what properties and what objects you should expect when you talk to ODIM. That's for northbound clients, but the profiles also tell plugin developers what properties need to be implemented in a certain schema.
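The session and task pattern described at the start of this answer — log in once for a token, then poll a task monitor URI for a long-running operation — looks roughly like the sketch below. The names are illustrative, and `fetch` stands in for an HTTP GET returning the task resource as a dict; it is not ODIM's actual client API.

```python
def session_payload(user, password):
    """Body for POST /redfish/v1/SessionService/Sessions; in Redfish, the
    token comes back in the X-Auth-Token response header."""
    return {"UserName": user, "Password": password}

def wait_for_task(fetch, task_uri, max_polls=10):
    """Poll a Redfish task monitor until the task leaves a running state."""
    for _ in range(max_polls):
        task = fetch(task_uri)
        # TaskState values come from the Redfish Task schema.
        if task.get("TaskState") not in ("New", "Pending", "Starting", "Running"):
            return task
    raise TimeoutError("task did not finish: " + task_uri)
```

In practice, as noted above, you could also subscribe to task state-change events instead of polling.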
This is something we plan on making sure of: that the generic Redfish plugin will expose all those properties. We're not quite there today; the generic Redfish plugin has support for most Redfish operations, but it doesn't really worry too much about the mandatory properties that the profile puts forward. But the thing it certainly does a good job of right now is serving as a plugin template. You can use it to start building your own plugin, and we have used it ourselves in the work currently going on in the community on an open source plugin for Dell servers. So the generic Redfish plugin is useful in that you can do some Redfish operations with it. It's not quite fully mature as far as conforming to the profiles, but it's an excellent starting point if you want to develop a plugin of your own. Moving on to another plugin, we have the unmanaged rack plugin. That is a plugin for racks without managers, and most racks don't have managers. Why do we need a manager for a rack? Well, there are some objects hosted under a rack. For instance, there is the Contains property: you go to a rack and ask what is in the rack, and when you look at the Contains property, there should be links to all the chassis that are sitting in the rack. There are also other objects, like Location: you can get GPS coordinates, what aisle and what row something is in, and that is another thing you need from the rack. You use all this to be able to extract topology information — what row, what aisle, what floor, and things of that nature — and this is key when you want to map out connectivity, because even without a rack manager you know that a certain server port is connected to a certain switch.
But where is that switch? You look at the switch chassis and follow the link to the rack, or you look at the rack and follow the link to the switch chassis, and vice versa. So it's key to being able to model a data center. This rack plugin was contributed by Intel. It has a management interface that can represent many racks, and it has to save state, because when you put things into the rack, it needs to log that in the Contains property, so it has a local database. That is part of the first release. If we look a little at what is coming up in the next release — and this is something that might change, obviously, but these are the plans right now — we have development for a Dell plugin going on. The reason for that is that we want a multi-vendor experience with ODIM, because that's what ODIM is all about. It lives in its own branch right now, and there is collaboration between Intel and HPE; we are hoping to attract other contributors here as well. Then there is the work on the BMC emulator going on on the Intel side, and, as I mentioned before, there is the composition service that AMI is looking at. One thing to mention there is that, out of the box, the current composition service in DMTF Redfish does not address things like connectivity. So there is a lot of discussion; there is actually a task force inside the DMTF looking at the composition service. I'm confident that something really good is going to come out of that, and we are providing input from ODIM as well. Once that has landed in the spec, AMI can actually start contributing it to the project. And then, finally, we are looking at a Cisco ACI plugin as well. We are dealing with networks on a fabric level inside ODIM.
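The rack-topology walk described a moment ago — following a rack's Contains links to find everything mounted in it — might look like this sketch, where `fetch` stands in for an HTTP GET returning a chassis resource as a dict. `Links.Contains` is a standard Redfish Chassis property; the helper itself is a hypothetical illustration, not ODIM code.

```python
def contained_chassis(fetch, rack_uri):
    """Return the URIs of every chassis mounted, directly or nested,
    inside the given rack chassis."""
    found = []
    rack = fetch(rack_uri)
    for ref in rack.get("Links", {}).get("Contains", []):
        uri = ref["@odata.id"]
        found.append(uri)
        found.extend(contained_chassis(fetch, uri))  # recurse into enclosures
    return found
```

Going the other way — from a switch chassis up to its rack — would use the matching `Links.ContainedBy` property.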
DMTF Redfish has a fabric model that has been enhanced lately to address things like Ethernet networks and Ethernet fabrics. Today there is a commercial plugin from HPE for an Aruba-based fabric, but there is nothing in the open source community, so this is what we're looking at. It's more in a discussion phase right now — no code has been dropped or anything like that — but it's something that we are looking forward to seeing in the project very soon. So that's it, I think, for the release. I also want to mention a little bit about how we do releases for the first year, because we are expecting that we will have more contributors and more participants in the future. For the first year we're planning on two releases: the first one just came out, and the second one will be in August. We don't have any maintenance releases right now, and that's probably how it's going to stay. So if you have some problems with 2101, you can log issues and they will be fixed, but don't expect a maintenance release from the project; instead, there will be a new release in August that has addressed all of those. If we look a little at the branching model: the development branch just continues all the time, and you can add features and functionality there. But four weeks before the release, we have an integration period, and that's where we take longer-running projects, sub-projects, or larger features and integrate them into the development branch. Obviously, for any type of small feature, there are feature branches as well; they will come and go, and they can go into development at any time, not specifically in the integration period. But it's the larger ones that go in during the integration period.
Then, two weeks before the release, we start the release branch and we release RC0, RC1, and so on, depending on how many issues we see. One thing there: we don't fix minor things in the RC period; we basically fix bigger showstoppers. So that's how the release model looks right now. Obviously, once we make it into LFN, we might review this, and if we get more participants, we might step up to a three-releases-per-year type of approach instead. Were there any questions? It's a good opportunity to look at those right now. And Martin, perhaps you already addressed them. So then, Martin, I think this is where I hand over to you. Sorry.

Yeah, well, actually, just before you do, there was a question on how we support streaming telemetry in ODIM.

Ah, so we don't have the telemetry service right now. That is being looked at, and I haven't listed it as part of the next release; it might show up then. As far as streaming telemetry, I don't know — we are still discussing this a bit in the project, and I just want to say that I think there are different views from different participants on what the telemetry service should address, which is very natural. Right now we're having a discussion, and we want to land on something that is useful for everybody. Streaming telemetry is obviously one thing; getting those telemetry reports as part of events is another use case. So we're looking at all of that, but we haven't really taken a lot of decisions. Martin, the telemetry service has not even been approved yet, if I remember correctly, right?

No, not yet. But yeah, I mean, it seems to be a fairly common request from a number of operators. So it is going to become an area of focus for ODIM.

Yeah. And then we have two other questions here. Martin, do you want to take the OneView question, perhaps?

The OneView question. Yeah, sure.
So, HPE OneView: the northbound interface from ODIM is fully Redfish compliant, and from the perspective of OneView, we're working with that organization in terms of them exposing a Redfish interface southbound from that stack. So it's in the plan to support OneView, but it means working within our business units to get that done. We're also in discussions in the same space outside of HPE products, and within the ODIM project itself there are, as Jonas mentioned earlier with companies like AMI for their composition service, other solutions out there as well that are part of the project.

So, shall I say a couple of words about SNMP?

Yeah, please do, if you like. I mean, there is currently no SNMP support inside ODIM, but there is nothing preventing you from setting up a plugin for SNMP and translating it to Redfish events, of course.

Yeah, and that's what I was going to elaborate on a little bit: we've worked within the DMTF to enhance the networking set of registries for the event service within Redfish. Our expectation is that if you look at the set of networking events that are out there — and there are now fabric-focused, actually Ethernet-fabric-focused, ones — you should be able to translate SNMP-based traps coming from networking equipment into Redfish events using the standardized registries. So that's how we see things developing on the networking side with support for things like SNMP.

Andris had a follow-up question there, Martin: SNMP polling.

So, yeah, we'd need to focus on that as well. In the first implementations that we would have, it would be a one-to-one translation rather than polling SNMP. But yeah, we'll look at that.
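A minimal sketch of the one-to-one trap-to-event translation Martin describes. The linkDown/linkUp OIDs are the standard SNMPv2 trap OIDs; the MessageId values here are placeholders, since a real plugin would map into the DMTF networking message registries mentioned above.

```python
# Hypothetical OID-to-message mapping; a production plugin would use the
# standardized DMTF registries rather than these placeholder names.
TRAP_TO_MESSAGE = {
    "1.3.6.1.6.3.1.1.5.3": ("LinkDown", "Warning"),  # standard linkDown trap
    "1.3.6.1.6.3.1.1.5.4": ("LinkUp", "OK"),         # standard linkUp trap
}

def trap_to_event(trap_oid, origin_uri):
    """Build a Redfish-style event body from an incoming SNMP trap OID."""
    message_id, severity = TRAP_TO_MESSAGE.get(trap_oid, ("Unknown", "Warning"))
    return {
        "EventType": "Alert",
        "MessageId": message_id,
        "Severity": severity,
        "OriginOfCondition": {"@odata.id": origin_uri},
    }
```

The event body would then be delivered through the normal ODIM event service, so northbound monitoring never has to speak SNMP itself.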
Very happy to have a follow-up on that. All right. If nothing else on ODIM — and please feel free to ask again while Martin is presenting; we can always jump back — we now have a little section here diving into deep-edge deployment that Martin is going to present. So, Martin, let me know when you want the next slide and so on.

Yeah, sure, will do. Thanks, Jonas. So we'll start off with some of the ideas and the direction that our company, HPE, has in terms of how we want to approach the market for the support of virtualized network functions as they move outside of core data centers. As you know, there's a huge industry buzz around disaggregation: the idea that telecoms operators take fine-grained control of their infrastructure strategies and move away from the vertically integrated stacks that traditional telecoms equipment is composed of. Now, the operators are actually fairly sophisticated in what they're doing with this. So it's not just about the disaggregation of the physical infrastructure — architecture disaggregation, which you can see on the left-hand side of this picture — but also how they organize themselves, which has quite a large bearing on how companies like ours, and obviously others, need to present their solutions. And that's down to the way those offerings are procured. Typically, you get disaggregation on the procurement side: you have various teams within the operators that are primarily focused on just buying things like orchestration, the network functions themselves, and the infrastructure — all separate activities in separate parts of the procurement arm of an operator.
Again, that has a bearing not just on technology but also on how solutions are commercially packaged. The third piece is delivery disaggregation, and there we see fragmentation as well. Obviously, an awful lot of telecoms projects are delivered via the major network equipment providers, and we all know who they are. But there is also delivery of those projects with a separation of responsibilities across hardware and infrastructure, with an overall system integrator — a non-network-equipment-provider system integrator — taking care of pulling the pieces together and rolling those solutions out. And then the third one, which is slightly less common, is the DIY approach, where the operators themselves are responsible for integrating the full stack of disaggregated hardware and software that they have procured. So, taking all of those factors into account — if you want to move to the next slide, Jonas — the thing we have observed as a vendor in this space is that we started with horizontal disaggregation. We as a vendor have always gone about things in terms of the separation of hardware and software: the separation of physical infrastructure from things like operating systems, network functions, and so on. We started off in this space primarily for the virtualization of core network functions, ten-plus years ago, where we took network functions that were originally deployed on proprietary appliances and had them working on industry-standard servers. We were one of the pioneers, if not the pioneer, in that space. So that's from our NFV days, where we started off with core networking.
We see exactly the same strategy being successful for us outside of the core data centers, moving from core to regional to edge and then out into the radio access network as well. So we see that whole momentum of infrastructure disaggregation moving from the core to the edge with no let-up. Next slide. So what does that mean for a vendor like HPE, and obviously others in the industry, that are looking at horizontal disaggregation? To us, it means that we are typically an infrastructure vendor into the telecom space, and we would look to continue that across compute and networking: HPE providing solutions — our infrastructure — with a rich partner ecosystem for the components that go into that hardware infrastructure on the hardware side of things, such as accelerators, GPUs, and various CPUs, depending on the use cases. But, crucially, it also means giving the telecoms operators choice in the operating systems, virtualization, and infrastructure management solutions that they use, and in how those are deployed — not just on compute, but also for networking infrastructure. So we see horizontal disaggregation moving from core to edge, with choices for the virtualization software stacks, network functions, management, and so on, in a horizontal manner. The thing we would never do as a vendor is build vertically integrated stacks with best-of-breed software vendors and deliver those as packaged solutions. Obviously, there are bespoke places where that kind of works in the enterprise space, but that doesn't hold generically in telco.
I think that would close us off to the majority of the market. So our remit is all about partnering. And then, in terms of how we do that — and this is why a project like ODIM, and our participation in open source, is so critical to us — if you think about us having these compute and networking contracts for infrastructure, we want to make sure that we expose that infrastructure to the northbound clients that need to manage it in as open a manner as possible. So the use of ODIM, in terms of providing an aggregated view of all of the infrastructure inside these data centers, becomes ever more crucial as you start moving towards the radio access network. With the multiples of small aggregation points that you're going to have as part of that, projects like ODIM become more and more important for giving standards-based visibility of the physical infrastructure that's going to exist in those areas, so that you can perform lifecycle management of that infrastructure. Next slide. We see this primarily in the radio access network because of 3GPP and its functional splits for the radio access network. The implication, for an infrastructure vendor like ourselves, is that telecoms environments for the radio access network are going to vary wildly across operators. The placement of compute and Ethernet switching — in the case of HPE — means those deployment architectures are going to be very different depending on the operator's chosen set of functional splits, and obviously, within individual operators, there could be more than one split as well.
So you could end up with centralized, aggregated points of presence where you have multiples of physical compute nodes doing baseband processing, with associated low-latency, synchronized Ethernet switching — as would occur in something like a centralized RAN — versus a D-RAN deployment where you may want to collapse network functions onto individual servers. Next slide. So what that means for a vendor like us is that we are looking at a number of investigations into how we build these infrastructure blueprints for the radio access network while still following the model of disaggregation in telco networks. Obviously, our areas of focus go beyond just product selection, which would be based on space, environmental, and power constraints — that is, which are the right product lines to pull into these types of architectures. It's really crucial for us as well to build a component software vendor ecosystem; just as we've done for the core network, we aim to do exactly that for the radio access network. Once we've pulled those two aspects together, the work is then all about the integration of network and transport functions, and we want to make sure we have the capability to have those deployed on as few devices as possible. This is absolutely crucial because of the lack of space and power, and the environmental constraints, that you're going to have in these points of presence. So integration is an absolutely key area of focus for us.
On top of that, when you look at integration and the network functions themselves: where would you actually want to deploy those network functions, and how do you optimize network function placement? Because if you imagine that you have combinations of compute and switching within individual locations, then you should be able to deploy network functions on the correct set of infrastructure, be that compute, Ethernet switching, or otherwise. And when you're deploying things like management stacks, again, deploy those software stacks on the infrastructure that makes the most sense, so that you have the smallest possible footprint for these locations. These are a couple of extremely strong areas of focus for us: integrated transport network and vRAN functions, perhaps on individual servers versus reduced-footprint aggregation, and the optimal placement of network functions across that aggregated set of infrastructure. And then, obviously, through our participation in O-RAN, it is all about making sure that, given we have this infrastructure, it is open in terms of how a third party can manage it. So open distributed management of aggregated physical and virtual infrastructure is, again, an area of focus for us. Hopefully that gives you a bit of a flavor of the directions HPE is looking at as the industry moves towards disaggregation in telecoms networks, particularly focused on the radio access network. Next slide.
And so that ends up being a value proposition that stems from the heritage we already have in telecoms networks; there's nothing necessarily new here. We really have competencies in compliance testing for operating systems, drivers, components, and so on. There is a strong competency in building blueprints, initially for the core network and then moving out towards the radio access network; sets of toolchains for the bootstrapping and provisioning of that infrastructure; as-a-service offerings through GreenLake; and then, obviously, the development work that we're doing through ODIM and that project. So we see the combination of all of those capabilities allowing companies like ours to offer end-to-end telco infrastructure enablement. That was all I was going to present for this piece, but hopefully it gives you a flavor of the direction our company is moving in, in terms of disaggregation and architectures for deep-edge deployments.

Yeah, and I don't see any new questions online, so perhaps we're done for today, then. Okay. Well, thank you very much, everybody.

Yeah, thank you.

All right. Thanks, everyone, for joining us, and we hope you have a great rest of your day. Thank you.