OK, thanks everyone for staying with us until now. We have an exciting panel coming up, so if you could take your seats, we'd like to begin. The panel is going to review the status, the future, and the evolution of cloud-native infrastructure for telecom. We talked a lot today about cloud-native network functions and how to build them, and I think with the latest presentation we started digging a bit deeper into the infrastructure. Now we're going to take a deep plunge and speak about the infrastructure itself. We have a great set of panelists today representing large telecom operators from across the globe. We'll quickly start with a round of introductions so you know who's who, and then we'll dig into the questions. So Yoshi, do you want to start? Yes. Good afternoon. I'm Yoshihiro Nakajima from NTT Docomo. I'm working on NFV platform development and on standardization in network virtualization, covering both the core and the radio side, and I'm also serving as a chair in that standardization work. Thank you. Good afternoon, everyone. My name is Philippe Ensarguet. I'm VP of software engineering at Orange, and basically I'm driving the cloud-native telecom transformation for Orange. If you have any questions about Project Sylva, which was mentioned earlier in the session, don't hesitate to reach out to me. And I'm Pål Grønsund from Telenor, director of cloud strategy and architecture across both the network side, involving centralized, public cloud, and edge types of workloads, and also the IT side. I'm also the vice chair of the Open Source MANO (OSM) project hosted by ETSI. Hi, good afternoon. My name is Katsuhiro Horiba from SoftBank. I'm director of the network research office at the research institute of SoftBank. Our team is working on the cloudification of the mobile core: we are lifting the 5G core on top of the public cloud, and we are also trying to figure out the 6G mobile core as part of our research. Thank you very much. Yeah, thank you.
So the first question: I would like to get your opinion on what you see as the current status of telco cloud infrastructure, or cloud-native infrastructure. Specifically, do you see convergence toward a single architecture, or would you say there is still quite a lot of fragmentation, with each operator choosing their own architecture? Maybe Katsu, starting with you. OK, yeah. A unified CaaS environment is ideal for telecom operators, but we are not there yet, because there are lots of requirements for the individual network functions, like being I/O-intensive or needing legacy protocol support, like SCTP or PFCP. So we have to adopt multiple CaaS platforms at this time. That is the reality today. Yeah, I also agree with that, and I think it has been highlighted a lot throughout the presentations in this session as well, both from Ericsson pointing out the view of the network function vendor, and I think Filip from F5 also pointed out some challenges there. So I also see that there is still some fragmentation. To put it a bit from an operator perspective: due to the different needs of the network functions, we often see that many of the network function vendors insist on bringing their own CaaS or Kubernetes platform, which is very tailored to their application. And why they do that is exactly what you said: there are specific needs on the networking side, looking at the technical side, and setting aside the other side, which is more the political side. That leads to fragmentation, because those platforms, those CaaS layers, usually do not support multi-vendor applications, from what we see. So that's a bit of the perspective from my side. I would say that, by design, the ecosystem is fragmented, because basically we moved from a very vertical integration model to something that is, I would say, much more horizontal.
If we just consider the different flavors right now, we have, I would say, the CaaS from the network vendors, the CaaS coming from the IT players, we have the hyperscalers, and we have also the open source ecosystem; the industrial-grade, cloud-native telco Sylva project is one example of this. At Orange, what we are trying to do to push away this complexity is to bet on an industrial model and operating model. Basically, our industrial model releases products that rely on that technology. It means that we have two layers of abstraction, but at the end of the day it's all about consistency toward our customers, because basically our job is to set up, build, and operate the core infrastructure that is used by our affiliates, by our countries. So we want to keep as much consistency toward our customers as we can, and to manage on our side, as much as we can, the complexity. Right now we are really investing in the industrialized, I would say, flavor of the Sylva project for the CaaS side. So I'd like to point out two aspects of the fragmentation of cloud native. From the ecosystem side, I think the diversity of the container environment is good and healthy, and very important. But from the standardization point of view, fragmentation is like a nightmare for everything. Lots of organizations try to realize cloud-native things, but for the consumers of our specifications, or in discussions on how to realize telco cloud native, there are so many choices and so many combinations that it's very difficult to proceed with a huge deployment scenario. So it's a very difficult question for us. Yeah, so it sounds like there are quite a few challenges in getting to a converged architecture, and we'll soon talk about how open source and standards can help. But before that: we said cloud native so many times today, but we didn't really stop for a moment to define it.
So I'm curious about what you see as the main concepts in cloud native. Is it just putting everything in Kubernetes? What else is there to becoming really cloud native? Maybe whoever wants to take it, or by order. Okay. In terms of the cloud-native philosophy: of course we are in the telecom network, so we are trying to provide a very large-scale and truly distributed cloud infrastructure for the mobile network system. And in terms of cloud native, we need to provide a much higher level of flexibility and much more efficient operations using container orchestration. At the same time, we are trying to realize full automation of deployment, operations, and maintenance. That is very important, because from the 5G era onward we need to provide a lot of customized networks to customers. There is diversity in the use cases and lots of things, and manual or human operation is not enough for such diverse networks. I think cloud native is one key to success in pushing toward that future of networking. If I may complement that: if we only focus on Kubernetes, we won't succeed, because if we have a cloud-native infrastructure without cloud-native services, what did we get at the end of the day? So for me, the topic of cloud native is much more holistic. Okay, it's about the runtime, but it's also about the way we want to design the network functions the cloud-native way: the microservices, the loose coupling, I would say the resiliency, the self-healing and auto-scaling, the closed-loop reconciliation, the drift management, all the principles that need to be implemented at the workload level if we want to have full-stack cloud native. Otherwise, at the end of the day, we will have an extraordinary, I would say, cloud-native infrastructure with nothing running on it. So I think we must think about the topic much more holistically.
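The closed-loop reconciliation and drift management mentioned above can be sketched in a few lines. This is a minimal illustration, not any product's implementation; the state keys (`replicas`, `image`) and values are hypothetical examples of declared intent versus observed runtime state.

```python
def reconcile(desired: dict, observed: dict) -> dict:
    """Return the fields that have drifted from the desired state.

    A real controller would run this in a loop and apply the
    resulting actions back to the platform; here we only compute them.
    """
    actions = {}
    for key, want in desired.items():
        if observed.get(key) != want:
            actions[key] = want  # drift detected: schedule a corrective write-back
    return actions

# One pass of the loop over hypothetical intent vs. drifted runtime state.
desired = {"replicas": 3, "image": "upf:1.2"}
observed = {"replicas": 2, "image": "upf:1.2"}
print(reconcile(desired, observed))  # {'replicas': 3}
```

The point is the shape of the loop: the operator declares intent, the system continuously compares it against reality and corrects the difference, rather than a human applying imperative changes.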
Yeah, I also agree. Looking at the runtime and the question of whether it's only Kubernetes, I think we see many examples of other things coming in as well. I was in the edge session earlier today, seeing how Docker containers are being provisioned and managed with Podman, as an example, right? People are talking about WebAssembly. There are a lot of things coming in. But I would also like to focus on the bigger aspects of cloud native, because I would say Kubernetes is a key enabling technology and definitely needed. But, and Ericsson pointed this out pretty well in their recommendation, focus on the people and the processes. Because in my view, cloud native really is about the people, the processes, and also the tools, of course, and how you move that culture, which is the biggest challenge, at least, I would say, for operators with a lot of legacy, but also for other industries with a lot of legacy, and even for the network function vendors. That is what we really need to move, in my view. And the ultimate goal of this cloud-native perspective is that we would like to reduce the operations cost and have agility for the business. Kubernetes provides us with something like intent-based operation for manipulating these kinds of services. I don't think Kubernetes is the only way to realize this, but to be honest, the best practice for this kind of service today is Kubernetes. Yeah, so we heard about the goal of adopting cloud-native technology, and we're at an open source conference, so the obvious question is: what do you think about open source technology, and is it applicable to telcos in their transition to cloud native? And naturally, how do you deal with things like SLAs, security, and reliability when you're using open source for telcos? What is your experience indicating? Is it applicable? Is it not? What have you learned? I can start.
So I think it's definitely a very, very good question. There are two things that I want to highlight here. The first one is that as we move into this horizontal model, what we are, I would say, month after month expecting, or observing, is that the network function vendors are transforming themselves into software vendors. And what happened 20 years back in IT is quite simple: when we had the disaggregation of the hardware, the orchestration, and the software, new roles and new jobs appeared in companies. It was the role of doing the integration, and of taking the accountability. It will happen exactly the same way at the operator level. And when you are facing, I would say, this situation, you have different, I would say, flavors for doing the implementation, and open source, definitely, is one. From our standpoint, what we are observing is that you cannot only choose open source; you need to be part of the ecosystem. Otherwise, you are absolutely not able to answer the challenges that you raised, Rani, about the SLAs, about the security, about all of this. So only consuming, without being, I would say, deeply involved in the ecosystem, I think, cannot be successful. And something that is perhaps interesting to understand as well is that, OK, you can rely on and bet on your own, I would say, teams, but there are also very interesting companies that are highly focused on open source and could really support you on the critical, I would say, SLA or security topics. So what we need to understand is that, like the answer given previously by the Ericsson team, which I love: it's not good or bad, it's different. And I truly think that this is the case for open source as well. Great hearing about open source contributions; that wasn't planned. Yeah, to comment a bit on open source: we are using it a lot, and I think it's extremely important.
The only question is whether you can just take it straight down and consume it as is. I would say yes, in a way, but there's always a but. There are quite a few things that need to be done, right? Putting in place networking the right way, the service meshes, and especially on security: have you put in place zero trust, orchestrating the security policies for the pods, right? Ingress firewalling and all of that. So there are quite a few things that need to be done. But in a way, we can take Kubernetes downstream and do that ourselves. I think a challenge for an operator like Telenor, operating across many markets in Northern Europe and also in Asia, is that in order to get there, we need the capabilities, the people, right? It becomes a kind of system integration, and that's something which is not always easy to get in place. That's why we often rely on vendors to get there. I think open source can provide the individual parts of the system, but we would like to have something like glue code to integrate them into a single system. I think Rani said that we need blueprints and test cases, or something like that. We'd like to share that kind of information; I think that is a way to leverage open source technologies and open source communities. In terms of reliability and the many aspects of SLAs, I believe open source is one key to success. Going back ten years, everyone in telecom said: we are going to deploy an OpenStack-based system for our network. And now many operators are running huge OpenStack environments. Now we are trying to realize container-based, cloud-native infrastructure for telco.
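The zero-trust baseline described above, denying all pod traffic by default before explicitly allowing what each CNF needs, is usually expressed as a Kubernetes NetworkPolicy. Below is a minimal sketch that builds such a manifest as a plain dictionary; the namespace name `cnf-core` is a hypothetical example, and a real deployment would apply this via the Kubernetes API or a GitOps pipeline rather than printing it.

```python
import json

def default_deny_policy(namespace: str) -> dict:
    """Build a Kubernetes NetworkPolicy manifest that blocks all ingress
    and egress for every pod in the namespace, a common zero-trust baseline."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": namespace},
        "spec": {
            "podSelector": {},  # empty selector: the policy applies to every pod
            "policyTypes": ["Ingress", "Egress"],  # no allow rules => deny all
        },
    }

print(json.dumps(default_deny_policy("cnf-core"), indent=2))
```

From this deny-all starting point, each network function then gets narrowly scoped allow rules, which is exactly the kind of per-pod policy orchestration the panelist mentions.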
Of course, the complexity is different, and the networking requirements are different from IT and enterprise use cases, but a lot of trials and a lot of development of the cloud infrastructure and of the applications themselves is very important for modernizing the telecom network system. In the past, each protocol between network nodes was developed by a dedicated team, often in-house, even for SS7 or SCTP, but now much of the communication between instances and network nodes can be realized using common frameworks. That is a much more efficient development style. So I think such modernized development and deployment styles will be very beneficial for operators. That's why I believe in open source. Yeah, that's actually a good segue to my next question, because you mentioned protocols, and I wanted to ask about standards. We talked about open source, but what is the role of standards, and how can they help operators adopt cloud-native infrastructure and CNFs faster? Okay, I'd like to speak to such issues from the standardization point of view, because I chair work on CNF specifications. Formal specification and standardization work was very difficult in the past; in 3GPP, they tried to standardize the protocols between the nodes, but orchestration and management interfaces are very sticky to the implementations. So we cannot survive without open source technologies, because Kubernetes is the major orchestration platform for containers, just as OpenStack is one of the major platforms supporting VMs. So, from the standardization point of view, the standardization bodies need to work with open source projects like Sylva, Nephio, and Anuket, and many other open source projects.
So we are trying to organize the discussions on how to promote implementations for the future, and infrastructures based on open source. I think that when we are talking about standards, the topic of open standards is perhaps something that I really love, because I would say it's the way the CSPs are working together to make this happen. Just two highlights. I don't know if you have read the cloud native manifesto proposed by the NGMN; we released it on September the 13th. It's basically a paper, from the CSP standpoint, about why our industry needs cloud-native telco. A second one that, honestly, for me is very, very good, is the white paper released around the CNCF CNF conformance test last Friday. A very, very good, I would say, white paper that gives an observation of the pain points that we are now observing in daily life, and brings action points. And something that is very interesting about this open standards move is that it is very well understood from the network vendor standpoint. In the discussions that we have, I would say on a weekly basis, with the vendors we are all working with, honestly, they are quite positive, because at the end of the day it reduces, I would say, what they are implementing in a vendor-specific way, and we all know that what is specific is what is killing us, and, I would say, the vendors as well. So yes, standards, but open standards are even more important for me. Yeah, I totally agree with you guys. I think reference code, working code alongside the standard, is very important. Previously we had the OpenStack Tacker project, which is something like a sample implementation of the NFV Orchestrator. I think that kind of relationship is very important, to show and to prove that a specification can be used in real operations. Yeah, good points made.
I think, first, on ETSI NFV: I think ETSI NFV has always been supportive of open source, and has also put in place, what should I say, modeling and a framework for open source to have its life inside ETSI NFV. I think that's a very good thing, but on the whole problem, we can look at it from three perspectives. One, you make a standard and then an open source project implements it accordingly, right? The second way is that an open source project goes first, and then you create a standard based on it. But I think the third way is that we need to collaborate, and I think we are doing that to some extent between standards and the open source community. But that is something we need to increase significantly to make this really fruitful and to speed up both the standardization side, right, and also to help the open source side and propel it forward. Yeah, I think this is something we're trying to do from both sides, both from the open source communities and the standards organizations, sometimes with bigger success, sometimes less, but we're striving to improve there. I want to switch gears with my next question, and it wouldn't be a panel without a question about AI, so let's throw that into the game. With all the things we've seen and learned in recent years with telco infrastructure and NFV, and the transition, with hardware-software disaggregation, with standards like ETSI, with open source: where does that put telcos in terms of running AI workloads? Do you think we are ready for that? Are there advantages? What are the challenges of running AI workloads on telco infrastructure? I don't know, maybe Pål, you want to start? Yeah, I can offer at least some reflections on this. So we have the GPUs and everything. If we want to do the hardware-software disaggregation in order to abstract, right, we need to put some abstraction in place to make that happen.
I think GPUs and other accelerators are really, really there to be super-performant and optimized for what they are meant to do. And then when we put an abstraction on top, there's usually some hit, some penalty on the performance. That is the challenge. We had that in virtualization with the NICs, right? We used SR-IOV and PCI passthrough, and there's a performance penalty that hits. So how to get around that when doing the disaggregation is a big question to me. But if we get there, and still maintain the performance, that's a huge achievement, because then we can get this interoperability in that space as well. So that's just a point on that. On the need for AI and ML and everything within telco, definitely there's a lot, and I think we saw some examples earlier today as well, in one of the lightning talks on use cases there. So I mean, that's really coming at full speed as we speak. Yeah, the ultimate goal of our activity: we have a dream to host, you know, both network functions and something like applications for the business, like edge computing, on top of a single infrastructure. That is the dream. But currently, the AI and machine learning environment is something like a very tightly coupled environment between the hardware and the software. And I know that those kinds of use cases are very extreme use cases in this era, but we would like to integrate them on top of a single infrastructure. I think this is the kind of physical infrastructure management activity that is something like a new activity in ETSI NFV at this time. And those activities can be applied to, you know, the acceleration hardware in the virtualized radio access network system. So we would like to abstract those kinds of acceleration cards on top of a single infrastructure. Perhaps to comment on my side on this: the first topic is that, no, AI won't happen if we don't get the data.
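The abstraction the panelists describe, exposing accelerator cards like GPUs or SR-IOV NICs to workloads without hard-wiring them to the hardware, typically surfaces in Kubernetes as an extended resource advertised by a device plugin. The sketch below builds a pod spec requesting one such resource; the resource name `vendor.example/sriov_vf` and the image name are hypothetical placeholders, since the real name depends on the device plugin deployed on the cluster.

```python
def pod_with_accelerator(name: str, image: str, resource: str) -> dict:
    """Sketch a pod spec requesting one unit of an extended resource,
    e.g. an SR-IOV virtual function or a GPU exposed by a device plugin."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {
                    # for extended resources, requests and limits must be equal
                    "requests": {resource: "1"},
                    "limits": {resource: "1"},
                },
            }],
        },
    }

spec = pod_with_accelerator("upf", "example/upf:1.2", "vendor.example/sriov_vf")
print(spec["spec"]["containers"][0]["resources"]["limits"])
```

The scheduler then places the pod only on nodes advertising that resource, which is the interoperability win; the open question the panel raises is how much performance such an abstraction layer costs compared with direct hardware access.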
So perhaps, from a standards or specification standpoint, if we can get a standard way to have exporters expose the data in, I would say, a very easy-to-integrate way, it would be a very good starting point to drive what Pål introduced. The second point about AI: when I arrived at KubeCon, I was curious about how much gen AI we would have during the week, and I did the math and found something like 14 or 15 different talks. I would be very curious to know what the number will be in the coming years, but what was very interesting is that the first use case of gen AI was about observability: using natural language, basically, to browse our data model, to have, I would say, a real-time understanding of what's happening. So definitely, I would say I'm quite excited to observe what will happen in this area, because I think it's certainly where we need the best answers to our challenges. I think AI and ML are a powerful tool for the operations and design phases. We have tons of configuration everywhere, and we need to verify it; at human scale, that's not good enough, and we need to automate such processing, such checking, using AI and ML technologies. And of course, we need to run a multi-vendor environment, from the application level to the infrastructure level. That means we need to set up a huge data system to process the huge logs and configuration data in one place. So that is the big challenge for AI and ML usage. Yeah, sounds exciting. I do have two more questions, but I wanted to give the audience a chance to maybe ask a question or two. So if anybody in the room would like to ask our panelists anything? We have a question there. Yeah, okay. So I have kind of a business-related question. I work for 56K.Cloud. We're a small system integrator that helps telcos adopt cloud native and software.
And one thing I've seen as fundamentally really difficult in this industry, and it's the same in the OT industry, is that it doesn't pay. If you look at the GSIs and SIs and their delivery of, say, their vendor or partner solutions, and I don't want to be name-dropping or anything, but a large part of the telco industry in Europe is dominated by two major telco vendors, that delivery is, to a large extent, actually where they get the money, where they get the revenue. And now, if this is going to get re-stacked on open source, one of the implications is that suddenly, you know, the telco could do it, or a large GSI. So if we want to get adoption going, I think we need to be working with the people who are writing the RFQs and RFPs, so it really gets written in there, in the RFQs and RFPs that people are writing: we need to see CNFs there, we need to see open source, not just standardization. Because usually, if you look at GSMA or 3GPP, the standard exists and then people develop toward it. Here we're talking about: let's develop a library, people gravitate toward it, it gets critical mass in the open source industry, and then, by the way, let's make a standard so we can have some governance. And in telco, it's the other way around. So, you know, I'd just love to get your opinions on how we address that. And also, how do we address the legacy system integrators, where licensing and all that complicated delivery and maintenance of, you know, the telco stack for their operators means they're not keen on cannibalizing that existing revenue by going open source? Who wants to address that? Yeah, I see, a good question. Back to the question of whether we are asking for open source: we definitely do. Whether it was OpenStack back in the day or Kubernetes now, that's something we ask for.
I know it's not easy, and we have been struggling with this for, what should I say, eight or however many years it has been since we started moving into cloud for the network. But I agree, it's a lot about how you put in place, what should I say, the service agreements, how you split the responsibility, and the operating model around it when it comes to integration. Because if you use a system integrator, the system integrator has a big, big role, but the other actors, including the operator, the, what should I say, cloud platform vendor, and also the application vendors in a multi-vendor setting, have specific roles in making it successful. And then we are back to what was discussed a bit earlier today, the blame game: who is responsible for what. So it's not easy. And even if you ask for open source, a bit back to what Philip from F5 discussed in his talk today, there are situations where we are maybe lacking something, and who is responsible for delivering it? So it is still challenging. And that's also a bit back to the first question, I would say, on the fragmentation of the cloud platform parts. Just to offer some reflections, but these are very good questions that we are dealing with every day. Anyone else want to answer? Because I see we have another question. Hi everybody, Rimma Iontel, I'm from Red Hat. My question is for all of you. When you have a CNF vendor who comes to you, and you have a platform, a Kubernetes platform, based on open source, and your vendor tells you: I can only run if my application is the only one running on your platform, and I need root privileges for everything, and by the way, I have a service mesh that's going to be incompatible with the one you're already running, so you need to uninstall it, et cetera. What do you do? How do you have that conversation with your vendor? What do you tell them? Do you accept it? Do you push back? I'm just curious. Thank you. Completely theoretical scenario, right? Never happened.
Yeah, in our case, we need to hold a big technical debate on how to integrate the application onto the platform, because the vendor has prerequisites, limitations, or conditions for everything, and we need to bring our requirements and their set together in one place. And yes, of course, you are from Red Hat, and some vendors expect that the infrastructure will also be Red Hat. That means the migration cost is huge, so we need to discuss it technically. The theoretical situation you depicted is really real, I guess. I think what you expressed is the reality of what we are facing today. At Orange, what we are trying to do is to anticipate the topic and to, I would say, proactively reach out to our partners and discuss with them the kind of new requirements we are expecting. And we are perfectly aware that it won't happen overnight, but if we don't proactively start to say we would prefer to work like this and like this, with these kinds of requirements, nothing will change. And at the end of the day, if, I would say, we are proactive and if, across the global ecosystem, it's a win-win relation, I think we can expect some progress. But unfortunately, the situation you depicted is really real. Thank you very much. I'm sorry, but I think we're a bit over time. I was overruled and told we could take one last question. One last question. We've got plenty of time. Thank you. Okay, here it goes. I'm really fascinated by the difference between standards and patterns, right? Growing up in the service provider world, everything is a standard, and standards just get more complex. And it gets really complicated when you're doing standards at the leading edge of something, because you're often guessing about where things are going to go.
Patterns, which is an interesting concept that comes up a lot in Kubernetes, along with anti-patterns, are more about general approaches, about having things like manifestos that say: these are the set of principles we believe in. I'm wondering what you think: are we capable of somehow reinforcing patterns without everything having to become a standard? That's a good question. Back to patterns and also principles: what we do for the cloud platform, at least, is that we have a set of strategic principles that we follow. One of them is, in fact, not standards-based but de facto standards-based, because we realized that if there is a good standard, say for something which needs specific interfacing to business-critical systems that we have, we are very strict about following it. But in some other areas, specifically when it comes to Kubernetes or the cloud layer, there are quite a few areas where we don't have concrete standards, where we follow the open source, and that involves, in my view, following the patterns in the open source as well. So it's a very good question, and I'm more along that line. But not for everything, because we don't have open source everywhere, even though I would like to have open source in more places. If I can just complement with one point, a very good question: I think it depends on the validation playbook on which we can rely. I love the concept of patterns and anti-patterns, but at the end of the day, against what are we doing the evaluation? If we have a consistent enough, I would say, playbook of tests, fine. But we need to have that validation to know that the service is delivering what we expect, as expressed in the pattern. So for me, it heavily depends on the ways and means to do the testing and the validation. All right, thank you very much. Thank you to our moderator, thank you to our panel. Yes.