Welcome to a new episode of the ITU Journal webinar series, where you can find insights and forward-looking research on future and evolving technologies. The ITU Journal is an international journal providing complete coverage of all communications and networking paradigms, free of charge for both readers and authors. This publication considers yet-to-be-published papers addressing fundamental and applied research, building bridges between disciplines, connecting theory with application, and stimulating international dialogue. Its interdisciplinary approach reflects ITU's comprehensive field of interest and explores the convergence of ICT with other disciplines. We count on your support to make this webinar an interesting experience. Please submit your questions via the Q&A channel at the bottom of your screen. All questions from the audience will be taken during the Q&A session after the talk. The meeting is being recorded and the recording will be made available on the webinar website. Closed captioning is also available for this event; you can enable it by clicking on the closed caption icon at the bottom of your screen. We hope that you will enjoy the talk, and we encourage you to stay connected until the end for the Wisdom Corner live life lessons. I will now give the floor to our master of ceremonies. Hello, and welcome to the new webinar series with CTOs of the ITU Journal on Future and Evolving Technologies. My name is Alessia Magliarditi from ITU, the International Telecommunication Union, the United Nations specialized agency for information and communication technologies. It is my pleasure to open today's webinar with Dr. Alex Jinsong Choi, Chairman of the O-RAN Alliance, from Germany. After the Q&A session, as just announced by our avatar, I will moderate the Wisdom Corner live life lessons, so please stay online. Dr. Choi agreed to a personal chat, so he will share with us some lessons learned over the years that might perhaps be useful for some of you.
It is now my honor to give the floor to Mr. Seizo Onoe, Director of the ITU Telecommunication Standardization Bureau, for his welcome remarks. Onoe-san, the floor is yours. Thank you. Thank you, colleagues and friends. It is my great pleasure to welcome you all to this second talk in our new series of ITU webinars with CTOs. Let me begin by thanking our great guest speaker, Alex Jinsong Choi. He is my old friend; I first met him when he worked for LG Electronics, and he later became CTO at SK Telecom and then CTO at Deutsche Telekom. Now he is a Senior Vice President at Deutsche Telekom and Chairman of the O-RAN Alliance. ITU highly appreciates your support. The talks in this new webinar series share CTO insights on industry ambitions for 5G, 6G, and associated innovations to boost network intelligence. Our talk from NTT DOCOMO earlier this month took us through the company's research and development to advance 5G, as well as prospects for 6G to drive new improvements to our quality of life. I encourage you to review that talk online. Recordings of all ITU webinars and workshops are available on our website, in an archive very rich in expert-oriented content. Today, Dr. Choi, Alex Choi-san, will speak from his experience at Deutsche Telekom and the O-RAN Alliance to detail how machine learning contributes to the intelligent control of 5G radio access networks and how this intelligence and automation could evolve for 6G. These talks from CTOs are presented by the ITU Journal on Future and Evolving Technologies. Our journal embodies ITU's commitment to the public interest. It is unique in publishing papers from world-renowned researchers at no charge to authors and readers. Our journal welcomes research on all topics all year round, and I have no doubt that this CTO webinar series will inspire yet more contributions. These talks will highlight the growing synergy between academia and industry in developing and applying new technologies.
Academia and industry stimulate one another's work as partners in research and development, as well as in sandbox initiatives to prove the market viability of new solutions. These CTO talks are certain to uncover new opportunities to expand this collaboration, and that is exactly why we have arranged them. I would also like to encourage you to take advantage of the ITU platform. In addition to regular issues, our journal is currently welcoming contributions to five special issues: on AI for accessibility, the metaverse, intelligent technologies for future networks and distributed systems, satellite constellations and connectivity from space, and next-generation computer communications and networks. ITU Academia membership is another key avenue for academia to engage in ITU's work. Academia and industry are reinforcing their growing collaboration by working together in ITU expert groups responsible for radiocommunication, standardization, and development. Contributions from academia bring greater strength to the work of ITU and greater impact to research, to the mutual benefit of academia and industry. We also supplement our membership-driven work with open frameworks such as ITU focus groups and initiatives like AI for Good, the Digital Currency Global Initiative, and the United for Smart Sustainable Cities initiative. These frameworks, as well as our open workshops, aim to give everyone the opportunity to influence our work. I welcome you to join us. Thank you. Thank you very much, Onoe-san, for your welcome remarks. Now I am very pleased to give the floor to Professor Ian Akyildiz, Editor-in-Chief of the ITU Journal on Future and Evolving Technologies. Ian, the floor is yours for your opening remarks. Thank you. Thank you, Alessia. Good morning, good afternoon, and good evening worldwide. I welcome you all to the third season of our ITU Journal on Future and Evolving Technologies webinar series.
The objective of our journal has been to bring the academic and industrial worlds together in order to bridge the gap between academia and industry. Our journal's ideas were incubated back in December 2019, and the inaugural issue came out in December 2020. It is an open-access journal: no fees for the readers, no fees for the authors. The papers go through a review process, and we try to cover all forefront research activities in the world, both in academia and industry. I encourage you all to submit your papers, and if you have ideas for special issues, please do not hesitate to contact us. The first two seasons of our webinar series were dedicated to leading academicians worldwide, whereas this season has industrial leaders present their visions. Let me express my sincere thanks to today's speaker, Alex Jinsong Choi of Deutsche Telekom, for accepting our invitation and giving this webinar. I wish you an enjoyable and productive time. Alessia. Thank you so much, Ian, for your welcome remarks. And now I am very pleased to give the floor to Dr. Bilel Jamoussi, Chief of the ITU Study Groups Department, who will briefly introduce our speaker and will moderate the Q&A session. Bilel, the floor is yours. Thank you very much, Alessia, and a very warm welcome to Dr. Alex Jinsong Choi, who is Chairman of the O-RAN Alliance and Senior Vice President, Group Technology, and Head of T-Labs, the central research and development division of Deutsche Telekom. In addition, he is a member of the Technology and Innovation Management Board, where he has responsibility for several strategic topics. Dr. Choi has been a thought leader for over 20 years in the mobile telecommunication industry and consumer electronics, driving forward key strategic and research topics in telco and AI. Dr. Choi was the first Chairman of the Telecom Infra Project, TIP, and previously served as CTO of SK Telecom in South Korea.
With the introduction of NUGU, the first AI-based virtual assistant in Korea, Dr. Choi was influential in the development of AI solutions. Dr. Choi, the floor is yours. And for our participants, we will use the Q&A to queue your questions as we go. Thank you very much. Over to you. Thank you, Bilel, for your kind introduction. Let me try to share my screen. Can you see the screen? Yes, I can see your slide. Okay, wonderful, wonderful. First of all, thank you, Onoe-san, for your kind invitation. I would also like to thank Professor Ian and Alessia. So, hello, everyone. It is really a great pleasure to join you today and speak about the exciting future of wireless communication technologies and the transformative potential of AI-native 6G networks. Today I am going to talk about AI and machine learning, and the RAN Intelligent Controller, or RIC, for 5G and 6G. In the O-RAN Alliance, which I am chairing now, we have been promoting the introduction of open architecture in mobile networks, particularly in the radio access network. In the O-RAN Alliance, the RAN Intelligent Controller, or RIC, has been developed for RAN automation and optimization with cutting-edge AI/ML technologies. Furthermore, an ecosystem around the RIC applications, known as xApps and rApps, is also being developed, not only by incumbents but also by startups and academia. So in this webinar, I will talk about how the AI/ML-based RAN Intelligent Controller will support the key requirements of 6G. My first slide is all about AI-native 6G and the role of AI/ML. The terminology "AI native" here means that artificial intelligence is integrated into the core functionalities of a system from the beginning of its life cycle, meaning design, development, operationalization, and maintenance.
This groundbreaking approach will shape the future of wireless networks, as AI-native 6G networks will leverage AI technologies such as machine learning, deep learning, and now even ChatGPT for the design, deployment, and management that I just described. So what is AI-native 6G, and what are its key requirements, shown in this slide? First, wireless AI/ML: AI- and data-driven communication and network design will enable joint training, model sharing, and distributed inference across networks and devices. Second, scalable network architectures: disaggregation and virtualization from cloud to edge, along with the use of advanced relay, mesh, and even 3D topologies if you include satellites, will address the growing demand for wireless communications. Third, new radio designs: new waveforms and new coding for centimeter-wave and sub-terahertz bands, intelligent surfaces, joint communication and sensing, large-scale distributed MIMO, more advanced duplexing like full duplexing, and energy-efficient RF will revolutionize the way we transmit and receive data. Next is communication resiliency: end-to-end configurable security, post-quantum security, zero trust, and robust networks tolerant to failures and attacks will ensure the reliability and safety of wireless communication. Next is coordinated spectrum sharing: a new paradigm for more dynamic use of spectrum will leverage location information and environmental awareness for dynamic and adaptive coordination. Lastly, the merging of different worlds: the integration of the physical, digital, and virtual worlds will be facilitated by ubiquitous low-power sensing and monitoring, as well as immersive interactions that take human augmentation to the next level. The next slide is the AI-native maturity model. It is important to have a common maturity-level framework in this context.
We have already seen something very similar in other industries, like the autonomous driving industry, so we can think of something similar for our telco industry. Ericsson has introduced a multi-dimensional AI-native maturity model, as shown on the right-hand side of this chart. It consists of a matrix of maturity levels, each representing a step forward, from non-AI-native at level zero all the way to highly autonomous at level five. The left column of the table shows different dimensions, such as architecture; collaboration between AI agents; data ingestion, storage, and processing; AI/ML model lifecycle management; security; and, last but not least, self-X capabilities. For example, if you look at the bottom part of the table, you see the difference between the AI-nativeness levels in terms of self-X capabilities. To be specific: level one, the bottom layer, has self-awareness capabilities. Level two has self-diagnosis, self-configuring, and self-monitoring capabilities. Level three has self-healing capabilities, which is quite advanced. Level four has self-augmenting capabilities, which is nice. And lastly, level five has self-designing and AI-driven-AI capabilities. So if you ask me from which level we can actually call a network AI native, my answer is that we can call it AI native from level three. Each of these dimensions evolves across these AI maturity levels, allowing us to have a comprehensive, nuanced understanding of the AI-native network journey. This AI maturity model serves two purposes. First, it provides a baseline assessment of where a telco network currently stands in its AI journey. Second, it shows us the target milestones for the next levels across the various dimensions. However, it cannot be one-size-fits-all.
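To make the self-X ladder concrete, here is a minimal sketch in Python of the maturity levels as described in the talk. The level names and capability strings follow the speaker's description of the Ericsson model, not Ericsson's actual specification, and the helper functions are hypothetical illustrations.

```python
# Illustrative sketch (capability names taken from the talk's description,
# not from Ericsson's actual model): self-X capabilities per AI-nativeness
# level, plus the speaker's rule of thumb that "AI native" starts at level 3.
SELF_X_BY_LEVEL = {
    0: [],                                                    # non-AI-native
    1: ["self-awareness"],
    2: ["self-diagnosis", "self-configuring", "self-monitoring"],
    3: ["self-healing"],
    4: ["self-augmenting"],
    5: ["self-designing", "AI-driven AI"],
}

def capabilities_up_to(level: int) -> list[str]:
    """Capabilities accumulate: a level-3 network also has those of levels 1-2."""
    return [c for l in range(level + 1) for c in SELF_X_BY_LEVEL[l]]

def is_ai_native(level: int) -> bool:
    """Per the speaker, a network counts as AI native from level 3 upward."""
    return level >= 3
```

A level-3 network would thus report self-awareness through self-healing, while only levels 3 and above pass the `is_ai_native` check.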
So the target level for implementation is decided based on individual business needs. What works for one telco may not work for other telcos, but the model itself is very flexible and designed to guide us toward what is best for our unique business circumstances. Next is the AI/ML-driven 6G intelligent communication ecosystem. As I said, AI/ML will be seamlessly integrated into all network layers and devices. This integration will encompass the entire 6G architecture, including every protocol layer and also air interface resources such as radios. For example, the table on the right-hand side shows four layers, from the physical and data link layers up to the network and application layers, and it shows which AI/ML use cases make sense in each layer. For the network layer in the middle, for example, there are energy optimization, error recovery, and scheduling AI/ML use cases, just as a reference. The use cases can further include other virtual functions, network slicing, edge cloud, and network orchestration. AI/ML will be essential in optimizing all these resources and protocols. This holistic approach will pave the way for unprecedented levels of efficiency, adaptability, and performance in every layer of telecommunication systems. Next: factors to consider in integrating AI/ML into your network. To fully leverage the benefits of AI, there are certain network environmental capabilities that must be considered. One is network interoperability. When we talk about interoperability, we are talking about the ability of different systems or components to work together seamlessly. The better the level of this interoperability, the easier it is to integrate AI into the system. To give you an example, consider the introduction of multi-agent collaboration of distributed AI into network management systems.
Distributed AI can take data from various points in the network, analyze it, and make decisions. These distributed AI agents don't just analyze the data; they can also work with the network managers at every level, enhancing efficiency, solving problems, and optimizing performance. As we move forward, it is important to remember that implementing AI is not a trivial task, nor is it a standalone task. Next: the intelligent and virtualized Open RAN. The RAN Intelligent Controller, or RIC, is a vital component in the implementation of the AI-native 6G ecosystem. Its primary goal is to provide a standardized framework for both non-real-time and near-real-time RAN control and optimization with AI/ML that can be used across different vendors' equipment and various network deployment scenarios. By offering a programmable and modular interface, the RIC allows for a variety of AI/ML applications, known as xApps and rApps. These modules, which have AI/ML capabilities along with the flexibility to be used in different ways, can redefine how we innovate the resource management features in our radio access networks. Lastly, securing a large and reliable set of data is absolutely necessary for AI model training and for integration into the RIC platform. That brings us to this slide on the architecture of the RAN Intelligent Controller. The RIC, as I said, is like an intelligent manager for radio resources in the network that connects your mobile device to your service provider. Its job is to make sure these resources are used efficiently. What's good about it is that it can work with network equipment from different vendors, thanks to the standardized open interfaces defined by the O-RAN Alliance, like the A1 and E2 interfaces shown in this big diagram. And it can be used in various kinds of network deployment scenarios.
The RIC is basically comprised of software tools that sit between different parts of the network, connect the RIC to those other parts, and handle different functions. It has a flexible and programmable interface that lets different xApps and rApps access and manage the resources. Next: the RIC as a central innovation enabler, unlocking the RAN's potential with AI/ML for both 5G and 6G. Once again, the RIC lets AI model, xApp, and rApp developers provide a specific AI/ML model for a specific use case. For example, it can help us distribute radio resources, manage transitions between different cells, deal with interference, and balance the traffic workload between different cells. Managing 5G and 6G networks can be really complicated, because we are bringing in a lot of network points, new radio frequencies, and multiple types of radio technologies, including massive MIMO, along with diverse network deployment scenarios. The RAN Intelligent Controller is going to be extremely helpful at handling all of this complexity. Getting into more detail about the RIC, there are two different RICs: one is the non-real-time RIC, and the other is the near-real-time RIC. While the near-real-time RIC focuses on time-critical and latency-sensitive operations below one second, the non-real-time RIC handles non-real-time tasks that can tolerate higher latency, here meaning greater than one second. This separation allows for scalability and flexibility in the O-RAN architecture. Once again, the near-real-time RIC interacts with RAN elements through the standardized interfaces known as E2 interfaces; it gathers information and makes decisions based on network conditions, traffic patterns, and even user demand. And it can transmit commands to the RAN equipment so that they can actually execute those commands. All right, the next slide is about AI/ML model training and learning approaches for the non-real-time RIC.
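The non-real-time versus near-real-time split described above can be sketched as a simple dispatch rule. This is a hypothetical illustration of the timescale boundaries mentioned in the talk (10 milliseconds and one second), not an O-RAN API; the function and return strings are invented for clarity.

```python
# Illustrative sketch (hypothetical names, not O-RAN software): route a RAN
# control task to a control loop based on the latency it can tolerate,
# using the timescale thresholds described in the talk.
NEAR_RT_WINDOW = (0.010, 1.0)   # near-real-time RIC operates between 10 ms and 1 s

def assign_ric(latency_budget_s: float) -> str:
    """Pick the control loop for a task given its latency budget in seconds."""
    if latency_budget_s < NEAR_RT_WINDOW[0]:
        # Sub-10 ms loops (e.g. the MAC scheduler) stay inside the RAN
        # itself, below both RICs.
        return "RAN-internal (real-time)"
    if latency_budget_s < NEAR_RT_WINDOW[1]:
        return "near-RT RIC (xApp, via E2)"
    return "non-RT RIC (rApp, via A1 policy)"
```

For example, a 50-millisecond handover decision would land in the near-real-time RIC, while hour-scale policy optimization would go to the non-real-time RIC.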
Here I am introducing deep learning as a specific example. Deep learning is also known as neural networks. It is a type of AI that is very powerful for many reasons. First, it has great feature-extraction capabilities. Deep learning is like an automatic detective: it can pick out important information from data automatically, which means you don't have to spend so much time and effort doing it manually. Second, deep learning can use lots of data: unlike traditional tools, deep learning performs much better with more data. This is great, because it means we can make the most of the vast amount of data available in your mobile network. Third, learning without labels: deep learning can work well with data that is not labeled; we can let it learn on its own. This is crucial when dealing with lots of unlabeled data, and here deep learning can do a much, much better job than other machine learning models. And lastly, multitask learning: the representations that a deep neural network learns can also be applied to different tasks. This means less computing power and memory will be needed when the system has to do multiple tasks at once. Next is a deep dive into the near-real-time RIC. As I said previously, the near-real-time RIC works in near real time, operating on timescales between 10 milliseconds and one second. Its work is based on measurements from individual RAN equipment, like the radio unit (RU), distributed unit (DU), and centralized unit (CU). It communicates with other parts of the system through the A1 and E2 interfaces; this is how it gets its policies from the non-real-time RIC, through the A1 interface. That policy will be used to make decisions in the near-real-time RIC, and it manages network equipment in a very specific area, again through the E2 interfaces. It also has functionalities like a database.
I mean a RAN database, a Radio Access Network database, where it stores important information about different parts of the radio access network, like nodes, cells, bearers, flows, devices, and so on, all related to each other. It is also capable of keeping logs, tracing operations, and collecting measurements. And it can communicate with the upper-layer orchestration and management system, known as the SMO. Next: xApps and rApps. rApps are for the non-real-time RIC, whereas xApps are for the near-real-time RIC. rApps in the non-real-time RIC framework make rules that can influence how certain things function. Think of it like an analogy: an rApp is like a backseat driver. A backseat driver is not directly controlling the car, but can provide directions. Whereas an xApp, which is a near-real-time AI/ML model, can actually perform direct control functions on the network elements; it is the actual driver sitting in the car. These xApps and rApps work at different speeds: once again, xApps work between 10 milliseconds and one second. The non-real-time AI/ML model, the rApp, does the following tasks: it gives policy-based advice to the near-real-time RIC via the A1 interface, it analyzes data, it trains AI/ML models, it makes inferences for network optimization, and it also provides recommendations on configurations. The next slide shows some interesting survey results. This is an operator survey, and it shows the operators' priorities. On the right-hand side there is a chart: these are the operators' priorities for the near-real-time use cases. If I look at the top three operator priorities, they are video optimization, quality-of-service-based radio resource optimization, and dynamic spectrum sharing.
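The backseat-driver versus driver analogy can be sketched in code: an rApp publishes guidance as a policy, and an xApp enforces it with direct control. The field names and command strings below are hypothetical, chosen for readability; they are not the actual O-RAN A1 policy schema or E2 message format.

```python
# Illustrative sketch (hypothetical schema, not the O-RAN A1 specification):
# an rApp issues a high-level A1-style policy ("backseat driver"), and an
# xApp in the near-real-time RIC enforces it per cell ("actual driver").
a1_policy = {
    "policy_id": "qos-video-01",
    "scope": {"cell_ids": ["cell-17", "cell-42"]},
    "statement": {"max_latency_ms": 100, "traffic_class": "video"},
}

def xapp_enforce(policy: dict, cell_latency_ms: dict) -> list:
    """Issue E2-style commands for cells currently violating the policy."""
    limit = policy["statement"]["max_latency_ms"]
    return [
        f"E2: reprioritize {policy['statement']['traffic_class']} on {cell}"
        for cell in policy["scope"]["cell_ids"]
        if cell_latency_ms.get(cell, 0) > limit
    ]

# Only cell-17 exceeds the 100 ms target, so only it gets a command.
commands = xapp_enforce(a1_policy, {"cell-17": 140.0, "cell-42": 60.0})
```

The design point of the analogy survives the sketch: the rApp never touches the cells directly; it only shapes what the xApp does.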
About 80% of the operators who participated in this survey picked these top three use cases as their priority. If you look at them, all three top use cases are about making sure a high-quality, best experience is delivered for the killer applications, especially video. According to market research, video today comprises more than 70% of all mobile traffic. On the other hand, there are use cases that didn't rank as high, but that doesn't mean they are not important. In my opinion, as the O-RAN market and the 5G market grow further, and as O-RAN technology matures, the other use cases shown here, like MIMO optimization and traffic balancing, will also become very relevant to operator priorities. So what is the business rationale behind these top three RIC use cases? The reasons why telcos want to use the RIC for the top three use cases are as follows. First, when it comes to video optimization, about half of the businesses see it as a way to provide the best experience for their customers, and the other half see it as a way to earn or create a new revenue stream. Second, when it comes to quality-of-service-based network services, nearly all operators' answer is, once again, to provide the best customer experience. Whereas the dynamic spectrum sharing use case is mostly about savings, I mean TCO or operating cost savings. Okay. So far I have only talked about the bright side of the RIC, but there are challenges for a RIC- and AI/ML-based Open RAN. Despite all the immense positive potential of AI/ML, we must also recognize and address the challenges that lie ahead. First, in the area of AI-based RAN management, the challenges are shown on the right-hand side.
First, developing a reliable conflict mitigation function for the RIC, to resolve overlapping or conflicting requests from different AI/ML models, or xApps and rApps. The second challenge is managing the complexity of maintaining different inference models for thousands of mobile cells in near real time. The third challenge is creating flexible AI models that can dynamically adapt to different metadata for SLAs and KPIs from various operators and vendors. In the area of security, the challenges are as follows. First, how to protect against malicious activities that may arise from continuous and seamless updates of models. Second, addressing deployment-specific security challenges attributed to virtualization and software-defined networking approaches, such as authentication and authorization issues. Lastly, in the area of energy efficiency, the challenge is ensuring that O-RAN-compliant software achieves similar or better results compared to traditional solutions. Next slide: the 3GPP RAN working groups and their AI/ML focus. As you know, 3GPP has been the primary standardization body since 3G, through 4G, and now 5G and 5G-Advanced, and most likely it will continue this job for the next generation, which is 6G. So what are they doing in this context? The 3GPP RAN working groups have already identified three pilot projects for AI/ML applications. Number one is channel state information; I don't want to go into too much detail in the interest of time. Number two is beam management, for example for massive MIMO. And lastly, positioning. So they have identified three items that they will explore, to come up with how AI/ML can best support these three identified use cases.
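To make the first challenge, conflict mitigation, more tangible, here is a minimal sketch of one possible resolution strategy: when two xApps request conflicting values for the same parameter on the same cell, the higher-priority request wins. This priority-based rule is a hypothetical design of my own for illustration; the O-RAN specifications leave conflict mitigation largely open.

```python
# Illustrative sketch (hypothetical design, not an O-RAN specification):
# priority-based conflict mitigation in the RIC. Each request is a dict
# with keys "xapp", "cell", "param", "value", and "priority".
def mitigate_conflicts(requests: list) -> dict:
    """Keep, per (cell, parameter) pair, only the highest-priority request."""
    winners = {}
    for req in requests:
        key = (req["cell"], req["param"])
        if key not in winners or req["priority"] > winners[key]["priority"]:
            winners[key] = req
    return winners

# Two xApps disagree on the transmit power of cell c1.
requests = [
    {"xapp": "traffic-steering", "cell": "c1", "param": "tx_power",
     "value": 40, "priority": 2},
    {"xapp": "energy-saving", "cell": "c1", "param": "tx_power",
     "value": 20, "priority": 1},
]
resolved = mitigate_conflicts(requests)
```

Real deployments would need something far richer, e.g. detecting indirect conflicts where two parameters interact, which is exactly why the talk lists this as an open challenge.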
The goal in 3GPP is to evaluate the possibility of an AI-native layer one, that is, the physical layer, to enable the deployment and interoperation of AI/ML-based technologies in 3GPP specifications. Continuing with what 3GPP is pursuing: by utilizing AI- and data-driven configurations, we can achieve end-to-end optimization across the entire mobile network. Dynamic parameter adaptation powered by machine learning algorithms allows for real-time adjustment to meet the ever-changing demands of the wireless environment. Neural-network system design can customize and tailor network performance to a specific wireless environment, ensuring optimal performance under various conditions. Overall, with an AI-native physical layer or air interface design, we can enable continuous system improvement in between two major 3GPP releases. This is a very interesting notion. It means that once a certain release is implemented in your system, with strong support from the AI/ML engine inside, that implementation will further evolve automatically, without waiting for the next release to come out from 3GPP. This is a new notion of standardization strategy. Now, switching gears a little bit, let's talk about ChatGPT. Here I picked this use case because it seems to be the most suitable one when we talk about the potential use cases of ChatGPT in mobile networks: intent-driven closed-loop AI network management. It is a busy slide, sorry about that; I borrowed it from an Ericsson article. Let's dive deep into this specific use case, which is network automation, once again, intent-driven closed-loop network automation. The diagram here illustrates how it works. On the left-hand side, it includes a process of collecting various types of network status information and comparing it with the operator's network management policies.
If there is any discrepancy, the AI agent identifies the problem's cause, determines the solution, and fixes the discrepancy by executing a series of commands. This closed-loop system typically operates automatically, without human intervention. As shown on the right-hand side, tasks like network control and adjusting priorities in ongoing network sessions can also be performed using natural language. Here is where ChatGPT comes into play. In the context of intent-driven networking, GPT, or ChatGPT, can be used as a component within the system which acts as a translator of high-level intents into machine-readable and machine-actionable network configuration commands and policies. Let me give you an example of how it can help in this translation. Let's say one intent description reads, as an example: "Ensure low-latency communication between the finance department and the data center to support real-time data analysis." This is a high-level, natural-language-based intent given to the system. Then ChatGPT reads it and translates it into a machine-readable, machine-executable statement. In this case, for example, it can translate it to something like: "Create a quality-of-service policy that prioritizes traffic between the IP address range of the finance department and the data center, with a target latency below 100 milliseconds." This translated statement is machine-readable and machine-understandable. Although ChatGPT can provide a more technical translation of the intent, it is essential to have a specialized intent-driven networking system in place to convert the translation into machine-readable, SDN-based actual configurations, commands, and policies. One more ChatGPT use case: ChatGPT-generated software for network automation. Let's take a use case demonstrating how ChatGPT's programming capabilities can be used for network automation.
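The translation step described above can be sketched as a small pipeline: an LLM turns the natural-language intent into a structured policy, which a downstream intent-driven networking system would then compile into actual SDN configuration. The prompt, policy schema, and mocked model answer below are all hypothetical; a real system would call an actual LLM API and validate its output before acting on it.

```python
# Illustrative sketch (hypothetical prompt and schema): an LLM as the
# intent-to-policy translator in an intent-driven networking loop. The
# model call is mocked so the sketch is self-contained; the mocked answer
# mirrors the QoS example from the talk.
import json

PROMPT = ("Translate this networking intent into a JSON QoS policy with "
          "fields src, dst, and max_latency_ms:\n{intent}")

def translate_intent(intent: str, llm=None) -> dict:
    """Turn a natural-language intent into a machine-readable policy dict."""
    if llm is None:
        # Mocked LLM response; a real deployment would call the model here.
        answer = json.dumps({"src": "finance-dept-ip-range",
                             "dst": "data-center-ip-range",
                             "max_latency_ms": 100})
    else:
        answer = llm(PROMPT.format(intent=intent))
    # The downstream SDN system consumes this dict, not the raw model text.
    return json.loads(answer)

policy = translate_intent("Ensure low latency communication between the "
                          "finance department and the data center")
```

Note that, as the talk stresses, this translation is only half the job: a separate intent-driven networking system still has to turn the policy dict into concrete device configuration.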
You know, ChatGPT has programming capabilities; as of today they are not perfect, but they are good enough. It can greatly assist human engineers in boosting their productivity in tasks like automatically generating scripts for software, as well as for AI, especially machine learning models. This automation can be achieved using ChatGPT's programming capabilities. Furthermore, it could open up future possibilities for developing functionalities such as on-the-fly testing and even on-the-fly simulation code generation, which is today an extremely challenging task. We have tried to adapt ChatGPT to this: there is partial promise, but it is not proven yet, so it is still too early to claim any concrete evidence. That brings me to almost the final slide: is ChatGPT a blessing or a curse for mankind? I have to mention this. There have been warnings raised regarding the potential risks associated with ChatGPT technology for humankind. These concerns highlight the need for careful consideration and responsible deployment of AI like ChatGPT. One of the primary concerns is the issue of bias, misinformation, and also hallucination. ChatGPT learns from vast amounts of data, including online sources, which can contain toxic, biased, or even inaccurate information. If this issue is not properly addressed, it could lead to the perpetuation or amplification of existing biases and the spread of fake or false information. Safeguarding user privacy and ensuring data security are also crucial aspects. Finally, my last slide: takeaways. Takeaway number one: AI/ML will be the key technology vector for truly AI-native 6G and its intelligent communication ecosystem, enabling seamless and immersive integration across all aspects of our connected lives.
Second, native AI at all network layers, protocols, computational layers, and devices will ensure that AI/ML is integrated throughout the entire mobile architecture, 5G and 6G, driving efficiency, performance, and adaptability like never before. Third, a virtualized Open RAN with the RIC will be the essential starting point and component for realizing the AI-native vision. Next, the RAN RIC allows for flexible vendor choices and architecture options, promoting an open and competitive market that encourages innovation and collaboration. Next, the xApp and rApp ecosystem will provide continuous and rapid implementation of AI-driven features and innovations for operators. Lastly, while several AI/ML challenges must be considered and resolved for Open RAN and the RIC, AI-powered network automation and intelligence will continue to mature and develop, paving the way for truly intelligent and adaptive future wireless networks such as 6G. As we look forward to the emergence of 6G and the widespread adoption of AI/ML in our communication systems, it is crucial that we continue to evolve, innovate, collaborate, and push the boundaries of what is possible. Thank you for your attention.

Thank you very much, Dr. Choi. This is excellent. Many questions have come in as you were presenting, so we will try to address some of them. Perhaps I will start with the first one, on the state of open data sets to enable AI/ML in current mobile networks.

Yeah, it is a very good question. Historically, there were previous-generation solutions known as SON, self-organizing networks, and they faced the same issue: how to access and collect live data from live networks. You have to negotiate with your business partners, in this case the operators, and obtain their consent to access such live data, because network operational data is extremely confidential.
But anyhow, operators are often willing to share that data, because it serves their own operational excellence. The other possibility is to develop very realistic simulators and use them to generate artificial data sets that can be used to train your models. That is also possible, and in that case you avoid the restrictions that come with the sensitivity of live network data access.

Okay, thank you. Related to data sets, there is another question on the privacy aspects: when personal data is used in the RIC, how can we deal with that and ensure the privacy and protection of people's data?

Again, there are many techniques. You can anonymize the data, and anonymization is probably the most straightforward solution. And of course, all data collection, storage, and access has to comply with regulations such as the GDPR. As long as you comply with the GDPR, you will most likely be safe.

Very good, thank you. A follow-up question on slide 18, on AI-native air interface design: how would we handle the impact on, and updates to, the UE implementations, and what would be the role of standards bodies in the context of self-learning between 3GPP releases?

Yeah. The idea is simple. You develop and integrate what I call an AI engine, and you ingest all this live operational data into it. The AI engine then uses that live data to further train the existing models, so the overall system performance and optimization can evolve and improve. By doing so, why would you wait for the next release? It is twofold. One part is new features. So when it comes to the new features in the next release:
Yes, you have to wait for that and then implement it once the release is available. Honestly, with today's 3GPP releases, Release 17 or Release 18, I think the feature set is good enough. Of course there is still some backlog, especially new features for vertical industries and vertical markets, but once that is done, I am not concerned about the feature set of the existing releases. The remaining issues will be optimization and performance: how to maximize all kinds of efficiencies, such as operational efficiency, energy efficiency, and spectral efficiency. When it comes to those efficiency metrics, this additional, continuous training of the models can fulfill those tasks. What I mean is that your AI/ML engine and models will continuously evolve as long as you continuously feed them live data.

Okay, thank you. Perhaps related to the AI models, Marco Carugi asks about flexibility, and whether there are any guidelines or a common understanding on how to choose among machine learning models and approaches based on the application requirements.

No, there is no one-size-fits-all solution. The good news is that there are so many different machine learning models available to you, from the traditional, very basic ones such as support vector machines (SVM), up to more sophisticated ones like deep learning and convolutional networks, and nowadays LLMs and transformer models. You have a great variety of machine learning models. So it may not be easy to figure out which model best fits a certain use case, and I forecast that there will be tools that automate that task for you. One hyperscaler already came up with such a tool, called AutoML: all you need to do is provide the training data. That's it.
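The AutoML workflow just described, give the tool training data and get back a fitted model, boils down to an automated search over candidate models scored on held-out data. Here is a toy, standard-library-only sketch of that idea; the two candidate models and the selection logic are simplified illustrations, not how any real AutoML product works.

```python
# Hedged sketch of the idea behind AutoML-style tools: given only training
# data, fit several candidate models and keep the one with the lowest
# validation error. Candidates here are toy 1-D predictors.

def fit_mean(xs, ys):
    """Baseline model: always predict the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b = my - a * mx
    return lambda x: a * x + b

def auto_select(train, val):
    """Return (name, model) of the candidate with lowest validation MSE."""
    xs, ys = zip(*train)
    vxs, vys = zip(*val)
    best = None
    for name, fit in (("mean", fit_mean), ("linear", fit_linear)):
        model = fit(xs, ys)
        mse = sum((model(x) - y) ** 2 for x, y in zip(vxs, vys)) / len(vxs)
        if best is None or mse < best[2]:
            best = (name, model, mse)
    return best[0], best[1]

train = [(0, 1), (1, 3), (2, 5), (3, 7)]  # data following y = 2x + 1
val = [(4, 9), (5, 11)]
name, model = auto_select(train, val)
print(name)  # prints "linear": the line fits the held-out points exactly
```

Real AutoML systems search far larger model and hyperparameter spaces, but the select-by-validation-score loop is the same shape.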
And it automatically gives you a model based on that training data. The same thing will eventually happen for more advanced machine learning as well. I cannot say I am certain, but because there is growing demand for this task of automatically selecting a model from training data, such tools will definitely come to market, hopefully as open source, so that you can make your own modifications and customizations.

Okay, maybe one or two final questions. One relates to the implications for the size and number of antennas needed for 6G with AI, especially given spectrum sharing.

When it comes to the next generation, in this case 6G, it always boils down to the spectrum. Depending on which spectrum band you are able to use for 6G, all the other parameters, dimensions, scalability, and efficiency will be determined accordingly. Today there is a lot of speculation and discussion around centimeter wave, which might be a suitable spectrum for 6G, but I do not want to speculate, because those discussions have only just begun, so we will have to wait and see where the 6G spectrum discussion goes.

Okay, final question: is the RIC capable of playing an active role in brownfield network deployments, or is it only for O-RAN greenfield networks?

That is also a good question. Yes, initially we aimed for greenfield scenarios, but then, why not try to make the O-RAN RIC interoperable with the traditional RAN? It could be done via the EMS with certain gateway functionality: between the traditional EMS and the O-RAN RIC, we add a gateway. Still, this is just an idea.
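As a purely hypothetical illustration of that gateway idea: a thin adapter could map legacy EMS alarm records into whatever event shape the RIC side consumes. Every field name and the event shape below are invented; in practice O-RAN defines its own management and control interfaces (such as O1 and E2), and a real gateway would target those.

```python
# Hedged sketch of the EMS-to-RIC gateway idea: a thin adapter that maps
# a legacy EMS alarm record into a RIC-style event record. All field
# names and values are invented for illustration only.

def ems_alarm_to_ric_event(alarm: dict) -> dict:
    """Translate a legacy EMS alarm dict into a RIC-style event dict."""
    severity_map = {"critical": 1, "major": 2, "minor": 3}
    return {
        "source": "legacy-ems",
        "cell_id": alarm["managed_object"],
        # Unknown severities fall back to the lowest level (4).
        "severity": severity_map.get(alarm["severity"], 4),
        "description": alarm["text"],
    }

alarm = {"managed_object": "cell-001", "severity": "major",
         "text": "High PRB utilization"}
print(ems_alarm_to_ric_event(alarm))
```

The hard parts in a real gateway, state synchronization, conflict handling between SON and RIC actions, and timing, are exactly the "details" mentioned here.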
The devil is in the details. But there is interest; I recognize this interest from brownfield operators. They want to adopt the RIC not only for new O-RAN deployments, but also for their existing RAN infrastructure. And also, as I said in the beginning, there is a previous-generation solution called SON. So why not let a traditional SON talk directly to the RIC? That is also a possible approach.

Fantastic. Dr. Choi, I would like to thank you very much for the excellent presentation and the excellent answers to the questions. And I turn it back over to Alessia.

Thank you so much, Bilel, for moderating this session. And thank you, Dr. Choi, for your very interesting presentation. For the last minutes of this webinar, we will now move to the Wisdom Corner: live life lessons. Dr. Choi has kindly agreed to guide students and young scholars in the field of current ICT research, and he will also share some impactful life lessons with us. So I will start with my first question. Dr. Choi, which hard-earned life lessons would you like to share with us today?

Okay. Every project begins with customer expectations, and I was usually confident about the success of the projects I led. I tended to believe that my project results would meet customers' needs and make a significant impact on the market. However, here is my lesson learned: in many cases, the reality was very different from my expectations. It was as if I had thrown a stone into the sea expecting a big splash, but it caused only a little ripple. The market response is truly harsh, and in many cases the customer interest in our product was very temporary. This experience taught me a valuable lesson: I realized that the market response is more uncertain than I thought.
So that is my hard-earned lesson. The second lesson I learned throughout my career is that you have to keep communicating with customers and potential customers even after you launch your products and services. It is only after launch that you really begin to understand the real demands and needs of your customers, so that kind of post-launch feedback is absolutely crucial. And lastly, technology alone is not enough. You need to understand the whole customer journey, and focus on addressing real customer problems.

Excellent, very useful. Thank you for sharing these lessons with us. One more question: which fields and topics would you recommend students study today? Perhaps you could share some emerging technologies or trends in the ICT field that you find particularly promising for future research, because we believe your advice could guide students, and also the young researchers who are just starting their careers in ICT.

Yeah. The good news is that there are many emerging technologies and trends in the ICT domain. But if I had to pick, I would strongly recommend the interdisciplinary areas, such as the convergence technologies related to digital transformation. Here you need component technologies like AI and machine learning, cloud, big data, wireless communication, Industry 4.0, robots, and drones. You do not need to master all of them, but you do have to understand what those technologies are capable of. Then you focus on a specific business case, not just a use case, in the area of digital convergence or digital transformation. That is my recommendation.

Thank you. I am sure everybody took note of that.
My third question: you spoke about ChatGPT in your presentation, so we are wondering how, in your opinion, ChatGPT will impact the future of research, and everyone's life in general.

Okay, at a general level, its usefulness is twofold. One: it can give you good-enough insight into almost any new domain. Yes, its output is based on its training data, on existing data, but somehow ChatGPT is smart enough to give you good insight even on new topics, which surprised me. Of course, the result could be fake, or a hallucination, but it is still quite inspiring. That is one kind of usefulness. And even if the answer is not perfectly accurate, a good-enough answer is acceptable; otherwise you would be running around, spending or wasting so much precious time searching for all those pieces of information and trying to put them together. So you can save a lot of time. Once again, ChatGPT's greatest usefulness is the productivity enhancement it gives you. Second, it can be a very good sparring partner throughout your journey. For daily life, it is like an AI agent: it can continuously give you timely information and inspire you, and in many cases it is good to have something real-time and on the spot. It works as a daily-life AI agent because it uses natural language, and a multimodal GPT can also use non-textual communication such as pictures, videos, or even other sensor-based information like location. So those are the main things.

May I ask how you use it yourself? Can you give us an example? For instance, have you used it today?

Yeah, I am a paid subscriber, and I use it on a daily basis.
Mostly I use it to get insight and more detailed information about topics I am not yet so familiar with.

All right, thanks for sharing that. I have a last question from my side: how do you stay up to date with the latest advancements and research in the ICT field, and are there any specific resources, journals, books, or communities that you would recommend young researchers engage with?

Yeah. Personally, I am a big fan of ChatGPT, YouTube, and Wikipedia. And the important thing is to have a problem-solving mindset. Always think about what good customer problems you can tackle, and then continuously think about how to solve them. If you have such a problem-solving mindset and stay curious, you will always have the desire to search for all different types of knowledge, information, insights, best practices, and so on. There are so many information sources, but the most important thing is to have that desire to explore such information and knowledge.

Thanks. Thank you very much. I would like to invite Onoe-san, my director Dr. Bilel Jamoussi, and Professor Ian Akyildiz to join me for the closing, and please feel free to take the floor before we close.

Thank you also, Alex. Very nice presentation.

Thank you. My pleasure.

Yeah, Bilel, Onoe-san, if you would like to say a few words, feel free. I think Onoe-san is on mute. Maybe, Onoe-san, you can unmute.

I was muted. Thank you very much, Alex. You know, you are my mentor; you were my mentor a long time ago. Very great.

Nice to hear that. Bilel?

Yes, thank you very much, Dr. Choi, very good insights also for the young scholars. And I am very pleased to also see Professor Ian on the screen today.
Good to see you. So you are here. Excellent. Thank you very much, everybody, and thank you very much to our speaker, Dr. Choi. We look forward to seeing you all online again next week, next Tuesday at 4 p.m., for our third webinar, with Dr. Alex Sinclair, CTO of GSMA, from the UK. Thank you very much, everybody. I will see you online next week. Thank you. Thank you again. Bye bye.