Thank you for joining us today. I am very excited and pleased to welcome two guests to this presentation and conversation. I'd like to turn to both Raul and Carlo and have you introduce yourselves, and then we'll go ahead and do a brief presentation. Sure. Hi everyone. My name is Carlo Pizeno. I'm a Core Network Planning Director for América Móvil. I have been with the company for 15 years now, and my main responsibilities are related to the adoption of new technologies for the network. I have been leading the NFV and SDN processes recently for the group, and now we are very focused on bringing in 5G and all the cloud-native principles around it. So it's a pleasure to be here. Thanks for the invitation. Fantastic. And Raul. Hello. My name is Raul Reyes. I am in charge of IT infrastructure and cloud services optimization. I have been at América Móvil for five years now, and my main focus has been to enable and empower the different distributed teams in Latin America, so we can always be evolving and always getting more innovation into the operation. Fantastic. And I want to thank you both. The partnership we've had with América Móvil has been nothing short of spectacular. We've been able to do some very exciting things, and over the years it has culminated in what we're about to talk about today. So I'm very excited to present this. Let me go ahead and share my screen. Yes, let's see. And there we go. What I'd like to talk about today is the Enterprise Neurosystem framework, something we've all been talking about for quite some time. To set the stage: I was in a conversation with Raul at a beautiful restaurant in Mexico City called Loma Linda. I've told this story before, so I'm sorry to repeat it, but it was funny. He turned to me over lunch and said, what is Red Hat doing with mobile networks and artificial intelligence?
And at the time, I said absolutely nothing, because it was still very early days. We were still in that assessment mode, trying to understand what the impact could be. And given the rigorous uptime requirements of mobile networks, we were just dipping our feet in the water a little bit. Raul really pushed us right into the water with that comment, because I came back home, reached out to a number of folks, including Chris Wright, our CTO, and some other people, and we started a small focus group to look at what this could eventually become as a community directive. So we've been working on this together for a long time, and I'm very excited to discuss it today. So here we go. One of the core things we've thought about over the years is that human and IT architectures share a number of strong similarities. We noticed this more and more, especially with the advent of artificial intelligence, which really is the completion of this parallel model when you think about it. You've got all these mobile devices; they could be considered almost nerve endings. They have the capability of hearing, sound, and visual identification. And data centers really mirror the brain's functions in a lot of ways, like the cerebellum, with memory and processing CPUs. So what's interesting is that there really is a parallel model: we as a species have created something that's very similar in many respects. In the human body, the more core operations are fully autonomous, like the heartbeat, chemical levels, and the way we assimilate energy. And it still partitions conscious thought processes apart from those. So it's really almost like two separate sets of functions from that perspective.
But the higher-order, core decisions are made by the conscious mind, which is really firewalled away and coexists with these other systems in a real sense of harmony, developed and honed by evolution over many, many years, to say the least. We think corporations are similar in many ways. There are many different functions in a corporation, and it can span many different countries. So we thought over time that it would be interesting to tie together all these different data points and all these different functions as a single instance, and to make it all part of a single framework. And that's where we have ended up today: a new AI and machine learning telco community called the Enterprise Neurosystem. This is about AI infrastructure connected to every single business function across the enterprise. We're definitely starting with telco, but it will be applicable to all verticals, because every corporation in the Fortune 500 is facing the same challenge. Founding partners include América Móvil, but also Verizon Media, Equinix, Ericsson Cove, Lambda Perceptor Labs, Ericsson Young, and Seagate, and Watson is also involved. And really, why is it needed? AI models are being built and deployed both in a do-it-yourself fashion and through different vendors, but without a comprehensive integration framework or any kind of large-scale federation at the moment. There are lots of small point solutions and AI models scattered around the enterprise and connected to data lakes, et cetera. But taking all those elements and all that information and cross-correlating it for larger-scale and deeper insight: that's the need we saw, and it's why we're starting this community. It basically unifies and optimizes an entire multinational corporation at that scale with a single AI and ML framework.
It enables, like I said before, the overarching cross-correlation of all these different data points. And what's interesting is that over time, edge and core AI instances all become part of one system, and it provides any level of management, whether mid-tier management or the C-suite, with a real-time view of all operations. We've thought about a lot of creative applications for that, like a hologram advisor or a robotic advisor down the road, though of course it would just be on screen for the time being. But we're looking to the future to do some really fun, innovative things. Conceptually, if you take a look, we've got all the core open-source components like Linux, Ceph storage, Kubernetes, et cetera. Then we have the Open Data Hub framework, which allows you to use open-source AI platform tooling to create models, get them into production, and maintain them. That then leads into the AI neural system. You would connect the neural system to IT, and it would propagate from there and connect to all these different areas: finance, network operations, facilities management, legal and regulatory frameworks, human resources, and so on down the list. All these areas would be cross-connected and integrated together to feed all this data back into the system. And here's a low-level architecture example, again just an example: you would have AI and ML instances in all these different areas of operation, network operations, IT, and then the NOC itself.
What would then happen is that, quite literally, they would be connected to another, smaller, more streamlined group of AI and ML instances. They could be GANs, they could be all sorts of different AI frameworks, and they would take the lower-level findings and begin to build a tree of logic, or a tree of perception, that takes all that information, filters it, and draws out the kinds of correlations that can lead to deeper insight. So over time, you would have the same framework in every different business instance, and it would feed up into a second, third, or fourth tier of different GANs or other AI frameworks, including transformer frameworks, because we'll be borrowing from a lot of different areas to create this, and ultimately into the recommendation engine, which would convey the results, observations, and insights to management and the C-suite. This would involve a federated intelligence model. You'd be taking all the different AI models, cross-correlating all their data, and creating a reporting intelligence that would then turn to management, as I said before, and relay all this information. We would start with perhaps a dashboard on the left, just as an example, then on screen maybe some form of human representation, and eventually a hologram or some other form of intelligence that would convey this to its colleagues on the human side. And what's interesting too is that we have found, and MIT discovered this as well, that the combination of human and machine is actually 3x more powerful than either one alone. Machines will have a certain error rate, and humans will have a certain error rate, but together they can reduce the error rate to less than one percent in many of the use cases we've examined.
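The tiered structure described here, leaf AI/ML instances per business function, mid-tier aggregators that cross-correlate their findings, and a recommendation engine at the root, can be sketched in a few lines. This is purely a hypothetical illustration: all class and field names are invented for the example, and real instances would be trained models emitting findings rather than hard-coded data.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Finding:
    source: str      # business function that produced it, e.g. "network-ops"
    signal: str      # short label for the observation
    severity: float  # 0.0 (informational) .. 1.0 (critical)

@dataclass
class Aggregator:
    """One tier in the tree: children are sub-aggregators or leaf finding lists."""
    name: str
    children: List[Union["Aggregator", List[Finding]]] = field(default_factory=list)

    def collect(self) -> List[Finding]:
        # Gather findings from every child, depth-first.
        findings: List[Finding] = []
        for child in self.children:
            findings.extend(child.collect() if isinstance(child, Aggregator) else child)
        return findings

    def recommend(self, threshold: float = 0.7) -> List[Finding]:
        # "Cross-correlate": keep only high-severity findings, ranked worst-first.
        hot = [f for f in self.collect() if f.severity >= threshold]
        return sorted(hot, key=lambda f: f.severity, reverse=True)

# Leaf instances per business function feed mid-tier aggregators,
# which roll up into the root recommendation engine.
network = Aggregator("network-tier", [[Finding("network-ops", "link flap", 0.9)]])
business = Aggregator("business-tier", [[Finding("finance", "cost spike", 0.75),
                                         Finding("hr", "routine staffing note", 0.2)]])
engine = Aggregator("recommendation-engine", [network, business])

for f in engine.recommend():
    print(f"{f.source}: {f.signal} (severity {f.severity})")
```

The design choice mirrors the talk: each tier only sees its children's outputs, so the same filter-and-rank logic repeats at every level of the tree before anything reaches the human audience.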
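The error-rate claim can be illustrated with a back-of-the-envelope model. This sketch is not from the talk's materials; it assumes human and machine reviewers err independently and that the combined system fails only when both do, which is the most optimistic reading of the claim.

```python
# Hypothetical worked example (not from the talk): joint error rate of a
# human-plus-machine system, assuming independent failures and that the
# pair only fails when BOTH err.

def combined_error_rate(human_error: float, machine_error: float) -> float:
    """Joint error rate under the independence assumption."""
    return human_error * machine_error

# Example rates (illustrative numbers only): 10% human, 8% machine.
human, machine = 0.10, 0.08
joint = combined_error_rate(human, machine)
print(f"human alone:   {human:.1%}")    # 10.0%
print(f"machine alone: {machine:.1%}")  # 8.0%
print(f"combined:      {joint:.2%}")    # 0.80% -- below one percent
```

Under these assumptions, two fallible reviewers with roughly 10% individual error rates land below the one-percent mark together, which is the intuition behind the "more powerful together" point.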
So really, what we're seeing is a merging of the abilities of both sides of that coin into something greater and more powerful. In terms of work streams, we're looking at different areas. We'll have a series of open models that we'll offer. We'll work on an open data platform and a middleware solution to cross-connect all of this from an open-source perspective. We'll be looking at it through the lens of open AIOps, or AI operations. This really could be considered the marriage of business intelligence, the classic way of taking different data from around the enterprise and drawing meaning out of it, with AIOps and the autonomous operation of the enterprise itself, and how you can take all this together and understand it. That falls under the umbrella of the federated intelligence section, number four. The way we look at this, there are larger implications for global AI development, and this is where we've seen those tea leaves begin to gather in the middle. What we've noticed is that all these different elements need to be brought together, integrated, and correlated. There's really a lot of benefit for the enterprise, and it's all the obvious things, but through the widest possible frame of insight: being able to take in every single data point and understand what it all looks like. It leads to cost savings and streamlined operations, and it allows us to build a community-sourced solution based on real production experience from folks like Raul and Carlo, with a tailored list of objectives we can all adhere to. And the good news is that a lot of existing open-source offerings and frameworks can be applied today.
There will be a few things that need to be created, but in essence, all the groundwork has already been laid by open-source communities in terms of the tooling we can use. Ultimately, there are cross-vertical applications: financial services, oil and gas, and all these different industries can take this kind of framework and apply it to their own operations. So it's a very exciting time for us. We're just getting this off the ground, and we've already had meetings and got things moving. I'd like to now turn to Carlo and Raul and ask you a few questions along these lines. I think what's interesting is that América Móvil got involved in this so early on, and the fact that you've not only kickstarted us in this direction but are also really embracing the open-source methodology and way of doing things is wonderful. So maybe you can talk a little bit about the value of collaborating in the open with your peers like Verizon Media, Equinix, and others. I'd love to hear what convinced you to do so and to move in that direction. Yes, okay. Well, from a telco perspective, we started some of the transformation projects in América Móvil some years ago, adopting what I would call a semi-open approach, but I believe we reached a point where we discovered that we were not flexible enough. Now I believe that the open-source world has matured a lot, and we're convinced that with the industry trends around 5G becoming a reality, it's the right moment to show that adopting this logic and contributing back to the open-source communities is the right way to unlock innovation for future networks. I totally agree. And thanks for the introduction. As Carlo mentioned, we think that open-source projects are now the de facto option for solving big challenges.
Today we see more and more challenges coming our way, and it would be impossible for us as a single group to tackle all this constant change at the pace we are seeing today. So we are doing this because we think the future of open source is promising. Through the community, open source shapes technological evolution and creates an environment that leads to constant innovation. We think that if we do not do it this way, it will be impossible for us in the future. It's really exciting, and you guys have been wonderful partners in that regard. And it's been wonderful to see the industry support. But what about the technical value? What are the advantages of creating this kind of multinational AI instance to study your global operations in real time and to help you manage them? What do you find to be the value from that perspective? Well, operators like us usually face very complex maintenance processes. So one of our goals is around process optimization, with the ability to make autonomous decisions under dynamic conditions. In general, adopting an artificial intelligence and machine learning logic will give us the advantage of reducing operational costs and, at the same time, reducing failures in the network through this predictive logic. And as we have operations across most of the Latin American region, we will also have the advantage of learning how to apply this methodology in similar scenarios across all of our operations, with this multinational instance and common knowledge shared between all of the countries. Fantastic. Yeah. Sorry. We believe that having technology focused on predicting and managing the behavior of our operations will allow us to forecast more effectively. And hopefully we will plan the work assigned to our nodes before every error or failure occurs. So machine learning will help us learn faster as well.
We think that this kind of technology will help us develop better solutions as well, and leverage them to bring better solutions to our customers, so we can maintain stronger platforms. The business is always pushing to get more solutions, but we need not only MVPs; we also need reliability, delivered at the same speed as the business. Totally agreed. And I think that's one of the core values of doing something like this. What you're really leading into is something mentioned earlier: the combination of human and machine elements to create something more powerful from a cognition perspective. Do you feel the same way about that? Do you think it's going to be that kind of an outcome, in terms of your own opinions? Well, I think this is a process that requires maturity in general. So I expect that initially, human knowledge and intervention may be needed. But the systems will learn to recognize situations and correlate them with the solutions and data prepared by experts. As we feed these events into the solution, it will learn what to do and avoid risky situations for upcoming events. In the end, of course, we expect that the solution will have enough capability and intelligence to make decisions on its own. And Raul, your thoughts? Well, it's interesting how AI and human cognition are now collaborating in many ways, as you mentioned before. On one side, humans train and explain the machine learning models, and they also maintain them and create new ones. On the other hand, AI brings more data and better insights. So in a way, AI boosts our human potential. I think we can create opportunities for engaging with technology in a whole different way. Definitely so.
And to do something like this, do you see advantages to doing this kind of development in an open-source community manner, as opposed to a more proprietary or in-house approach? What would the benefits be, in your opinions? Yes, well, as mentioned earlier, we started out following proprietary approaches with some of these transformation activities within the telco environment. However, we have seen that these proprietary solutions don't fully deliver what has been promised to the industry. So the first thing we expect is to have technologies and processes that deliver on that promise. Then we expect cost reductions in our processes. And finally, we believe that contributing back to the open-source communities gives us the opportunity to enhance the solutions and make them better all the time. Yeah, totally. We think that a proprietary approach will never give us the openness and flexibility we need to build effective solutions. Open-source communities, from our perspective, create more competition, and when there is more competition, prices come down as well. The massive acceptance of a successful open-source project is very powerful. So we need to provide a neutral home for it, we need to protect it, and we need to develop on top of it without risking the openness and flexibility we are looking for. Fantastic. Thank you. And because this is a Red Hat event, I'd be remiss if I didn't mention that enterprise-grade container platforms could be very useful in that regard. How do you feel platforms like OpenShift and others can contribute effectively to this kind of environment? Well, I think that OpenShift is one of the most mature solutions for enabling a cloud-native environment, and we have very high expectations that it will provide all the flexibility necessary for 5G and other future network environments.
OpenShift also provides very good DevOps tools, with smart lifecycle management of containers through container orchestration, which gives us the advantage of accelerating development around new trends, for instance network slicing, and enabling solutions at the edge of the network. So, definitely, OpenShift is very valuable for us. For sure. Today, OpenShift allows us to take containers and put them in the right place. It allows us to manage them and shut them down if we see any problem. We are building microservices today and moving workloads across different clouds. But I think there is still so much to be done; we have not harnessed the full potential of these types of environments. Container platforms provide an easy, repeatable, and portable environment and deployment model on very diverse infrastructure. So I think we can contribute in a very important way, so we can deploy smarter, more connected, and automated platforms for the network and for other environments. Fantastic. Well, thank you for the endorsement. I really appreciate that. But more than that, thank you for your partnership and your leadership in this area. It's been really exciting to work with both of you and América Móvil on this initiative and with our partners. I'm just very grateful, and I want to thank you both for your time today. I think we'll leave it at that. But again, thank you very much. Thank you very much, Bill. Thank you very much.