So, my name is Yann Cardon. I've been a software architect at Orange for around 20 years now, and I mainly do internal consulting. I operate at group level, helping projects use cloud-native technologies, DevOps, and so on. The objective of this talk is to tell you what's happening inside the telco: the business models are evolving, we have new challenges to deal with, and I want to show you how exciting technologies can help break those challenges. So, let's go. We will speak about the edge, obviously, and a little about opening the ecosystems. First, you might know what Orange is. Orange is a French telco operator. It also operates worldwide, mainly in Europe and in Africa, and we have different types of activity. So, the edge can mean different things for a telco operator. Being an international carrier, we have big pipes under the oceans. We have access points where we offer services such as CDN, IP transit, and so on. That is one kind of activity at the edge, and we have a lot of points of presence across different countries. Orange also operates a mobile telephony infrastructure, which means we have a lot of antennas and all the infrastructure to support those telecommunications. Right now, Orange is moving to a virtualized infrastructure with virtualized network functions: really a tree structure with regional clouds, edge clouds, and then the far edge, which in this case would be the antennas, where there is some infrastructure as well. Orange also has customers in different countries, and for those customers we offer broadband. That means there is equipment right inside the customer's home, and that would be the user edge: equipment such as set-top boxes, broadband routers, and so on.
And we can also consider as edge whatever software runs inside the smartphone, or even inside our clients' browsers when they run something belonging to an Orange application. All of these are edge, in fact. And as we can see, they are very different in nature: very different types of equipment in terms of computing power and in terms of manageability. The further you go from the central point, the private clouds or the hyperscaler clouds, the more manageability becomes a problem: it costs more to operate at the edge. Then we have a second difficulty, which is that this is not only a distributed system, it is a geo-distributed system. We have to take care of latency, and we may have different behavior depending on the region where we are deploying. I will give you an example coming from what we call the core commerce. This is the information system that helps sell products and services. To sell them, we also want to provision them and activate them, and of course, whenever the customer uses those services, we want the system to be able to rate that usage. So this is the example I will give you. The schematics are quite easy: you've got a user who is using, let's say, a telephony service. It goes through the core network, which is a black box here, and the core network produces a call detail record (CDR). This call detail record gets rated inside the core commerce. This works, but it's slow, in fact, and it cannot cope with all the traffic; usually those systems do the rating in batch mode and not on the fly. And there is a second problem: you cannot act on the service usage.
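The rating step described above can be sketched as a small function: a call detail record comes out of the core network, and the rating function turns it into a charge. This is an illustrative sketch only; the record fields, the per-minute tariff, and the batch helper are all assumptions, not Orange's actual core commerce:

```rust
// Hypothetical model of the batch rating flow described in the talk.
// Field names and tariff logic are invented for illustration.
#[derive(Debug, Clone)]
pub struct CallDetailRecord {
    pub subscriber_id: String,
    pub duration_secs: u64,
}

/// Rate one CDR: price per started minute, in cents.
pub fn rate_cdr(cdr: &CallDetailRecord, cents_per_minute: u64) -> u64 {
    let started_minutes = (cdr.duration_secs + 59) / 60; // round up
    started_minutes * cents_per_minute
}

/// Batch-mode rating, as the talk describes: rate a whole set of
/// records at once rather than on the fly.
pub fn rate_batch(cdrs: &[CallDetailRecord], cents_per_minute: u64) -> u64 {
    cdrs.iter().map(|c| rate_cdr(c, cents_per_minute)).sum()
}
```

Note how nothing in this flow can feed back into the network: the CDR is rated after the fact, which is exactly the "cannot act on the usage" limitation.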
You can only take note of what's happening; you cannot cut the usage on the fly if, for some reason, you want to avoid bill shock for your customers because they are roaming, et cetera. So this is not at the edge, obviously, and it has been usable only for what we call postpaid. There is a first evolution where we move those rating functions directly inside the core network, closer to the user. Here we have the same rating engine, the same rating function, somewhere, but here comes the trouble: it does exactly the same thing, the difference being that we are managing a balance. This balance could be a certain amount of data the user has bought, or minutes of conversation, or money, whatever, and we update this balance on the fly. Whenever the balance reaches zero, the customer has to refill it. Here we see that we can act: we have a reactive loop, in fact. This is the first reason we are doing edge activities: we want to close the loop, to have a short reaction time between the input signal, the business logic, and the action we take, in this case, for instance, cutting the service. But here comes new trouble: we have to take care of state distribution, because we will have to sync the customer balance at the edge with the customer balance on the core commerce side, since our customers will want to know how much they still have on their balance, and maybe they will want to top up that balance to recharge it. We will have more problems still, because we will also have to replicate some of the customer inventory, for instance.
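The reactive loop described here, decrementing the balance on the fly and cutting the service when it hits zero, can be modeled in a few lines. This is a minimal sketch with hypothetical types; real online charging systems are far richer:

```rust
// Illustrative model of online charging at the edge: each usage event
// is charged against a prepaid balance, and the verdict closes the loop.
#[derive(Debug, PartialEq)]
pub enum Verdict {
    Allow,
    Cut,
}

pub struct Balance {
    pub remaining_units: u64,
}

impl Balance {
    /// Apply a usage event and decide whether the service may continue.
    pub fn charge(&mut self, units: u64) -> Verdict {
        if self.remaining_units >= units {
            self.remaining_units -= units;
            if self.remaining_units == 0 { Verdict::Cut } else { Verdict::Allow }
        } else {
            // Not enough balance: zero it out and cut the service on the fly.
            self.remaining_units = 0;
            Verdict::Cut
        }
    }

    /// Top-up, as when the customer refills from the core commerce side.
    pub fn top_up(&mut self, units: u64) {
        self.remaining_units += units;
    }
}
```

The state-distribution problem the talk raises is exactly about keeping `remaining_units` consistent between the edge copy and the central core commerce copy.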
And more difficult, we will have to transcode the commercial offer definition into a commercial offer definition at the edge, inside the core network, because the software running centrally is not the same software implementing the rating engine function at the edge. So there is trouble here. To sum up, there are different reasons we would like to go to the edge. The first one is latency: as I said, we get a shorter reaction loop between the sensor, applying the logic, and generating effects. We might also want to customize what happens at the edge: maybe the behavior is not exactly the same everywhere, we have some local context, so the business logic we want to project at the edge may not be the same everywhere. The last reason is security. Here we are speaking about doing some rating, but if we were in the automotive industry and someone could hack something, could inject some bad behavior, the consequences could be far more serious. Having your business logic as close as possible to the input and the effector is something that helps with security, too. So, let's sum up the different challenges we have to deal with when operating at the edge. Three of them come from our use case: latency; portability, because we want the same software to run centrally and at the edge; and distributed state. Then we have other challenges not coming directly from our use case. We want to operate at the edge, meaning deploy and operate, and the topology will evolve, so it would be nice if our solution could adapt to different topologies. We have to deal with trust, and with fault tolerance (network tolerance, to deal with network partitions, for instance), and we might be in use cases that are multi-tenant.
And of course, we want to be as efficient as possible, which means less energy, which means better for the planet. I will go through this rapidly; if you were at the previous talk, you globally know what WebAssembly is. Who knows about WebAssembly here? Almost everyone. I will just retain some of the characteristics that are really important, at least for our use case. The first is running inside a sandboxed environment. The second, coming from WASI and the component model, is that we can do runtime componentization: we can aggregate components at runtime. Those two characteristics are really useful if we want to deal with multi-tenancy. I don't have much time, so I'll let you read the comparison with containers. Then I will introduce a second technology, wasmCloud, which can be seen as an orchestrator for Wasm payloads. Who knows about wasmCloud? Okay, I'll spend a little more time on this. What you do with wasmCloud is this: you have different servers, in gray here, and you just make sure they can connect to each other, maybe directly over the Internet. Then you just throw an instance of an executable binary onto those servers, and those binaries are really the enabler for wasmCloud. Inside those binaries you will find the WebAssembly runtime, and you will also find a NATS client that lets the nodes communicate with each other. Once you've done that, you get a flat topology with ambient connectivity between all these nodes, through an abstraction called a lattice, which is based on NATS. Everything that belongs to a lattice can communicate freely. You can enforce access control on the communication, but by default it's open and the nodes can communicate freely.
So, when you think about how difficult it is to do multi-clustering with Kubernetes, this is really helpful. What you also get is a single control plane for the whole of your lattice, so you can define precisely where you will launch, or throw, your payloads. And there is one neat abstraction: wasmCloud has a really pure functional approach, in the sense that you've got two different abstractions. Actors, in blue here, are just functions, in fact. They cannot do anything on their own; they just react to inputs. And you deal with the outside world through capabilities. As in the previous talks, those capabilities can be a random generator, a database, an HTTP endpoint, et cetera. So this is the base. For the use case and the implementation I will show you, we'll use both Wasm, for sure, and wasmCloud, to handle the deployment on the different edge servers. Okay. Before going back to our distributed rating system use case, I just want to introduce a breaking change in the telco industry. It comes from 5G. It's not just the same technology plus one because it's better; there are really breaking changes, as I said. 5G is really about collaboration between different actors, and it's about having composite services. Previously, telcos used to own most of the value chain of a communication service: they were using their own services on their own infrastructure, et cetera. That is not the case anymore. If I continue down the history, over-IP communication services came along, and finally we could say that a telco operator was relegated to the role of connectivity provider, not communication provider: just connectivity. And now we have ambient connectivity, et cetera.
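The actor-and-capability split can be illustrated with plain Rust: the actor is a pure function of its input and the capabilities it is handed, and a capability is just an interface that different providers can implement. This only models the idea; it is not the wasmCloud SDK API, and the trait and names are invented:

```rust
// A capability contract, modeled as a trait. In wasmCloud this would be
// a capability provider (key-value store, HTTP server, etc.).
pub trait KeyValue {
    fn get(&self, key: &str) -> Option<String>;
}

/// An "actor": no state of its own, no I/O of its own. Just a function
/// of its input and of the capabilities it is handed.
pub fn handle_request(kv: &dyn KeyValue, user: &str) -> String {
    match kv.get(user) {
        Some(balance) => format!("balance for {user}: {balance}"),
        None => format!("unknown user {user}"),
    }
}

/// A trivial in-memory capability provider, useful for testing the
/// actor logic without any real backing service.
pub struct InMemoryKv(pub Vec<(String, String)>);

impl KeyValue for InMemoryKv {
    fn get(&self, key: &str) -> Option<String> {
        self.0.iter().find(|(k, _)| k == key).map(|(_, v)| v.clone())
    }
}
```

Because the actor never touches the outside world directly, the same logic can be bound to different providers at runtime, which is what makes the placement flexibility discussed later possible.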
What 5G brings, and this is crucial, is two different aspects. The first one is QoS: you can choose the quality of your connectivity. That means if you have high requirements, you can get them, though it will use a lot more energy and resources; and on the contrary, you can have low requirements. The second thing is ownership: the GSMA is introducing new roles when it comes to 5G communication. All of that to say that we are in a world that is opening up, where you cannot do your business alone anymore, and that is true for the telco companies too. So I will go directly to our use case. We have a composite service. If we look at the technical implementation of the service, some parts of it are services provided by other actors. And if we think about the relations between actors, there are contracts existing between all of them. Let's take the example of the green actor, who is providing some service to the orange one: on one side there is a commercial offer with terms and conditions. Those are static; they are really what's inside the contract. And then you've got a stateful part, the customer contract itself, with state: in our use case that might be the customer balance, or some configuration, or user preferences, et cetera. So maybe you see me coming: one part is static, the other part deals with state. What if we try to implement that idea of having the contract as code? We can do it. With wasmCloud, given three different providers (orange, green, blue), we need something common, a common lattice, that will just ensure that the contracts of the different actors are called in sequence.
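The "contract as code" idea, calling each actor's contract in sequence, can be sketched as a chain of rating functions: each contract applies its own terms and conditions to the usage, and the common lattice sums what each actor charges. The three contracts below are invented examples, not real commercial terms:

```rust
// A contract rates a usage amount and returns a price in cents,
// applying its own (hypothetical) terms and conditions.
pub type Contract = fn(usage_units: u64) -> u64;

/// Walk the chain of contracts, as the common lattice would, and
/// accumulate what each actor charges for this particular usage.
pub fn rate_through_chain(contracts: &[Contract], usage_units: u64) -> u64 {
    contracts.iter().map(|c| c(usage_units)).sum()
}

// Example contracts for three providers (all terms are made up).
pub fn green_contract(u: u64) -> u64 { u * 2 }        // flat 2 cents/unit
pub fn blue_contract(u: u64) -> u64 { 50 + u }        // fixed fee + 1 cent/unit
pub fn orange_contract(u: u64) -> u64 {
    if u > 100 { u } else { 0 }                       // free tier up to 100 units
}
```

In the real system each contract would be a Wasm component deployed by its owning actor, which is what makes the ecosystem pluggable: a new actor joins by contributing a new contract, not by modifying the others.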
So, the main idea is: whenever you want to rate the usage of a service, you can just call this common lattice to know the rating for this particular usage. The system then goes down the line of the different actors and their contracts, and each of these commercial contracts activates depending on its own terms and conditions. So, this is one example. If you want to have a look, we implemented it in this GitHub repo, and it works. I will skip that part. This part is about how to deal with distributed state. It's complicated; just know that there are different options with wasmCloud. You can, for instance, address one capability provider that holds all the state centrally, but then you lose the benefit of being at the edge: imagine your payloads are at the edge and every request has to travel back to the central state. You could also rely on multi-region-enabled data store providers, as found at hyperscalers, or you can build it yourself. We decided to build it with event sourcing, which means we operate the logic at the application level. For that, we use Concordance, an event store implementation on top of wasmCloud. So, back to our distributed rating system. What can we do with such an ecosystem? Being able to model the relationships between actors as code can unlock a lot of things. We can build an ecosystem that is truly open, where new actors can come in and build upon existing services. In the cloud industries we do that all the time, but it's something quite new for telcos, really. Here we applied this pluggable behavior pattern to rating functions, but we could also apply it to more complex functions such as service provisioning and service activation.
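The event-sourcing choice for distributed state can be sketched at the application level: rather than syncing a balance value between edge and core commerce, each side appends events, and the current balance is a fold over the log. This is a minimal model of the idea; Concordance's actual API differs:

```rust
// Illustrative event-sourced balance: the state is never stored
// directly, it is derived by replaying the event log.
#[derive(Debug, Clone)]
pub enum BalanceEvent {
    ToppedUp(u64), // customer recharged (core commerce side)
    Charged(u64),  // usage rated at the edge
}

/// Rebuild the current balance by replaying the event log from scratch.
/// Saturating subtraction keeps the balance at zero when usage exceeds it.
pub fn replay(events: &[BalanceEvent]) -> u64 {
    events.iter().fold(0u64, |bal, ev| match ev {
        BalanceEvent::ToppedUp(u) => bal + u,
        BalanceEvent::Charged(u) => bal.saturating_sub(*u),
    })
}
```

Because events are append-only facts, logs produced at the edge and centrally can be merged and replayed, which is what makes this approach attractive for the edge-to-core sync problem described earlier.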
It's more complex because the different actors have different roles: the usage of one is the provisioning of another, et cetera. So this same principle can also be used for service provisioning. To wrap up, what about our edge challenges? In fact, we have several game-changing technologies that unlock disruptive architectural patterns and that enable the business. For our edge distribution, we said efficiency and real time. WebAssembly brings code portability, trusted code, and resource efficiency. wasmCloud brings distribution, operation at the edge, multi-tenancy, et cetera. And with the functional approach, we gain deployment flexibility: because it's functional, you can place your function wherever you want and it will give the same result. It will not be as efficient if you place it far away from the effector or from the input, but it will still work. This is really powerful. Contract as code is really about pluggable behavior, and as we discussed, we have all the tools to do that with the Wasm component model, plus event sourcing for distributed state. So, final words. Orange, as an operator, and really all the telcos, are deeply involved in open, common projects to normalize and open their infrastructures and platforms, and to provide common assets. I could quote several. If you're interested in Kubernetes, you've got the Sylva initiative, which is about the telco cloud. CAMARA is about releasing APIs directly to developers, APIs they can use to consume telco services inside their applications. prpl is about having the same software on the set-top boxes and on the gateways, fiber gateways, et cetera. So I would say that open platforms, open standards, and open source are no longer just a moral preference; they are really a need if we want to keep the business resilient and sustainable.
And if you want to know more, we will also be present, and presenting different things, at TM Forum in Copenhagen: Disco, which is an open-source implementation of a core commerce using the technologies I've shown, and also a discussion of how WebAssembly and Kubernetes can be bound together to build applications. Thank you.

Again, thank you so much for this exciting talk. We maybe have time for one question if anyone has one. Let's see if it's a good one. Make it hard for him, not for me.

A quick comment and then a question. The comment is: I'm from Ericsson, so I think I read more into your slides than probably most of the people in the room. When you're talking about building this new kind of model, it is really about app providers having the ability to create these slices so that they can get a particular quality of service, or things from the network that nobody's been able to get before. Examples are 8K video coming in for telemedicine, where you're trying to do something remotely, or augmented reality as we're getting more into that.

For a surgeon. Exactly.

So, just to give more applicability to the rest of the room, my question is: as you look at Wasm and WASI today, what's missing? What's the next thing? What's the most important thing for this platform to do for you and for the telco industry?

Okay, I would say that right now, when we are speaking about a virtualized network function (NFV), we are bound to Kubernetes because, well, it's the logical choice. So we need a little time, I think; that would be the first quick answer. But we can see that the ecosystem between wasmCloud and Kubernetes is really evolving. We could try to implement those NFVs with eBPF, but when it comes to higher-level functions that will not fit inside eBPF, we can build them with wasmCloud.
So, I think we just need time and maturity, and we know that right now it's a lot of work to transform the network appliances into being cloud native, to externalize them, et cetera. I see that you agree.

Maybe one more, and then we've got to get going.

Thank you. A question about security and trust. When you are pushing the function and the computing to the edge, are you taking it as a mandatory condition that you are working in a trusted zone?

Yeah, okay, this is a good question; I didn't have the time to go through it. What wasmCloud ensures is that you have secure communications between your wasmCloud hosts, as long as you can trust those host binaries. When it comes to trust, there is a whole chain, starting from the hardware: for this to function, we have to start from the hardware, with TPM, et cetera. It's another field, and it will hopefully also be demonstrated in Copenhagen, but we are working on an attestation framework to show that the binary has not been altered. It starts from there, and then you have the whole chain. Thank you for the question.

Great questions. Please join me in thanking Yann for his wonderful talk. Yann, thank you so much.