All right, hi everyone, welcome to the OpenInfra Summit. We're going to talk about how companies use LOKI to run their business. I'm Maria Bradshaw from Red Hat, and I'm here with one of our customers. Excellent.

So first I would like to speak a little bit about something that I really like, which is all the customers that we have, and all the different use cases where the technologies that we build here and in other communities come together to solve real-life problems. The talk today will be heavily focused on telco, because I'm joined by Proximus, so that makes sense. But obviously OpenStack is not just for telcos.

At Red Hat, our open hybrid cloud vision extends from the data center to the cloud all the way to the edge, and at its heart is still contributing, collaborating, and being part of open source communities, and then distributing that as enterprise-ready software that can be used in many, many different places all over the world. Hundreds of companies rely on that to run their business, and we hope to continue to power a lot more innovation going forward.

As you can see, OpenStack is a critical part of it, at the physical layer, but it is part of a broad ecosystem of technologies that come together, in addition to an ecosystem of partners that are not pictured here. Our customers really drive the roadmap that we build; their use cases, their needs, and the next needs of their business drive the requirements that we bring to the communities and continue to drive forward on their behalf. Some customers are also active participants in the communities, and we love to see them. Other customers, through lack of resources, or maybe because they haven't been exposed to collaborating and contributing in open source communities, rely on Red Hat and other vendors to do that. We're very happy to have them, and to help bring the technology forward.

This is a little bit of a bragging slide; we don't do this often. But on the telco side, the 4G story has been won completely by OpenStack, and that's something we should be very proud of: 4G runs on OpenStack. The 5G core is also heavily powered by OpenStack. Those networks, the push on those networks, and the weight of the additional traffic that comes to them are what is going to continue to drive the growth and the scale that we have seen with our partners and our customers.

Of course, it's not just OpenStack. We use Kubernetes as well; that's why you see OpenShift represented there, because again, this is a Red Hat slide, but the same goes for other partners and customers. We have some leading operator transformations running multiple large-scale production deployments today, and the way they run them is not just OpenStack, not just Linux, not just Kubernetes: it is a combination of those. Now, the OpenStack Foundation calls it LOKI; I call it a backronym, because it doesn't always represent that same order of deployment. Rather, a mix of them coming together in different configurations is what makes it work.

Red Hat's commitment to OpenStack continues strong, and again, our commitment to the distribution that we ship as a product is there, but really that is driven by our full commitment to the community and the contributions that we make upstream. So I'm very happy to see some contributors here; congratulations to you. And I'm happy now to pass it on to our customer. So Alan, would you tell us some more about your story?
Thank you, Maria. So indeed, I'm going to share what we've been doing at Proximus in terms of OpenShift and OpenStack deployments: our experience, the different challenges, and the types of deployments we are running currently.

First, maybe, let me present our company. Proximus is the main provider of digital services and communication services in Belgium, and in fact also in the Netherlands and Luxembourg and some international markets. We provide the typical telecom services, so fixed network, mobile network, a TV platform, ICT services for corporates, but we are also a global voice carrier and one of the leaders in mobile data services in the world.

Myself, I'm leading the infrastructure solution architect team at Proximus, and previously I was leading the Telco Cloud team, which has been building the whole solution together with the different vendors. One important thing to know is that we've been a full-stack team, meaning that we were taking care of the full deployment: the storage part, the server side, OpenStack, and all the shared services on top of it.

So how did it start? A few years ago, Proximus launched a number of ambitious transformation programs, and there were two interesting ones for us. The first was about deploying what we call an IT private cloud. The goal there was to provide a platform where our developers' applications would be deployed much faster than what we used to have previously, and where we would embrace everything related to the Kubernetes and container way of deploying software. Then we had a second program, more related to the network transformation that was happening at that time, the old NFV story, where the goal was to deploy telco services on commodity servers, what is often referred to as a telco cloud, with the different telco providers.

Some figures. We are currently running 15 OpenStack clusters, each with an associated Ceph cluster. That's around 900 computes, three petabytes of Ceph storage, 50,000 virtual CPUs, and about 4,000 VMs. We typically use all the NFV tunings that are available and required for the different workloads: CPU pinning and isolation everywhere, PCI passthrough for some use cases, SR-IOV when we need some extra acceleration.

Next to OpenStack, we also have an important OpenShift deployment. We run 10 OpenShift clusters, always hosted on OpenStack and Ceph storage. There we have around 400 worker nodes, 10,000 pods, and 40,000 containers, all of this for 89 applications, which are decomposed into around 800 microservices.

We also had, in the program, the ambition to virtualize as much as possible, everything we could, meaning that we took the decision from the start to virtualize all the load balancers, the firewalls, the DNS, the network-tapping solution, and so on, all of this on top of OpenStack.

So how do the different clouds look? For the OpenShift cloud, each time we run a dedicated OpenShift cluster for IT services, where we host in fact four types of services: web services, mobile applications used by our customers, call-center applications, and some middleware services used completely within the Proximus ecosystem. The main challenge we had with this kind of deployment was all the effort required for application refactoring. For many of our teams it was their first experience with containers and Kubernetes, so it was about adopting a new way of working.

If you look at the stack we have, and you will find the same in the next slides, it's typically the same thing: we always have Cisco ACI at the bottom, then we use a number of commodity servers, mainly HPE or Dell, on top of which we have an OpenStack and Ceph deployment, then for some clouds you will find OpenShift, and then a number of shared services. On the other side, we also have a connection with a corporate cloud, where we run most of our more traditional workloads on VMware, and some network appliances.

Then you've got the telco cloud deployments, which have the same basis but are dedicated to network services. We have one type of deployment for voice services and another one for data and other types of services. The main challenge for this kind of deployment has been that, even if we deploy virtualized, it still looks very much the same as when these workloads were deployed on bare metal. There is a refactoring of these applications, but they come most of the time with plenty of NFV tuning requirements.

If you look, we have the same basis, and then the services we deploy are the typical telco services: the IMS core with all the different NFV voice application servers, some SBCs from Oracle, SIP firewalls, recently the 5G UDM, and then we are in the process of deploying some voice added services together with HPE. On top of it we have, as I explained previously, all these shared services that we virtualize as well: Palo Alto firewalls; the Octavia load balancers, which are in fact part of the OpenStack deployment; AVI load balancers; Infoblox DNS services; and then Ixia CloudLens, which we are using for tapping.

That covers a very specific use case. In the telco industry it's always very interesting to have tapping, to be able to do some KPI monitoring and so on. But of course, if you virtualize everything, you have some use cases where the different VNFs are running on the same compute, and then we cannot do the tapping the usual way we used to do on the network. So we need to do it in a virtualized environment; that's where we are using this Ixia CloudLens tapping.

Then we have a very similar deployment for data, TV, and public microservices. The main difference here is that you've also got some special storage that we had to fit into our deployment for a very specific use case: we had, I think, an S3 requirement where they could only use one index, and the only vendor who could provide this was Huawei.

When you see the number of deployments: we have the 5G core, which is not completely live, because there is still the combined core being deployed at the moment; but we also have email services, SD-WAN controllers, a machine-to-machine solution, and TV internet streaming. That last one is a very specific use case for some niche programs, not for the large ones, because for those we are using dedicated deployments on bare metal. And then, of course, for some microservices that we are delivering to our customers, we also use an OpenShift cluster on this OpenStack. It's typically for two types of workloads: we have what we call the TV back-end, so everything like VOD, recording, TV guides and so on, which is virtualized and on microservices; and we also have the Proximus website, portal, and web services that we run on OpenShift as well.

So what have been the challenges and opportunities, and the tuning we've been doing with our experience? The first main challenge was the team skill set needed to take responsibility for this type of cloud. At the start of the whole initiative, the responsibility was shared between different teams.
Traditionally, as we do for other projects, the network team is busy with the network side, the server team is only looking at the server side, and then you have some middleware team as well. But this made it very difficult to progress within the project timeline, because you need all this integration between the different parts. So we very quickly took the decision to create a multidisciplinary team, where the responsibility for the different parts of the stack was shared. We also took the opportunity to work in a DevOps approach, where every engineer was asked to go a little bit beyond their comfort zone. Typically, the network engineer would also need to know how OpenStack works, to be able to do some basic troubleshooting and also help with the integration. The same effort was asked of the server guys, who also needed to understand how the networking integration was working and help with other problems. So it was very interesting for everybody in the team, very intense, and a lot of learning for everybody involved.

The second important challenge is what you certainly heard in other presentations: the upgrade story. If it's a telco cloud and you need to upgrade, it very quickly becomes a challenge, because these applications are supposed to run 24/7. So one of the things we've been doing is splitting our clusters to align the SLAs as much as possible: on one side, as you saw, we have the voice services, which all have the same kind of SLA, and then we have other services which have a more relaxed SLA.

What we have also done is consider each of these major upgrades as an opportunity to do a site-resilience exercise, which is also a typical requirement for a telco provider. You need to prove to the regulator that you are able to do site isolation, that you are able to run on one site and provide the type of resilience that is expected from a telco provider. It's quite heavy, but we still have a commitment to do it yearly: each of the clouds is isolated for several days to be able to do any kind of upgrade. It can be an OpenStack upgrade, it can be a major network reorganization. What happens is that we have a few days to do the isolation, to safely degrade the service and make sure the customers are moved to the second cloud; then we do our activity; and then again a few days to bring the traffic back. Of course, it requires a strong partnership, because when we do this kind of intervention, we need all the vendors to be available and involved, to help us with the preparation, and also to be able to react very quickly if anything happens, because of course we cannot run on one site for a very long period.

Another main challenge is about aligning all of this. We have all these different parts of the stack, and we need to make sure that they are all compliant and that we don't get into trouble. So what we've been doing, and I think it's very common, is using an infrastructure-as-code approach. Everything is deployed as much as possible with orchestration, not only the OpenStack part but also the network part and the shared services on top of it, and some of the applications are also completely orchestrated, to make sure that we have a baseline that we can effectively reproduce on the different clusters, and that we don't have surprises or mismatches in the configuration. What we've also been doing is presenting the whole stack as a complete product.
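The idea of a reproducible baseline with no configuration mismatch can be sketched as a simple drift check. This is only an illustration: the component names and version numbers below are assumptions, not the actual Proximus stack definition.

```python
# Hypothetical sketch: pin every layer of the "stack as a product" to a
# versioned baseline and report any cluster that drifts from it.
# Component names and versions are illustrative placeholders.

BASELINE_V2 = {
    "openstack": "16.2",
    "ceph": "4.2",
    "openshift": "4.10",
    "network_fabric": "5.2",
}

def drift(baseline: dict, cluster: dict) -> dict:
    """Return components whose deployed version differs from the baseline,
    mapped to (expected, found) pairs."""
    return {
        name: (want, cluster.get(name))
        for name, want in baseline.items()
        if cluster.get(name) != want
    }

cluster_report = {
    "openstack": "16.2",
    "ceph": "4.1",          # out of line with the baseline
    "openshift": "4.10",
    "network_fabric": "5.2",
}
print(drift(BASELINE_V2, cluster_report))  # {'ceph': ('4.2', '4.1')}
```

The same check runs against every cluster before a configuration is promoted from staging to the labs and then to production.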
We have a version of the telco cloud It's version to where we align the different versions Then we'll move to later on a version 2.1 with the configuration that we will push then from the staging environment The labs and then to a production of the different clusters Something we we also face and where the collaboration with we've had that and other vendor was really key is Initially open shift and Kubernetes are not really forcing for career or workload So there were a number of challenge and some of them are still present for us on Specific workload where we need to make sure we have a flow segregation for example something that you you don't really need in it Workload, but for network we need to to make sure that the management network management flow are well segregated with The application flow and that some of the floor are very well organized in terms of security Something else was that we need is very high throughput Typically, we had a use case where for example, we needed to be able to To to to bring a workload we've hundreds of thousands of connection for a few seconds. So it was for Targeted advertisement where we are able for a football match for example to send a specific Advertisement for each of our customer based on this profile And it means that we only have a few second for which we need to send on open shift Hundreds of thousands of connections and then drill it down when it's when it's done and this was very interesting use case for for head up and for a vendor to to tune the configuration to make some of some additional upgrade of some component to make sure that we could achieve this This is objective Internally also it was a challenge for some of the open shift adoption it was very new for many of our teams and It was surprising they came from time to time with some very exotic functional requirement But also some very strange performance requirements. 
So it's a lot of discussion over time. We were also glad to have a vendor on our side, to be able to help us with the discussions with our teams. And with the whole OpenShift story, we faced a number of early bugs, where the collaboration with Red Hat was very interesting and very important: things like the integration of multiple interfaces with Multus, and some known limitations that we discovered together about the tagging and the quota for the OpenShift registry.

Well, I think we have plenty of time for questions. There are two mics, I think, if there is any question about the presentation.

Well, I want to ask a little bit about this integration of OpenShift and OpenStack. Can you tell us when the journey with OpenStack started at Proximus, how it has grown into the different data centers, and how you project your growth?

The OpenStack story really started with the IT transformation, where they needed a platform to be able to start using container solutions and so on; so we started with OpenShift and OpenStack at that time. Later on we had the NFV story, which started as well in the telco industry, and of course we already had the perfect solution for that: we had OpenStack, and the adoption of OpenStack went very fast with the different telco vendors. At that time we had a lot of discussions with our peers in other countries, in the Netherlands and in Germany, to make sure that we took the right decisions, and I must say we have been comforted in our decisions: everywhere we looked, it's the same type of deployment with OpenStack.

That has to be reassuring. I mean, we're in Berlin, but you're in Belgium, and just meeting with the countries next to you, trying to come up with similar strategies, sometimes trying to approach vendors together or with joint requirements, actually helps drive those to the community.
So that was really good to see indeed. Yeah, any questions from the team or from anyone present? Well, thank you very much, and thank you to Alan for joining.