Yes, raise your hand if you have a free seat next to you, so the people still walking around can find one. Keep it up, it's only an hour. Actually, the room runs until seven o'clock, so if you could keep your hand up until seven, that'd be great. Oh, sorry, it's 12 o'clock. Maxime, take it away.

It should be on, right? Can you hear me in the back? We're good? Good. All right. So this talk is about multicloud CI/CD with OpenStack and Kubernetes. If you're here for something else, well, I don't know.

I'll start by presenting myself. I'm Maxime, I'm a cloud consultant. I help companies with their public and private cloud projects. Usually it's a combination of OpenStack, Kubernetes, Ceph, some CI/CD, mixing it all together to accelerate development processes at different companies and let them use DevOps principles and so on. I'm also a contributor to all those projects: GitLab, Helm, and a few others. I think that's enough about me; you're not here to learn about that.

I'll get going with an introduction. The talk is about multicloud, CI/CD, OpenStack and Kubernetes. I'll go through those terms, explain what I mean by each, and why this might be of interest to you.

First, the multicloud aspect. By multicloud I mean running across several clouds. Kind of self-explanatory, but that's what I mean by it; other people might mean other things. Why might you want to do that? You could have lots of motivations. The most common motivation for multicloud is redundancy across several providers: you don't want to put all your infrastructure eggs into one basket.
You want to spread the risk across several providers, say provider one and provider two, so that if provider one has a huge global outage, you can still run your workloads on provider two without any impact for your customers. That's one of the motivations for multicloud: redundancy, and mitigating the global outages of cloud providers. We see them in the news sometimes, and it's no fun to have your applications go down because of somebody else's outage.

There's also the vendor lock-in aspect. If you're running with a single cloud provider, you're locked into their ecosystem. That might be AWS, or it could be an OpenStack-based cloud provider; you're still locked in with that provider, and that's a risk. Maybe tomorrow they do something you don't like, maybe they acquire a competitor of yours, or they start to increase prices. It's not a good position to be in, locked in with a single vendor. Having several cloud providers is the way to mitigate this and reduce the lock-in: you can scale down your infrastructure in one cloud provider and scale out into the other ones to switch over.

There's also a cost motivation. If one cloud provider has really good prices on one type of infrastructure, maybe VMs, and another has really good prices on storage or GPU instances, you can take advantage of the best-in-class providers for each workload: run some workloads on the one with good GPU instances and other workloads on the one with good storage services.

Lots of people also want a hybrid cloud strategy, where they have a private cloud deployment on-prem, usually OpenStack but it could be some other cloud platform. They run the baseline load on-prem and do cloud bursting into public cloud providers to handle spikes. Maybe your e-commerce website has a sales period around Christmas or something like that; you could handle those spikes in the public cloud, use it only when you need it, and that could reduce your infrastructure costs.

Multicloud is also about features. Some cloud providers don't have GPU instances at all; some have certain features and some have others. So you might be required to combine several cloud providers just because you need two features and no single provider has both. And in the same direction, there are locations. Maybe your application is very latency-sensitive and you want to be really close to your customers, some live streaming thing or a gaming platform, and there's not a single provider that has all the locations in the world. So you're going to need to combine different infrastructure and make a multicloud strategy out of it. Those are the motivations around multicloud.

Now, the CI/CD aspect. CI/CD is continuous integration and continuous delivery: testing the code continuously, and then continuously deploying it to production as well. That's what I mean by CI/CD. The motivation is to allow your developers to fail fast. You want to go really quickly between the time when you have an idea and the time you can realize that idea and potentially fail, because you don't want to invest years of development only to realize: oh, this is a bad idea, it doesn't work out. It's better if you can automate the whole process and realize that early on. So for CI/CD you will need a lot of automation; we don't want to rely on humans doing things.
We want to automate as much as possible. For instance, we don't want to involve the DevOps team or the ops people to deploy a new version of the software; this should be fully automated, so we can reduce the cost of failures and accelerate the time to production. This is really important when you're doing multicloud, because once you have more than one location, if you do things manually it's really difficult to keep consistency across those locations. You can very easily end up in situations where you make a typo, I mean, we're all human, we make mistakes, and you don't realize it right away, and you end up with bugs that happen only in that one location and not in the others. It's difficult to track down what's going on, and it's not fun for your customers. So it's really important to automate this whole infrastructure CI/CD thing.

Then we have OpenStack and Kubernetes. What is OpenStack, in case you don't know about it? It's an API-driven infrastructure platform. I see some people taking photos of the slides; there will be a link to the slides at the end, so you can just wait for that and download them later. So, back to OpenStack.
It's an API-driven open infrastructure platform. Basically there's an API, and you can boot up VMs in different locations. There are around 60 cloud providers around the world that offer commercial services with it, so you can go on their website, register, and they'll let you boot up VMs on their infrastructure, for a fee of course. And if you want to install OpenStack on-prem, that's also an option. Just to say that there's an ecosystem around this that's pretty large.

Then Kubernetes. Kubernetes is a container ecosystem, very developer-centric. Developers can run Docker on their computer, and then they can package the application on their laptop in the same way as what runs in production. That's the portability of using containers: it makes troubleshooting much easier, and you don't end up in situations where you have differences between your development environment, your staging environment, and your production environment. Since you use containers, you have this portability, and things are more consistent across your environments.

All right, so those are the definitions of the terms. This is a diagram of how it all fits together. On the left-hand side we have what I call the data plane. You have your users, they talk to your application, and this is where your business logic lives; this is your product, the thing you develop, whatever it is. Your application runs on Kubernetes in some container, and Kubernetes runs on OpenStack, just to clarify how the big pieces interact with each other. On the right-hand side we have what I call the control plane, and this is where your DevOps people, your developers, or your operations people work. They work against GitLab in this instance, the CI/CD platform, and it talks to the different components and makes the magic happen at the right time in the right place. Your developers push a new version to the CI/CD platform, and it makes it happen in OpenStack, Kubernetes, and the application, wherever it's needed. So that's the idea: container platform, infrastructure as a service, and business logic.

All right, that's the general picture. Now I'll go into the more technical aspects: the architecture, and how we go from the web browser to the global multicloud architecture. It's maybe obvious to some people, but your clients enter your application through some form of DNS name; that's the entry point they have. They type it into a web browser, DNS resolution happens, and hopefully at some point they reach your application. But since we want a multicloud architecture, we're going to need some form of global load balancing. This is really important, and it needs to happen really early on.

You have different options there. You could use a CDN; you might already be using a CDN for your application, for caching or for acceleration, so you could leverage features of your CDN provider to balance things globally. Some CDN providers let you set policies and routing rules and such. So that's one option. Another option is to do something at the DNS level, since DNS is the entry point here. There are three main approaches. You could use some form of geo-routing DNS; Route 53 and Dyn offer it as commercial services, and on the open source front you can do things with BIND and other geo-DNS-capable products. There are a lot of products that can help you, but you'll need a bit of hands-on work to get that going. The idea of geo DNS is to route the query to the closest server, or the closest cloud, to the user.
That works for most applications, but some applications require something else. Another option is DIY dynamic DNS updates: maybe you have a background process that updates your DNS records on the fly, based on whatever custom metrics make sense for your application. Maybe you want to do this based on load or something like that. It could be a simple cron job or something more complicated; it's up to you. And the simplest option is DNS round robin: you just set several A records for one DNS name, one per cloud provider, for instance. The thing to know about DNS round robin is that you have very little control over how your clients will interpret it. You can't really control how the load will be balanced; it's up to the client to handle the load balancing. So it doesn't work for all applications, but it's a solution that works for some, and it might work for you. It's very simple.

All right, so we have our application, the request goes through our global load balancer, and then we want it to hit one of our application containers. That's the next step in the chain. In this example we'll have three example clouds, one on the left, one in the middle, one on the right, and the request will hit one of our containers. But something we need to clarify is that for this to work, your application needs to have certain properties. This will not work with any kind of application; it needs to be some form of twelve-factor application, if you know the twelve-factor principles. The application needs to be containerized: we want to run this on Kubernetes, Kubernetes uses containers, so if our application is not containerized, we can't run it on Kubernetes. For most languages this is a solved problem; there are examples for all the main languages of how to dockerize applications, so that should be fine.
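To make the round-robin behavior concrete, here's a toy sketch. The IP addresses are made up, and real resolvers and HTTP clients each have their own record-selection strategy that you do not control; this just simulates many naive clients each picking one record:

```python
import random

# Hypothetical A records for one DNS name, one per cloud provider.
A_RECORDS = ["203.0.113.10", "198.51.100.20", "192.0.2.30"]

def pick_endpoint(records):
    # A naive client simply picks one of the returned records.
    # You cannot control which one a real client will choose.
    return random.choice(records)

# Across many independent clients the load spreads roughly evenly,
# but any single client may keep sticking to the same record.
counts = {ip: 0 for ip in A_RECORDS}
for _ in range(9000):
    counts[pick_endpoint(A_RECORDS)] += 1
print(counts)
```

With thousands of independent lookups, each record gets roughly a third of the traffic; per-client stickiness, which shows up later in the demo, is the bad case.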
I'm going to assume your application is HTTP-based, some form of web app with an API backend, something like that. If you're doing something else, it might work, it might not; it's going to be more complicated. And your application needs to be distributed in some shape or form. By that I mean the application has to be able to run in an active-active mode: several active instances working together. That's really important. So this doesn't work for all types of applications, but most, I think.

So we have our application layer, and our application layer runs on top of Kubernetes. Kubernetes is going to be our cloud abstraction: the Kubernetes API abstracts away the underlying hardware and the underlying cloud providers. Each cloud provider is unique, and here we have one layer that gives us one API that works across all of them, whether it's OpenStack, AWS, whatever. It's really, really important to do one cluster per location. Under no circumstances should you do one cluster across several data centers; that's a really bad idea in terms of failure domains. You can lose quorum very easily; it's just a source of problems. So: one cluster per location, even if it's a small location with just one server. Kubernetes can run on a single server, that's fine. In terms of Kubernetes features, we will use ingress controllers, since it's an HTTP-based application; that's going to be our reverse proxy layer.
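As a sketch of that reverse proxy layer, a minimal Ingress for an HTTP app might look like this. The host, names, and port are assumptions, and the API version matches Kubernetes releases of that era:

```yaml
apiVersion: extensions/v1beta1    # pre-1.14-era API group for Ingress
kind: Ingress
metadata:
  name: hello
spec:
  rules:
  - host: app.example.com         # the DNS name the global load balancing points at
    http:
      paths:
      - path: /
        backend:
          serviceName: hello      # assumed Service in front of the app pods
          servicePort: 80
```

The ingress controller running in each cluster turns this into its reverse proxy configuration, so the same object works unchanged in every location.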
And then we'll use some form of federation. Federation can mean several things depending on your background, and I'll cover two and a half of them. If you come from an OpenStack or IdP background, you're thinking mostly about authentication federation, a single sign-on experience where you can use one set of credentials or one token to access several platforms. This is possible in Kubernetes, and very much recommended as soon as you have several clusters, even if it's just test and prod. The most common ways to do it are OpenID Connect and webhooks, both supported by the Kubernetes API server. The webhook way is very simple: every time the API server sees a token it doesn't know about, it calls the webhook you configured, and that webhook is responsible for deciding whether the token is valid, and if it is, who it belongs to and which groups that user or service account is in. That's really useful if you have an in-house IdP or authentication solution and you want to integrate it somehow. The simplest way is really the OpenID Connect support in Kubernetes. You just have to set a few flags on your API server, the OIDC issuer URL and the OIDC client ID, and you're good to go on the server side. There are lots of OpenID Connect providers available: you can use Google, you can use GitHub, you can use GitLab, and there are plenty of online services. And if you're into self-hosting, you can use GitLab on-prem, Keycloak, or Dex. You're probably already using one of those things, and it's really nice to be able to just drop it in and reuse your existing authentication platform. On the client side it's a little more tricky, since the clients need to provide the token to identify themselves, and most people don't really know how to do that.
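To make the two sides concrete, here is a sketch of the server flags and a matching kubeconfig user entry. The issuer URL, client ID, and user name are placeholders, not values from the talk:

```yaml
# kube-apiserver flags (server side):
#   --oidc-issuer-url=https://gitlab.example.com
#   --oidc-client-id=kubernetes
#   --oidc-username-claim=email

# kubeconfig user entry (client side), the part a tool like Kuberos
# can generate for you:
users:
- name: alice
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://gitlab.example.com
        client-id: kubernetes
        id-token: <token goes here>
        refresh-token: <token goes here>
```

The API server validates the ID token against the issuer; the client config is exactly the part most users struggle to assemble by hand.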
So you could do it DIY: there's a `kubectl config set-credentials` command to set that in your client, with the OIDC auth provider, and then you need to pass it the refresh token, the ID token, and a bunch of other settings. Or you can use a web interface; one is called Kuberos, which I'll show a bit later, that generates your kubectl config file for you, and you just drop it in and that's it. That automates the process for your DevOps people or your Kubernetes API consumers, so it's really interesting.

Then you have another kind of Kubernetes federation, kubefed, which was called Ubernetes before, and this is a completely different concept. The idea is basically one API to rule them all: you have one Kubernetes API, and that API talks to a bunch of different clusters and makes things happen. It's a really cool idea, but it started out as Federation v1, which was discontinued last year and was supposed to be replaced by kubefed v2, which is a work in progress. So as operators we're kind of left in this in-between situation where we don't really know what to do. Should we go with v1, which works only with old versions of Kubernetes and isn't really supported? Or should we go with the in-development v2 and cross our fingers? It's a bit of a tricky situation, and that's how things are; I'm sure contributions are welcome in the kubefed project. In the meanwhile, I chose to do some DIY magic in GitLab to handle the several clusters, and later on, when kubefed v2 becomes ready, hopefully we can backport that and fix it. So that's that for Kubernetes federation.

And then we have OpenStack at the bottom. Well, OpenStack runs on some hardware, but we're talking clouds, so we abstract that away. In OpenStack terms, we're going to need some Nova instances, which are basically VMs; some security groups, which are basically firewall rules applied on each virtual port in Neutron; a set of key pairs for handling SSH access; and server groups. Server groups are somewhat optional, but if you're really serious about this, they're not optional. A server group is a mechanism where you tell OpenStack: this group of servers, I don't want them to run on the same hypervisor. This is maybe not something you think about from day one, but once a hypervisor goes down and takes all your etcd nodes with it, you notice. So think about that in advance; if you're deploying Kubernetes on any platform, really, you should be careful about those things.

On the Neutron side, that's the networking project in OpenStack, we're just going to use a network, a subnet, a router, and a set of floating IPs. Floating IPs are basically public IPs that are NATed in. So we're using really basic constructs in OpenStack, none of the fancy features. Keeping things simple: these are the key features of OpenStack, the most mature features, so that's what we use.

All right, that's it for the architecture. That's a lot of pieces, so we're going to need some tooling to make it all happen in an automated fashion. I'm going to present a set of tools; these are the ones I think are most popular. I'm not endorsing one in particular; if there's another tool you like more, cool, that's fine. Some people are very opinionated about tools, so: whatever works for you, that's great.

I'll split this into two big sections. First, the infra tools. Those are the tools we'll use to manage the VMs. We need to create a bunch of VMs, and we want some tooling for that; we don't want somebody sitting in the web UI or in the CLI typing `openstack server create` or whatever. I'll start with the OpenStack-native tool, which is called Heat. That's the OpenStack project that handles automation: you pass it a file describing how you want your infrastructure to look, and it makes sure those VMs and networks and floating IPs become available to you. This works very well. The two downsides: it's a smallish ecosystem, so it's difficult to find examples on GitHub or wherever you look, you need to do a lot of DIY, and you need real expertise in Heat. And it's OpenStack-only; you can't use Heat on Google Cloud, so you'd need another tool anyway for that. What I'm saying here is: if you know Heat, if you're using Heat, if you're OpenStack-only, it's probably a good choice. But if you want to do multicloud across different providers, different software platforms like OpenStack and CloudStack and Google Cloud, you're probably out of luck.

Ansible is another option, and there I'm talking specifically about the Ansible cloud modules. Ansible is really big; I mean only the cloud modules, things like the `os_server` module, which let you talk to cloud APIs through the general Ansible module mechanism. That ecosystem is a lot more mature: it has support for AWS, Google Cloud, OpenStack, VMware, and a bunch more platforms. And you probably already know Ansible, and probably already use it in some places in your company, so that could be a nice choice depending on your team and what they know.

The last tool is Terraform, and Terraform is a tool that is designed for infrastructure management.
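As a sketch of what Terraform looks like against OpenStack, here is a minimal instance definition using the OpenStack provider's compute resource; all the names and sizes are assumptions for illustration:

```hcl
resource "openstack_compute_instance_v2" "k8s_node" {
  name            = "k8s-node-1"
  image_name      = "ubuntu-18.04"   # assumed image name
  flavor_name     = "m1.medium"      # assumed flavor
  key_pair        = "deploy-key"
  security_groups = ["k8s"]

  network {
    name = "private"                 # assumed Neutron network
  }
}
```

Running `terraform plan` shows exactly what would be created or changed before `terraform apply` touches anything, which is the review step that sets Terraform apart.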
Whereas Ansible does lots of things, and infrastructure management is one of them, Terraform does infrastructure management only, so it takes a bit of a different approach, and that's visible in its features. For instance, when you run Terraform it tells you "this is what I'm about to do, are you really sure this is what you want?", so you can review the plan. In Ansible it's not so easy to make that kind of prompt and plan. And since Terraform is focused on infrastructure management, it supports a huge number of platforms, so if you're on some exotic platform, this is probably a good way to go.

All right, so now that we've chosen a tool for the infrastructure management, we want to install Kubernetes on top of it. I'll start again with the OpenStack-native tool, which is Magnum, an OpenStack project. It has roughly the same downsides as Heat: if you're OpenStack-only and you know OpenStack really well, it's a good way to go. It uses Heat in the background to do its work, and it works really well, but it's a small ecosystem. Last I checked, there was no Ansible cloud module for it, so you need to double-check those things.

There are other tools available to install Kubernetes. You have kops, for instance, and that one is not OpenStack at all, it's AWS-only. So you could use different tools: maybe Magnum for your OpenStack stuff and kops for your AWS stuff. Same thing with Rancher. Rancher doesn't really have native OpenStack support, but it supports AWS, Google Cloud, VMware, and a few other platforms, so you could also combine Rancher with Magnum if you want. And lastly, Kubespray is also a popular deployment tool for Kubernetes, and it supports OpenStack, AWS, Azure, bare metal, vSphere; it really doesn't care where your VMs are or what your operating system is. It's just a set of Ansible playbooks; it needs an IP address with SSH open, and that's all it cares about. So it's one of the most common ways if you want to do Kubernetes on bare metal, for instance. The nice thing with Kubespray is that it comes with a built-in set of Terraform recipes, or plans, that you can use directly for AWS and OpenStack. So if you don't know Terraform, you can just use those and get going that way.

That's it for the infra tools; now the CI tools. We're going to need some CI tooling. I talked about GitLab before, but if you use Jenkins or whatever, that's fine with me as well. I'm just very familiar with GitLab, and I like the integration with the Git repo directly in the merge requests; it's an all-in-one solution, which I think is kind of neat for the CI part. But if you want to use Zuul or Jenkins or some commercial solution, that's fine too. In terms of your application pipeline, you're going to want something like this: run some checks, to make sure the commit makes any sense, is this even worth compiling at all; then a build, a `docker build`, which will probably compile the code and produce a container image; then run some tests, to make sure that container image starts and has a web server listening on whatever port it's supposed to expose; and then deploy it to whichever environment it's supposed to run in. If you're on the master branch, you deploy to your production environment, and if you're on a topic branch, you deploy to your test clusters. So that's the CI aspect.

There's also a new trend, a new set of tools starting to emerge: the GitOps tools, things like Weave Flux and Argo CD. I didn't have time to include them in the demo; maybe some other time.
It's a different concept: you have an agent running in your cluster, and it asks some central Git repo "should I apply something?". So whereas in the CI approach your CI contacts your cluster, with the GitOps tools your cluster contacts your CI. That's interesting for people who don't have public IPs; maybe they're running Kubernetes clusters in shops with dynamic IPs or whatever. So, different tools at your disposal.

Right, so that's it for the tools. Now it's demo time. All the source code of the demo is available at the URL below, gitlab.com/multicloud-openstack-k8s. Have a look there if you want; the link is in the slides. I talked about a lot of different solutions and options, but for the demo you have to choose something, right? I selected DNS round robin, because I don't have access to a CDN, and this is what I had access to. I used Kubespray, I used Terraform, and for Terraform I used the recipes, the plans, that are built into Kubespray. I used GitLab CI with what they call the Auto DevOps feature, which is basically a wrapper around `docker build` and `helm install`; if you're using any other CI tool, that's cool, just do something similar. And the demo, if it works, runs on 27 different data centers operated by 18 different companies, and they run OpenStack versions from Havana, which is three or four years old, to Rocky, which is the latest release. So it's a huge variety of data centers and versions, and the idea is that if there's a bug in one version that affects you, hopefully it doesn't affect everybody. Again, the source code is linked there. I need to say a big thank you to all the cloud providers that are participating; all those companies are providing resources to make this demo happen. Big thank you to them; without them it wouldn't be possible.

Right, so I'm going to switch windows, and I hope it will be visible in the back. If you go to the link that was in the slides, where the source code is, you'll find four repos. One is called docker-kubespray-ansible; that's basically a utility image, nothing special, just pre-installed tools to make the CI pipeline go faster. Then there's auto-deploy-app; that's a Git repo containing what's called a Helm chart, a fork of the upstream GitLab one with a couple of modifications to make it multicloud-compatible, so very few modifications there. And the two most important repos are app and clusters. App is my demo app, just a hello-world thing I wrote, and clusters is the repo that handles the infrastructure-as-code management of all the Kubernetes clusters across the 27 locations, plus some other things. Right, so, it wants the token, that's fine.

The app. This is the hello-world app: it just says hello world, and it shows the city that is serving the request. Right now we're hitting Gravelines, in the north of France, and if we refresh, we might hit some other location. There's some feedback going on, no? Like I said, since it's doing DNS round robin, it's completely up to the client what happens with those records, and on a Mac it natively really sticks to the one record it has, so it was sticking to Gravelines; my VPN goes through something else, and now we're hitting Amsterdam. So DNS round robin is really just for the demo; probably not a great idea to use it in production, especially with the limitations in this example.

This is the app repo; it contains a Dockerfile and app.py. The Dockerfile is really simple: we just install the requirements from pip, and then we run gunicorn with our app. The app is just, what, 50 lines or so; it talks to some API to fetch those nice images based on the location, and then renders the page. The HTML comes from the templates folder, which uses Jinja.

All right, that's that. I'll go over to index.html, hit edit, and change "Hello World" to "Hello FOSDEM 2019", and I will commit to master, which you should not do, but we don't have time in the demo to run a topic branch and merge it to master later. "Hello FOSDEM"; at least we have a commit message. So, just follow your development workflow there: topic branch, then merge request, a second pair of eyes for review, and then merge. But it's a demo. Then I'm going to hit the pipelines page, and we have our GitLab CI going. We're running build and test at the same time; you'd probably want some pre-flight checks before, but I've narrowed the pipeline down to as short as possible to keep the demo a reasonable length. Then we have all the CI jobs to deploy to all the different locations. I made a small web app to visualize that: currently everything is green, and things will start to change colors as our Docker image gets built and we start deploying to the different locations. This takes a few minutes.
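As a rough sketch of what such a pipeline looks like in GitLab CI terms (the real demo uses the Auto DevOps wrapper instead; the script names here are hypothetical, the `$CI_*` variables are GitLab's predefined ones):

```yaml
stages: [check, build, test, deploy]

check:
  stage: check
  script: ./scripts/lint.sh            # hypothetical pre-flight checks

build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

test:
  stage: test
  script: ./scripts/smoke-test.sh      # does the container start and listen?

deploy-production:
  stage: deploy
  script: helm upgrade --install app ./chart --set image.tag="$CI_COMMIT_SHA"
  only: [master]
```

In the multicloud setup, the deploy stage fans out into one job per location, each pointed at a different cluster.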
So keep that in mind and I will demo another thing while this builds All right, so we're not stuck here for a long time So I will try to move this over here and then Resize this like so All right, so now we'll talk a little bit about the Cluster management repo so there's a cluster of repo that have Really two scripts up to the sage and download a sage up to the sage Boots up a new clusters with terraform and then runs kubespray to some communities there and download a sage destroys everything in that data center really simple stuff and Then we have the CI there as well Yeah, so you can see on right-hand side that stuff is being deployed as we speak, right? So the pipeline for clusters is very simple on the left-hand side you have CI job to deploy clusters and on the right-hand side to destroy them so I just click this button to Re-run the CI there and install it and this one to destroy it All right, so stuff is still being deployed now. I will show the The Cuburals thing and how we can see the Kubernetes API So I'll come back over there in clusters repo there is a Folder called kubros and then we'll just do kubros sage This will run kubros locally on my machine, but you should really run this as a web service somewhere And I'll copy this We'll open it That's no good. All right, let's disable this maybe the modify role, right, so Then I hit the single sign on or the IDP identity provider here is get lab.com and you ask me Are you sure you want to give a talk into the Kubernetes demo? Yes, and Then it tells me everything is good. 
You can download the config file. Then I move the config file from my Downloads folder into ~/.kube/config. Then I can do kubectl config get-contexts, and we can see all the different datacenters we have. Then we can do kubectl get pods, with -n app to select the namespace in Kubernetes, and --context london. With this we should see the pods running in London, and now let's see the pods running in Amsterdam. And this is, you know, all from one CLI; I don't have to configure much.

Something we can try is to delete this pod here. I've set some RBAC policies so that this access is read-only, so it says no, you cannot delete the pod. This is just an example of RBAC policies in Kubernetes: you could say this person is allowed to delete stuff and that one is not.

Anyway, that's the kubectl and Kuberos thing, and now we can come back here and see all our deployments are good; it's all green. Perfect. And if I refresh this, now it says "hello FOSDEM" in Oslo, and now we hit Berlin, and you get the idea: San Jose, and so on. I kind of focused on Europe, but yeah, we have datacenters in Tokyo and so on as well. And that's why DNS round-robin kind of sucks: the time when you hit Tokyo, you get very high latency. But you know, that's that. And for the last part of the demo, since we still have nine minutes, I can close that San Jose datacenter.
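The read-only access that blocked the pod deletion corresponds to standard Kubernetes RBAC objects roughly like these. The namespace, role names, and user identity are assumptions for illustration:

```yaml
# Example read-only RBAC policy: the bound user can list and watch pods
# but gets "forbidden" on kubectl delete pod. Names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # no "delete"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: app
  name: read-pods
subjects:
  - kind: User
    name: demo-user                   # assumed identity from the OIDC login
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Granting another user delete rights would just mean binding them to a role whose verbs include "delete".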
I will just go here and destroy it. So I click this button, and now it starts a CI job that will first go to my DNS provider to remove the DNS record from the round-robin, then wait 60 seconds for the TTL to pass, and then terraform destroy, and the cluster is gone. Then we can refresh the page and the client will just move on to another provider. Now we're waiting for the DNS propagation, and that takes a few seconds, like I said.

And that's kind of it. I'll just finish up the slides; there's just a conclusion slide, and then we have time for questions.

So, to wrap things up. KubeFed v2 is coming, and the situation where v1 is out and v2 is still in progress is kind of a pain. OpenStack interoperability, or interop, is really hard. I used quite a few providers, and each of them is unique in their own way, and it's really difficult to identify all the ways they are unique and manage those things. Some will have firewall rules for you, some will have support for Neutron routers, some won't; some will have support for floating IPs, some won't; some will have different operating system images where they bake in some stuff. It's the last slide, so, you know, you can wait five seconds. Some will have custom operating system images: they will not use the upstream Ubuntu image, they will just build their own with a bunch of stuff in there. Different VM sizes. And also keeping those things working over time.
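The teardown flow from the demo, remove the DNS record from the round-robin, wait out the TTL, then terraform destroy, could be sketched like this. "dns-cli" is a placeholder for whatever the DNS provider's API or CLI actually is, and the commands are only echoed so the sketch is safe to run:

```shell
#!/bin/sh
# Hypothetical sketch of down.sh: drain traffic via DNS before destroying.
# "dns-cli" is a placeholder, not a real tool; commands are echoed, not run.
set -eu

DC="${1:-san-jose}"
TTL="${TTL:-60}"    # matches the 60-second TTL mentioned in the demo

remove_dns_record() {
  # Take this datacenter out of the round-robin at the DNS provider.
  echo "dns-cli delete-record --zone example.com --name app --target $DC"
}

wait_for_ttl() {
  # Give cached DNS answers time to expire so clients stop arriving.
  echo "sleep $TTL"
}

destroy_cluster() {
  # Tear down everything Terraform created in this datacenter.
  echo "terraform -chdir=terraform/$DC destroy -auto-approve"
}

remove_dns_record
wait_for_ttl
destroy_cluster
```

Ordering matters here: destroying before the TTL expires would leave clients resolving to an address that no longer answers.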
Sometimes they just change IDs or change the version of this and that, and it can be tricky. So if you want to do this multicloud thing, just think about whether you're going to work with exceptions or use the common denominator across all your providers, all your clouds. That's really a per-cloud, per-application decision. Too many exceptions will add a lot of work for you, but the common denominator might not be good enough, so you have to look at it on a case-by-case basis.

And kind of my conclusion is that we need to start looking at cloud providers as cattle. Before, we would treat servers as pets: we'd have our pet server and, you know, we would fix it. Now it's time for clouds to be like that. Oh, Amazon is down? I just destroy everything I have there and move on to the next one. I don't really care about any one specific cloud provider.

And that's it. Here are the links to the slides, and thank you for your attention.

Thank you very much.