Hi, welcome to the Jenkins cloud native sync. This morning we had an excellent meetup, with a demo and discussion on the Tekton Client Plugin by Vibhav and Gareth. That was really amazing, and it will likely be online very soon, if not already, on the Jenkins YouTube channel. This can be a follow-up discussion, as well as a discussion of some things that were mentioned in that meetup, like mink, which I have many more questions on. And by all means, please ask any questions you have about any cloud native Jenkins topics, but especially anything we've been discussing recently, say around CloudEvents or the Tekton Client Plugin. I'm actually just looking at mink right now. So you asked how we could probably use this. It seems like it's a controller of its own. I'll just share the link with you. Great, thank you. I don't know how it could be used exactly, because the only thing I know about it is what James Strachan said, which was that mink was able to build multiple images in parallel. Yeah, it's something that I wasn't really aware of until James mentioned it this morning. It's just interesting because they're quite ambitious on their repo. They say the goal of mink is to form a complete foundation for modern application development which is simple to install and to get started with. That would be fantastic. And I'm just wondering about that "complete foundation for modern application development": do they mean the entire application lifecycle? Yeah. And then, would it be something like James was saying, how it would be interesting to have something at that level, and he was thinking Crossplane. I'm just wondering how all these tools either play off each other or work together, or how one would compose them. Is it like a wrapper for Tekton? Is that the idea? A wrapper for Tekton and Knative? Yeah, it seems like it uses Knative and Tekton to do what it does. I'm still not sure.
I mean, the mink CLI looks very similar to the tkn CLI in terms of what you can do with it. It's very cool. I'm sorry, I've kind of derailed this discussion already at the beginning. Similarly, I'd never heard of it before James Strachan mentioned it. Although I looked at the link they put up for the demo and I actually thought there'd be more. And James Rawlings is in there; I can see his image, he's sitting there watching the presentation. I'm going to spend some time looking at what mink is and see how it integrates with our stories with Tekton. I think some good demos would be really handy with that, or repos that you can play with. I was thinking about doing one sort of Helm file, like that ci.jenkins.io repo that I have, but cut down: get rid of all the stuff that we don't need, avoid using a custom image, and just have the bare bones kind of stuff that you would need to run Tekton and Jenkins together. That would be really good, including the workloads and the role bindings. That would be nice. But even once that's there, having some example projects that you could import, so a repo with a Jenkinsfile and some Tekton resources, that would be great. And then the more advanced stuff, like stuff using the uses syntax, would be really nice. The uses syntax without having to have a Jenkins X installation would be quite cool, because that's where I kind of fell over on it this morning. But we should be able to do that. The uses syntax doesn't come directly from Tekton, is that correct? It comes from Jenkins X. It's like another pipeline inside. It's the jx effective pipeline stuff that manipulates the pipeline when you enable it, and it can read a pipeline from another place, which would work out of the box on a Jenkins and Tekton sort of installation.
But if you want to reuse the Jenkins X pipelines, you actually need a Jenkins X installation there, because it assumes that all your service accounts are set up in the right way and that you have the right CRDs in the cluster, so that it can get the correct SCM connection and all that kind of stuff. At the moment it's really just a way of controlling Jenkins X pipelines from Jenkins. It works really well for that, but if you want to use it in a non-Jenkins-X scenario, if you want to replicate what a shared pipeline library would be doing, that would be a really good example to have. Because it should work. Nice. I agree with your first point too, on the importance of having example, not quite tutorials, but little example demos for people to spin up themselves. Any time you're asking for that engagement, it's nice to make it super easy for people to get into it. And then hopefully we can get more people involved too. Yeah, the other thing I was thinking is something like a quick 101 on how to debug when a Tekton pipeline doesn't start. Oh, I think there was a talk recently on debugging your Tekton pipelines. Oh yeah, I was part of that; it was Vincent and I who gave the talk. But it was about a new feature in Tekton with which you'll be able to stop the execution of a pipeline or a task run, and you should be able to kind of hijack a container and then do some things over there and figure out why stuff went wrong. That's the kind of debug it was. But I think the debug that you're talking about is more the YAML kind of debug, in a way. Yeah, so quite often you might get a task run that's been created, and for some reason the pod can't be created from it. Say you're trying to mount a workspace or a shared volume or something, or there's a secret that's referred to in there. At the moment the feedback into Jenkins isn't brilliant.
It's quite good, but quite often you just get a null pointer, or you just get an exception that's thrown and it says "I can't find this pod". What we're not doing is displaying the status coming back from the task run that actually failed inside, and then linking it through to the pipeline. So there's definitely either some improved logging that we could do, or we need a little 101 on how we would expect someone to go and debug this if they've never dealt with it before. Does it make sense to do some pre-checks? Parse the pipeline run or task run we are using, extract all the volumes that are mentioned there, and then just query them once before calling Tekton. So creating the Tekton resource would be a step we do after validation, and at that point it can fail before actually creating the Tekton resource, by saying that this resource was mentioned in this task run but it's not available on the cluster, rather than failing at execution. Yeah, so if it fails when the pipeline run is being created, you get quite good validation at that point. If the pipeline run fails to create, there's one of those admission-webhook-type things that validates it and checks it's all there. So you will get that feedback, and you should get the status and the reason from the extra state fields back into the Jenkins console. But does the webhook validation check if the resource is present? Like if a secret is present? This is something that might have to be implemented on the Tekton webhook side. Yeah, I'm not sure if it does that at that level, but I suppose it would need to know whether a secret exists and you have permission to read it. I think it would be really nice feedback to be able to give that information. Would you like to do a demo, a walkthrough of how to debug this? Not necessarily right now.
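The pre-check idea could be sketched roughly like this. This is an illustrative simplification, not the plugin's actual code: the class and method names are made up, and a real version would list secrets through the Kubernetes client rather than take a plain set of names.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical pre-check sketch: before asking Tekton to create a resource,
// compare the secret names the TaskRun references against the secrets that
// actually exist in the namespace, and report the missing ones up front,
// instead of letting the run fail later at execution time.
public class SecretPreCheck {

    /** Returns the referenced secret names that are not present in the cluster. */
    public static List<String> missingSecrets(List<String> referenced, Set<String> existing) {
        List<String> missing = new ArrayList<>();
        for (String name : referenced) {
            if (!existing.contains(name)) {
                missing.add(name);
            }
        }
        return missing;
    }
}
```

A non-empty result would let the step fail fast with a message like "secret X is referenced by this task run but not available on the cluster".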
I mean, once we've worked out how to debug it. There are also other issues at the moment. Yeah, that would be cool. Should I add it to the HackMD that we have? I have it open here. Oh yeah. Great. The other piece I noticed: I was running some of these pipelines the other night from my Airbnb with a very slow internet connection, and the first time you pull an image down they were taking more than 60 seconds. So even though the task run was in a pending state, the pod hadn't started by that point, and Jenkins was saying that the pipeline had basically failed because it hadn't started within a certain timeout. But that's not necessarily the case; it did eventually start and recover. So there are definitely some improvements we could do to the way that we log there. How would you go about improving that logging? Do you want to talk about it? I think at the moment what we're doing is, well, the test cases that I've been playing with have only really been around pipeline runs; I haven't done any task runs directly, so it may be a bit different. But we get a pipeline run, and then we get the UID that comes back with it, and then we look for task runs that have been created by it, I think by checking where the owner reference is set to the right thing. Then we loop through those and try to discover a pod for each of them, and then we kind of wait for it to start. Which I think is probably the right way to do it, although it may be worth looking at the logic that the tkn client uses, because that actually seems to follow it quite nicely. Like it pauses and waits for the next one to start.
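The discovery step described here, matching task runs back to their pipeline run by owner reference, could be sketched like this. The `OwnerRef` and `TaskRunInfo` records are stand-ins for the fabric8 model classes, so this is a simplified illustration of the logic only:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch: given the UID of a PipelineRun, keep only the TaskRuns
// whose ownerReferences point back at that PipelineRun.
public class TaskRunDiscovery {

    public record OwnerRef(String kind, String uid) {}
    public record TaskRunInfo(String name, List<OwnerRef> ownerReferences) {}

    /** TaskRuns owned by the PipelineRun with the given UID. */
    public static List<TaskRunInfo> ownedBy(String pipelineRunUid, List<TaskRunInfo> all) {
        List<TaskRunInfo> owned = new ArrayList<>();
        for (TaskRunInfo tr : all) {
            for (OwnerRef ref : tr.ownerReferences()) {
                if ("PipelineRun".equals(ref.kind()) && pipelineRunUid.equals(ref.uid())) {
                    owned.add(tr);
                    break;  // one matching owner reference is enough
                }
            }
        }
        return owned;
    }
}
```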
And even the way that it prefixes the log output with the name of the task run and, I think, the container, or it might be the task in the pipeline or something, so you can see exactly where things are coming from. Maybe we should use a similar log format for that. I think that would be quite cool. I mean, you could even use different colors to prefix each pod or each container; that's how tkn does it, just to make it more obvious when control has switched from one to another. I lost you on the control-switching part. Is this about switching control from Jenkins to tkn? No, this is about the way that we're streaming the logs to the Jenkins console at the moment. I think the logic isn't quite right, and there are certain instances where it thinks the pod has not started or has failed, because we're waiting for a pod to start rather than checking the status of the task run. So there are instances where we loop for 60 seconds when, if you look at the state of the pipeline run, it has already failed; no task runs have been created at that point. So I think it's just tweaking the looping logic that we have there. And then the other part was, when we write to the Jenkins log, prefixing the log statements with the same sort of information that the tkn client does. Because that's really nice and simple: you can see it's gone from this part to this one to this one, done. Great. Right, I didn't know about that. And with the color coding, your eyes can group things together. Yeah, it's very clean. I think we could do something very similar to that. I do like how you're given the Tekton prefix when there's a Tekton log in the Jenkins console.
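The looping fix being described, checking the run's status each iteration rather than only waiting for a pod to appear, could look roughly like this. The `Status` enum and the supplier are illustrative, not the plugin's real API; the point is that a run that has already failed is reported immediately instead of after the full timeout:

```java
import java.util.function.Supplier;

// Sketch of status-aware polling for a PipelineRun/TaskRun.
public class RunWaiter {

    public enum Status { PENDING, RUNNING, SUCCEEDED, FAILED }

    /**
     * Polls until the run reaches a terminal state or maxPolls is exhausted.
     * A real implementation would sleep between polls and read the status
     * from the TaskRun's conditions via the Kubernetes client.
     */
    public static Status waitForCompletion(Supplier<Status> poll, int maxPolls) {
        Status last = Status.PENDING;
        for (int i = 0; i < maxPolls; i++) {
            last = poll.get();
            if (last == Status.SUCCEEDED || last == Status.FAILED) {
                return last;  // terminal: stop looping instead of waiting out the timeout
            }
        }
        return last;  // still pending/running after the timeout window
    }
}
```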
That was something I put in as a debug statement initially, just because I couldn't work out where the logs were coming from, and I found it was really useful. I liked just being able to see it. What would be nice is, once it shows Tekton, maybe something like how the tkn CLI does it: it shows the container, well, not the container, the step name, and then the log itself. Yeah, is it a task and then the step name? Is that what it shows? It just shows the step name, in square brackets. But when you want to check the container, the container is named step-<step name>; that's how the container is named. Yeah. Right now we are using owner references directly: we check if there is an owner reference for a pod, and if it has the task run owner reference, and based on that we follow it. We're not checking for status right now. It would be really nice to get some tests in there that cover a multi-pod pipeline as well, because at the moment all the ones that I have are just single pods. It's just simple hello-world stuff that checks that it runs. I think I've got one for a slightly delayed startup, but that's about it. It would be really good to have those kinds of edge cases and more complex pipelines in there. These tests wouldn't be unit tests, right? They'd be kind of end-to-end tests. Yeah, although they do run as part of the JUnit, normal unit test phase within Maven, because they're using the fabric8 mock Kubernetes stuff to do it. I've noticed one thing in the fabric8 objects, I don't know if they are the classes, but they don't have all the parameters given for a CRD. Like, some of the newer parameters are missing. Okay.
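The step-name and prefix conventions mentioned here can be captured in a small sketch. Tekton names a step's container `step-<stepName>`, and the bracketed task/step prefix below is an approximation of how tkn labels streamed log lines; the class is illustrative, not part of the plugin:

```java
// Sketch of tkn-style log labelling for the Jenkins console.
public class LogPrefix {

    private static final String STEP_CONTAINER_PREFIX = "step-";

    /** Step name recovered from a Tekton step container name, e.g. "step-build" -> "build". */
    public static String stepNameFromContainer(String containerName) {
        return containerName.startsWith(STEP_CONTAINER_PREFIX)
                ? containerName.substring(STEP_CONTAINER_PREFIX.length())
                : containerName;
    }

    /** "[task : step] line" — roughly how tkn labels each streamed log line. */
    public static String prefixed(String taskName, String containerName, String line) {
        return "[" + taskName + " : " + stepNameFromContainer(containerName) + "] " + line;
    }
}
```

With this, a line streamed from the `step-build` container of task `compile` would appear as `[compile : build] …` in the console.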
That was also one of the reasons why we dropped the custom task creation that we had before; it seemed like a better choice to just do YAML at that point, because the YAML is then passed directly through to the controller. I'm assuming that the fabric8 test client, or whatever, is generated rather than handcrafted. Is that the case? I'm not sure. The test client, or the main library as well? Both, really; there's the Tekton client sort of main library thing as well. The CRDs especially, I'm not sure if they are. Let me just check and see if they have something on that. I haven't checked to see if there's an update in the dependency version, actually; there might well be one as well. Was there any non-Tekton-client stuff that we wanted to talk about? Yeah, there's been a lot of Tekton. I guess my main update would be that everything is going extremely well. Well, I don't think I can say that it's all going well. There will be cloud native stuff happening, announcements on Monday; we'll leave it at that. And I think that whatever we choose to do with CloudEvents integration with the Tekton client stuff will be really good as well. That could be really interesting: triggering pipelines, or even just getting notifications that stuff has run, like a deployment or whatever taking place. Yeah, in the Tekton logs you get all of the CloudEvents stuff in there already. You can see it wants to send them somewhere. Not sure why. Okay, do you think it would just be a matter of installing both plugins and setting them up to work together, or will this evolve into something where naturally you would use them all together, so we'll make them one?
I'm just curious. I think you'd probably keep them as separate plugins, in terms of how it works. I mean, yeah. I think the difficulty with CloudEvents is going to be that when you've got two things you want to integrate, that's really straightforward. But as soon as you start having more than two, are you configuring things as point-to-point communication channels, rather than having something you can broadcast events to so that multiple things get those feeds? That may be one of the issues. I know we used, it's not CloudEvents, but a similar sort of thing in SDM, using Segment as a method of doing that. What actually happens is it has this reliable webhook delivery, but only if the first receiver, or only if one of the many receivers, successfully receives the message. So if one passes and the other four or five fail, you never get a retry on those others. And I understand why it's quite a difficult thing to do: if you've got 10 or 20 or 30 different endpoints that you're trying to relay to, remembering which ones have accepted which messages, and which ones you need to store up for whom, you have different retry lists, basically. At scale that could be a lot of data. This problem is going to be the main problem to solve when it comes to CloudEvents, because the initial stuff is pretty easy. Implementing the HTTP request stuff through which the CloudEvents are going to flow, that stuff is easy, but how it will work with scalable infrastructure, that's going to be a lot of setting up and trying out a lot of different things. Yeah.
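The bookkeeping problem described above, one success must not hide the failures of the other receivers, can be illustrated with a tiny sketch. This is not how Segment or any real broker works; it just shows the per-endpoint retry lists the discussion is asking for:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

// Illustrative fan-out bookkeeping: each endpoint gets its own retry queue,
// so a delivery that succeeded for one receiver doesn't mask the four or
// five receivers that failed.
public class FanOutRetry {

    private final Map<String, Queue<String>> retryQueues = new HashMap<>();

    /** Record a broadcast attempt; failed deliveries are queued per endpoint. */
    public void deliver(String event, Set<String> endpoints, Set<String> succeeded) {
        for (String endpoint : endpoints) {
            if (!succeeded.contains(endpoint)) {
                retryQueues.computeIfAbsent(endpoint, e -> new ArrayDeque<>()).add(event);
            }
        }
    }

    /** Number of events still awaiting retry for this endpoint. */
    public int pendingFor(String endpoint) {
        Queue<String> q = retryQueues.get(endpoint);
        return q == null ? 0 : q.size();
    }
}
```

Even this toy version makes the scaling concern visible: with many endpoints and many events, the retry state grows multiplicatively, which is exactly the "a lot of data" problem mentioned above.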
If either of you know, what does the CloudEvents community say about this? I would imagine they're thinking about this problem; do they have any forming best practices, or particular practices? I think they do have best practices. I haven't joined it, but I think it's high time at least one of us did. I did join the events SIG last week, or was it this week? Oh, I didn't mean that; I probably said that wrong. Oh, I have no idea. I have too many meetings. I'm just sweating thinking about the meetings. Yeah, but it makes sense to think about the best practices around them and see how people are already doing it. It's probably something we'll have to look into. So I joined the events SIG at the CDF, and you were also there. There's this work on building a mini prototype for how CloudEvents will work, and I think, as the discussion on the prototype begins, there might also be discussion around how this can be scaled. That discussion had kind of already started when Four Keys was being discussed: how can you manage so many requests at one time? It was probably easy to do with BigQuery, but it might not be as easy to implement something similar on a smaller scale. But yeah, it's going to be something to think about in the next few weeks. What are your best resources for considering this problem? Are there any go-to learning resources that you prefer? For CloudEvents? For CloudEvents, but also for understanding this larger problem. So you have an event-driven system: how do you handle it at scale? How do you handle where you're storing your events? How are you ensuring that everybody who subscribed receives them? Where are you putting the responsibility for that?
Do you have resources on who is working in that area? What books are good? Things like that. I actually haven't worked with events that much; I've only worked with them at a very prototype level, and that too just with CloudEvents. But with this prototype that we are doing for events, I'm hoping to learn more. I'm hoping that I get more used to the eventing architecture itself. So I'm looking forward to the weekend, so I can just put up a new POC. Nice. But what about you? When it comes to eventing architecture and stuff, how do you get things in perspective? Because it can be overwhelming to think about streams and stream processing and how everything will work together. Yeah. I mean, for me, the individual I go to to try and understand more, my go-to for that, is the work of Martin Kleppmann, because he just explains things very well. But even with that, it's so broad and there are so many possibilities. It's really hard determining what the best practices would be. And it feels like a very quickly evolving space, which is exciting. It also feels nice that way, because you realize that a lot of people are still trying to figure this out. So there's that sense of being like, okay, we're all kind of working together to try and find out what would actually, in practice, be most scalable, most reliable, things like that. There's also the scalability aspect of it all. Initially, if you start a new project with this kind of stuff, there are loads of different ways you can do the same thing. If your data set is relatively small, it doesn't really matter; optimization isn't going to make any difference, or you might save a second or so.
But as soon as your data sets start to get bigger, it becomes really difficult. When you can't process it on a single machine, it's a problem. Even just the Jenkins open source statistics: you can't process them on a brand new MacBook Pro, it's too much data. Right. So you have to treat how you handle the data and how you process it in a different way, and that's when it all becomes important. And I guess something like, not that we're using it for that, but BigQuery would be really great for processing large batches. Yeah, it is really good for that kind of stuff. It can be very expensive, though. They have quite a nice charging model. In MySQL I can go and do an explain on a query and see if it's correctly using the right indexes and stuff, but generally it doesn't really matter. If you do the same in BigQuery, it tells you exactly how much these queries cost each time you run them. And it's like, I could tweak it slightly and save myself pounds or, you know, dollars each time I run that query, sometimes just by reordering some of the criteria. Are we talking about BigQuery? Which one did you say? Yeah. That's what Four Keys is based on, like the demo. I've never used BigQuery myself, but it's a data processing tool, right? And it's GCP. So yeah, they're doing a nice job of that. Four Keys is open source, but they're showing it on very GCP-specific platforms and tools. Yeah, it's just GCP; you can just use BigQuery with the other stuff, it's already there. So when we talk about CloudEvents integration with Tekton, I was initially thinking that we could probably start with an event listener thing, and then as the CloudEvents plugin project goes forward, we could think of how to integrate it.
But I think a good place to start would be being able to just create an event listener in Tekton. When you mentioned that, Gareth, in the what's-next part, were you thinking of something similar? Yeah. I mean, I was looking at the fact that because Tekton kind of has these events inside it internally anyway, it would be really nice to try and get them out of there: either send them to something, or get the events out into something that can process them. That would probably be a good initial step. And I suppose once the CloudEvents plugin for Jenkins is there, you've got quite an interesting circular thing going on, where there could be jobs completing inside Tekton that could trigger other things; you could trigger jobs in Jenkins and more jobs in Tekton, and so on. Yeah. I just keep getting confused about whether we should use the webhook handlers given by Jenkins. At what point should we tell the user to switch? Like, use the one that you've already got for your Jenkins job, which is there, so they can trigger it using that, or use the one given by Tekton? Probably it could be a chain: trigger a Jenkins job which then triggers Tekton. I think people may be interested in different things as well. Because there's a nice feature in the Helm chart for Jenkins where you can have Jenkins be private, but have a secondary ingress rule set up automatically for the webhook handler. So you're running your cluster on a private network that you have to have a VPN or something to get to, but the only thing that's publicly available is the webhook. That's quite a nice way of doing things, and I can see people wanting to do that.
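The secondary-ingress setup being described might look something like this in the chart's values. This is a hedged sketch from memory, not authoritative; the key names, paths, and defaults should be checked against the Jenkins Helm chart's own values.yaml:

```yaml
controller:
  ingress:
    enabled: false                   # keep the Jenkins UI private (VPN-only)
  # illustrative — verify these keys against the chart's values.yaml
  secondaryingress:
    enabled: true
    hostName: webhooks.example.com   # placeholder: the only publicly reachable host
    paths:
      - /github-webhook              # expose just the webhook endpoint
```

The idea is that the main ingress (and hence the UI) stays on the private network, while the secondary ingress exposes only the webhook path to the outside world.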
This is in the Jenkins Helm chart? Yeah, there's a secondary ingress feature. We actually use it on the Jenkins infra to do this exact thing, where we hide clusters behind a VPN, but the only thing that is exposed is the secondary ingress. And because, like on Azure or wherever, you get a different IP address, you can put firewall rules on to just allow that traffic through. Yeah, I think that's fine. Yeah, that looks like it. I mean, you can even have different ingress rules; you can have different certificate authorities for each of them. So to clarify, this is the ingress used for the webhook? Yeah, it's turned off by default, but when you enable it, it creates a secondary ingress to route that through. I would always have created one ingress and then followed up creating a secondary ingress by hand, but this is obviously possible now that you've done it. Sorry, go on. No, mine was tentative, so continue your current discussion. I was just going to say that as long as the annotations and labels and stuff are exposed on the ingresses as well, you can do loads of clever things with them. Like if you wanted to put this behind any kind of service mesh or something like that, you could do that. I haven't played around with a lot of ingress stuff. Go on. My quite tentative question actually is: what have you heard about, and what do you think about, the Secrets Store CSI driver? I'm not sure it's actually in Kubernetes yet; it might still be incubating. I don't really know very much about it, but I was super interested in it. So, is this the way that Kubernetes stores its secrets internally? Okay. Sorry, I didn't get you.
The Secrets Store CSI driver. I don't know if they're extending the functionality; they talk about it as if it can help consume external secrets. And I don't know if they're beginning to extend the functionality into something more like what GoDaddy's Kubernetes External Secrets does, or if it's just different from how I've heard it talked about before. I wondered what you knew about that and thought about it. So, the external secrets stuff, I think of that as a way of replicating secrets into Kubernetes. I want to store them externally from Kubernetes, and I want to get them in and use them. But once they're in Kubernetes, they're protected by standard Kubernetes RBAC, you know, role control, and that's it; there is no real encryption. Maybe they're encrypted at the disk level, but really it's just base64-encoded stuff. Whereas the thing you were referring to, the CSI driver thing, is a way of, rather than storing them in that way, backing them off into something like a KMS or a Vault-style solution and storing those secrets there, which might be an option. Yeah. From the way they talk about it, my understanding is that the driver helps bring the secrets in from outside, from an external secret store, and integrates them with Kubernetes. Yeah, it looks like it allows you to mount them. So it sounds like if you've got some kind of KMS system in there, they appear as secrets, but are only available when you mount them onto a pod. That is quite nice. The bit that always interests me about these is: how would this work on a cloud Kubernetes provider?
So I'm guessing that GKE would enable Google Secret Manager somehow as its backing store. Or maybe the KMS stuff, actually; that could be another option, because they have got Cloud KMS, that type of system. Sorry, that was a bit of a tangent. No, I learned something new today. Because for some time I was wondering, and now it makes a lot of sense, why you would set up KMS with this. I was thinking before, I have some secrets for a thing I'm working on in GKE, not in KMS, and how would I use them on GKE? One of my friends asked me, and I just had no idea what to do. So I do this quite a bit, actually, and I had to do it the other week when I broke my cluster and had to recreate everything. I use Google Secret Manager to put the secrets in, and then deploy Kubernetes External Secrets and configure it to replicate those secrets down. Providing you've got workload identity enabled and you have a service account in there that has permission to read a secret, it just replicates down and appears as a Kubernetes secret. It polls as well, so each time you go in and update it through the UI, it will replicate down into the cluster, which is really nice. And obviously if it's mounted to a volume, it's going to get an update that the secret has changed, so you'll get all that kind of stuff. If you're doing it with Jenkins, there's a Kubernetes credentials provider plugin that will then take the secrets that external secrets has created and replicate them into credentials within Jenkins, which is really nice. That's what I do with my Docker token and those kinds of things: store them in Google Secret Manager and replicate them into Kubernetes secrets.
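For context, the replication step described here, using the GoDaddy-originated kubernetes-external-secrets controller, would look something like the manifest below. The names and project ID are placeholders, and the exact schema should be checked against that project's README (the project has since been superseded by External Secrets Operator, whose API differs):

```yaml
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: docker-token
spec:
  backendType: gcpSecretsManager
  projectId: my-gcp-project      # placeholder
  data:
    - key: docker-token          # name of the secret in Google Secret Manager
      name: token                # key in the resulting Kubernetes Secret
      version: latest
```

The controller polls the backend and materializes a regular Kubernetes Secret named `docker-token`, which the Kubernetes credentials provider plugin can then pick up for Jenkins.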
And then they go into Jenkins that way, which means it's out of the config; it's not in the JCasC config, I don't need SOPS, I don't need to have all my GPG stuff. It just kind of works. We use this with the OpenShift sync plugin as well, so the secrets are synced as Kubernetes credentials, and those credentials are used for whatever is being done by the sync plugin or client. But yeah, this is very neat. Do we have to use the Kubernetes credentials provider at some point for the account? I'm not sure. Credentials is something that I haven't really looked at; I'm not sure of the best way of getting them into a pipeline run. You don't really want to write them as parameters. I mean, that would be the easiest thing to do, but it's pretty insecure, because they're just available in plain text then, and they're probably not going to be masked in the UI. We probably do need a method, certainly if you're running that create raw step: what about inside a withCredentials block? That could be quite a cool use case. How would you then set up the credential or the secret in a way that can be passed down? I didn't get you. So in a Jenkins pipeline you have this withCredentials block that you can use, where you can basically load a credential. If it's a short-lived credential, it will request a new version of it; a good example is the GitHub App thing, where you get a short-lived token. And then it's available as environment variables to whatever steps are run inside that block. So we would want to have a bit of a story about how we would use withCredentials and then put the Tekton create raw step inside it. What does that actually do? How do we pick up that we've loaded a secret, or how would we reference a credential that we would need to pass to it?
Months ago, when I was just starting off with the plugin, I had created a story. I don't know what I was thinking, but I just wrote "support for credentials". But I was thinking if we could have some kind of help with the service account stuff, or like, there are certain pipelines only certain people can execute. Thinking in those terms, that would kind of map onto the credentials. So I think the Kubernetes credentials provider only takes from Kubernetes and imports into Jenkins; I don't think there's any way of doing it the other way, taking a credential from Jenkins and making it available in Kubernetes. That would certainly be a good use case. I will have a bit of a search for those plugins to see if something exists. Would you be able to update the issue if you end up doing the research? I just shared it with you. Yeah, I've got that in the chat. Cool. I don't think I have anything else at the moment. It's been a good chat today. Any other questions or topics to discuss? Okay. All right. Happy Friday. Have a very good weekend. Thank you very much. Bye.