Hi everyone, welcome to the Jenkins Cloud Native Special Interest Group meeting. Today we will have a short sync-up to talk about the ongoing projects and about the updates from the Cloud Native SIG session we had at the Jenkins World Contributor Summit. So I'll screen-share for a second to show the agenda. Do you see my screen? Yes. Okay, thank you. So, yeah, if you're interested, we have a Gitter chat; you can just join it, and you can find all the links there, including the participation link and also a link to the status document. During the meeting we will be taking notes in this document, and yeah, we will update it accordingly. So today we have three items on the agenda. First, an update about Cloud Native Jenkins and the Special Interest Group. Then we will talk with Alex Norland about the external configuration storage work and its current status. And then, together with Carlos and JC, we will talk about ephemeral Jenkins — an item we presented at Jenkins World, and I think it would be great to discuss it here and get some feedback. If you want to propose any other topic, just add it to the agenda, and if we have time, we will cover it; otherwise we will schedule it for the next meeting. So today we have an almost full room of participants. We have JC, Carlos, and Alex here; there are also David Curry and Matthew — I believe it's Eurik, right? — and we also have Zak on the call. If you're interested in watching, you can just join our YouTube channel; there is a broadcast going on there. So if you're unable to join the room because there are too many participants, you can just follow the meeting on YouTube. So that's it from me. Carlos, I believe you can just start and present the Cloud Native SIG updates. Yeah, so thanks, Oleg. Last week we had Jenkins World, where Kohsuke announced the Cloud Native SIG and the efforts we are working on in his keynote, and he also covered the five big projects we're working on.
So I encourage people to watch the keynote; I think the videos are live already. Sorry, there is echo from somebody — I'll just mute my PC. Okay. And Oleg and Jesse gave an update during the Contributor Summit. I'm going to share those slides here, and we can quickly click through them. Yeah, thank you very much. I'll put the link to the keynote. Well, everybody here should know what the SIG is already and what we are trying to do, so let's go through the updates. We also had, in the keynote, the appearance of Google and Microsoft — thank you for showing your support there. They are part of the SIG, and they're already working on Artifact Manager support for their respective blob stores — at least Azure's. So we'll be helping implement these features, and we are also looking for other active participants in the SIG to implement different versions, both for the artifact repository — external artifact management — and also for log storage. So... I do have a task to split apart the Artifact Manager S3 plugin; I've not forgotten that. You mean split it into two plugins? So... Well, for Pluggable Storage, obviously the goal is to remove everything that is in the Jenkins home — artifacts and logs being typically like 90% of what the disk is used for there — and that way make Jenkins more resilient, more scalable, and also faster when you need to restore things. So we wanted it to be transparent, with no configuration changes, so you don't need to archive your artifacts differently for different cloud providers: just use the same pipelines, whatever backend you're using. And obviously scalable, using the blob store services provided by the cloud. And we provide APIs and a reference implementation on S3 right now, so people are welcome to create new implementations.
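The transparency goal described above — same pipeline, whatever storage backend — can be sketched with an ordinary declarative pipeline; this is only an illustration (the build step and artifact path are made up), not the demo's actual code:

```groovy
// A sketch of the "transparent storage" idea: this pipeline is identical
// whether artifacts stay on the master's disk or go to a cloud blob store.
// Nothing here names S3 -- the configured Artifact Manager decides where
// archiveArtifacts actually writes.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'   // illustrative build step
            }
        }
        stage('Archive') {
            steps {
                // With artifact-manager-s3 installed, the agent uploads this
                // via a pre-signed URL; the bits never pass through the master.
                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
            }
        }
    }
}
```

The same Jenkinsfile runs unchanged if the artifact manager plugin is later removed; artifacts then simply land back in the Jenkins home.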
So I guess the most interesting update here is that we have the artifacts part implemented, we have build logs partially implemented for CloudWatch Logs, and configuration — there's some work in progress there. I think this is the XML replacement, right? Yeah, we'll talk about it later today. Credentials using Kubernetes Secrets — that's an implementation that is already there and works really well. There's also some ongoing work on test results and initial work on code coverage. And for system logs, you can use Fluentd in a Kubernetes cluster to send all these logs into whatever backend is supported by Fluentd, and that's a very long list of plugins. Okay, we already talked about the Artifact Manager on S3 before; that's already available. It uses JClouds, but we've heard some people asking whether JClouds was a requirement, and it's definitely not. At the moment we started this, we thought JClouds would be nice because it would allow people to build other implementations quickly. But there's no support in JClouds for pre-signed URLs, and we use those for downloads of artifacts from the agents, for links in the UI, and in some other places. We actually had to add extensions for JClouds and S3, and you would have to add extensions for other providers too. So in the end JClouds was not as big of an advantage as we thought at the beginning, and it's definitely not a requirement. If we were to do it again today, we probably would remove JClouds and just use the S3 APIs directly. Yeah, I've also heard that there is ongoing work on an Artifact Manager for Azure. Yeah, there is. The author could provide some status on this topic if she's interested. For credentials, the API extension has existed for a long time, and the credentials provider is something I mentioned which also opens the door to using Kubernetes Secrets — and in turn, Kubernetes Secrets can be implemented in different ways, for example using HashiCorp Vault.
So it's really cool that you can combine multiple credential providers to do that too. There's also another implementation that we have at CloudBees — that one is proprietary — which gives you an idea that there's definitely room for multiple implementations. External configuration and XML storage we are going to talk about later — Alex, right? Yes. For test results — Alex, do you want to comment on this? Yeah, maybe it's rather Jesse's. Yeah, that's something I was working on. It's pretty much experimental at this point. Where I left off, it was able to successfully upload test results — with at least a reasonable level of detail — to a SQL database as a first draft. And it could do that directly from the agent JVM without going through the master, and it would set the build status correctly, so the build would be marked unstable as appropriate, and all of that. But I just started playing with having the classic UI and/or Blue Ocean actually display those results again from the same data source. Thank you. Does Blue Ocean already work? Maybe I missed it. Blue Ocean, as far as I can tell, is calling the same internal APIs as the classic test result display, so that's not ready yet. I have a few method calls implemented successfully against SQL queries, but there's a lot more that would need to be implemented to actually replace the standard test storage. Yes, and another problem we have with this story is that effectively we target the JUnit plugin and all plugins which implement the JUnit plugin's API. But yeah, it doesn't mean that all test-related plugins will be integrated with this engine, at least in the beginning. No — there's explicitly no support for Cucumber test results, for example, which have a completely different format. That would need to be addressed at some point if alternative test storage is to be widely used. As far as I can tell, the vast majority of people use the JUnit plugin, or something compatible with it, as is.
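The Kubernetes Secrets credentials provider mentioned earlier picks up Secrets by label; a rough sketch of what such a Secret looks like (all names and values here are made up for illustration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-token
  labels:
    # The kubernetes-credentials-provider plugin watches for this label
    # and surfaces matching Secrets as Jenkins credentials.
    "jenkins.io/credentials-type": "usernamePassword"
  annotations:
    "jenkins.io/credentials-description": "GitHub token for checkouts"
type: Opaque
stringData:
  username: my-bot-user     # hypothetical account name
  password: s3cr3t-token    # the token itself
```

The credential then appears in Jenkins under the Secret's name, with no entry stored in the Jenkins home, which is the point of the pluggable provider model.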
There are some plugins which add additional metadata, most notably the Test Attachments plugin. That's something that I think could be supported in this model; I haven't prototyped it yet. Thanks. Thank you, Carlos. Yeah, and for external build logging, what we are doing is making the agents stream their logs directly to the log storage backend. The logs no longer go through the master, and that improves performance a lot and reduces the data stored in the Jenkins master home directory. Also, they are not on the master at all — they are not even cached. Whenever you go to the log view of a job, the master will go directly to the backend to fetch the logs; there is no storage on the master at all. And for that we have two implementations, one for Elasticsearch and one for AWS CloudWatch Logs. We had a first iteration using Fluentd to send the logs to CloudWatch and then fetching them directly from CloudWatch, but I believe now it's all direct: writing directly to CloudWatch and reading directly from CloudWatch. Right — there is a prototype rewrite that sends directly to CloudWatch. It needs some work, because the way the logs are organized currently doesn't line up with the expectations of how CloudWatch does sharding and that sort of thing, but I'm pretty confident it's possible. The main interesting bits to work on relate to authentication using IAM, which is a little tricky, because you want to create temporary credentials for agents to write to just a single log stream, or whatever it turns out to be. That's a little bit tricky, but I think I know how it can be done.
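The IAM constraint described just above — temporary credentials scoped so an agent can write only its own log stream — would look roughly like this as a policy document (the account ID, log group, and stream prefix are invented for the sketch):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WriteOwnBuildStreamOnly",
      "Effect": "Allow",
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:jenkins-builds:log-stream:build-42*"
    }
  ]
}
```

A policy like this could be attached to short-lived STS credentials handed to the agent at launch, so a compromised agent could not read or overwrite other builds' logs.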
Yeah, we started with Fluentd also because it supports hundreds of plugins for different backends. The interesting part that is unfortunately not covered by Fluentd is how to get the logs back from the backend, so you still need specific implementations to fetch the logs and display them in the UI, or do anything else with them. That's partly why we made Fluentd go away — to also remove the requirement of running Fluentd in the cluster. For Elasticsearch, does this use Logstash? Yes and no. The first prototypes used Logstash, but even in that case Logstash was pulling the data from Elasticsearch, so Elasticsearch acts as the main database. Currently we just push the data to Elasticsearch and pull directly from it, so no Logstash is involved right now. It's not a big deal — we had a prototype of a Logstash plugin, so effectively we could send the data to any Logstash setup, but the problem is that plugin would need a significant update, so for the reference implementation we decided to go with Elasticsearch only. Even in this case we still need to land a lot of changes: the pipeline changes are still under review, and we need them for both stories in order to deliver public releases.
So right now they're available for evaluation, but there are no releases in the update center. For Elasticsearch we are also doing patches in Jenkins core and the Elasticsearch API plugin, and in order to do that we still need to finalize the changes and pass all the reviews, so it's work in progress. I'll say that the pipeline patches seem to be moving towards the last stages of review at this point. Then there is the work we mentioned — it's started, or it will be started at some point — on code coverage and other types of reports that can be stored in some specific backend. Now, Cloud Native Jenkins. Well, the main explanation is what Kohsuke posted in his blog post "Jenkins: Shifting Gears", which everybody interested in Jenkins development should read. I like the phrase he mentioned: Jenkins is at a local optimum, but there are improvements that can be made to get past this local optimum and make Jenkins an even better project. So there's Cloud Native Jenkins — the end goal would be to see Jenkins run on a cloud native architecture, on a Kubernetes cluster, using the latest technologies and cloud services, being stateless, more modern, and more... well, there's this idea behind how serverless things work: infinite scale, being able to launch jobs whenever you need them, not paying anything if you have nothing running, scaling down to zero. And the other project on the way there is Jolt in Jenkins: how to improve Jenkins for existing users today and make the transition to the cloud native world easier. So Cloud Native Jenkins is a CI/CD engine that runs on Kubernetes and has a different architecture from Jenkins today; it uses microservices, for instance, for extensibility. And part of this is that we focus on managed data services and also on Configuration as Code, and I believe other topics will also appear on our agenda at some point, so if somebody is interested, we could expand the scope of discussions. We want it
to be — I mean, we want to bring Evergreen on board, having an opinionated distribution that works for most users without any additional configuration, and to build on what Jenkins X is building: all these different services running that compose your Jenkins cluster ecosystem, converging with what Jenkins X is offering — all the event handling, all the separate services for your Jenkins masters and agents running on Kubernetes, all these things for a Kubernetes-based deployment. There is some work already being done where you can run Jenkins in a serverless fashion, and we will also demo it at the end of this call, basically showing how you can receive events, from GitHub for instance, create a new Jenkins master on the fly that starts very fast, and your pipeline gets executed inside that Jenkins. The ephemeral Jenkins creates new agents and stores all the data that's needed later on into S3 and CloudWatch Logs. So Jesse, you commented earlier — maybe you have something to say about it? I don't remember what that was about. It was just... yeah. I mean, this is initial work, so we want to show it to get feedback and shape the direction where we're going. Let me quickly go to the next one, because I don't want to take a lot more time with this. So we have single-shot masters: we use Configuration as Code to instantiate the master; we use Jenkinsfile Runner, which is a project that allows you to start a master that runs a Jenkinsfile and then dies; and Custom WAR Packager to build the Docker image that will execute this pipeline. There is an article on jenkins.io which describes how it's built and provides references to the implementation and some basic demos, and I believe at the end of the call we could run a more complicated demo, which we were presenting at the conference. Yeah, so this is a combination of different projects, and there are other activities happening at the same time: there is the Jenkins X work, there are improvements to Jenkinsfile Runner, and so on.
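The single-shot master mechanics described above — a Jenkins master that starts, runs one Jenkinsfile, and dies — map naturally onto a Kubernetes Job. A minimal sketch might look roughly like this (image, names, and mount path are illustrative, not the actual demo manifests):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: single-shot-master        # hypothetical name
spec:
  template:
    spec:
      serviceAccountName: jenkins  # needs RBAC rights to create/list pods for agents
      restartPolicy: Never
      containers:
        - name: jenkinsfile-runner
          # Image baked by Custom WAR Packager: Jenkins core + plugins + JCasC config.
          image: example/jenkinsfile-runner:latest
          volumeMounts:
            - name: jenkinsfile
              mountPath: /workspace  # Jenkinsfile Runner reads the pipeline from here
      volumes:
        - name: jenkinsfile
          configMap:
            name: jenkinsfile        # the pipeline definition, stored as a ConfigMap
```

When the pipeline finishes, the container exits and the Job completes; because artifacts and logs went to external storage during the run, nothing of value dies with the pod.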
The rest, I think, is just from the conference. So we are planning more things: to build this ephemeral Jenkins, continue with the pluggable storage work, and align with Jenkins X and Evergreen, so we build a unified platform that runs with these characteristics — more like serverless Jenkins. Any questions? You can ask here in the hangout if you are here, or you can type them in the chat. So far, no questions. The next thing on the agenda is the external configuration storage sync-up. Alex, are you ready? Hello. I don't have any slides, and unfortunately there's a nasty cold going around the office, so I haven't really had much time lately to work on things. But the current state is that it's seemingly functional. I haven't managed to do any deep testing on it, but when I added a SaveableListener that syncs the file to disk, everything else seems to work as normal, which at least suggests that everything is stored. There are a few areas where other plugins rely on getting the actual file from the XmlFile, so just abstracting XmlFile away didn't really solve all of the problems, but it mostly seems to be limited to some plugins, like the disk usage one, and not surprisingly those would need to be patched. So it implies that we will have a kind of caching logic in the API — if you invoke XmlFile.getFile() or something like that, you should be getting the file. But I guess our main problem is that some plugins just go to the file directly, because they know where it's located. Yeah, so right now I'm just doing the caching by having a SaveableListener that immediately writes the file, but that's only going to work with a single master. I've also rebased the Kubernetes one — I might not have pushed the latest version of it, but I rebased it so that it's using the same base as the one I'm currently experimenting with — and the QBify one. Yes, that one's APIs aren't great, but at least I have a good way of testing the basic functionality of it right now. So it already runs in Kubernetes with this? Well, I
haven't tried the Kubernetes one; I've just tried the naive SQL implementation. I didn't have access to a Kubernetes cluster up until recently, and it's been booked for internal things, so in a few weeks that will free up, and I'm hoping to try it with our current Jenkins master. We have a lot of jobs, and it would be fun to see if something breaks. It would be cool to see the results. So, if you don't mind, I'll share the screen and show some context about this story. Do you see it? Okay. So we have a configuration as code — sorry, not configuration as code — a configuration storage JEP; it's just the documentation and the vision of how it would look. Currently this JEP is under review, so Alex, if you have any comments about the implementation and what could be changed there, it would be much appreciated. Technically you're more than welcome to just take over this JEP and adjust it to your current implementation. I think it actually reflects the current implementation pretty well: it changes XmlFile to no longer be final, then extends it and deprecates some of the current methods. The rest of the API isn't fully defined yet; it's mostly grown organically. That's good. Yeah, so the original version from Alex, from April, is a pull request which is open against Jenkins core, but if I understand correctly, your current work is located elsewhere, right?
Yes, there's a current branch called work-in-progress, and I'm doing some rebasing, so I kept it out of the current pull request so as not to generate a lot of notifications. Mm-hmm, so it's this one — the work-in-progress branch, right? This one? Mm-hmm. Yeah, thanks for the link. Do you currently use plugins — that is, do you target pluggable storage implementations as libraries? Right now it's all in there. I started testing it and figured I'll see how the approach works first, and then I'll do the packaging. Okay, mm-hmm, it's perfectly fine. It's nice to see that this story is going forward, and yeah, I believe the SQL implementation will already be something really interesting for people to try — I definitely want to try it at least. The QBify one, yes — it will be a huge addition to projects like Jenkins X, because they could use this implementation. Mm-hmm. And yeah, currently any help with the story — review, testing, whatever feedback. When I get a little bit further, I'm probably going to need some help with defining sane APIs; the current ones are not in a great state, and well, it's sort of the same APIs on different levels, and it's starting to get messy. Mm-hmm, yeah, so we can polish that. And in Jenkins there is currently an opportunity to add beta annotations to APIs — this is a feature which was introduced by Jesse last spring, and it allows introducing some APIs into Jenkins core if needed, while keeping the opportunity to change them later. So, for example, if you want to introduce configuration storage as a beta thing, you could use that. So even if the APIs are messy, it's something we could start from. Oh yeah, the Restricted beta annotation — thanks, Jesse. Okay, so that's great. Thanks for working on it. Any questions for Alex?
Okay, no questions. So yeah, when you're ready to show a demo at some point, just let us know — I think it would be interesting to see it all together eventually. Okay, and the last topic is the ephemeral Jenkins idea. Carlos has already started presenting this bit. Effectively, we have a blog post on jenkins.io which describes the ephemeral Jenkins masters research. This is a diagram which Carlos has already presented: effectively, you can create a Jenkins container, or a Kubernetes pod with all the services, run it in a single shot, and then just terminate it after the build. And yeah, with pluggable storage, which we are working on, the data won't be lost. So this is the current proof of concept, and we already have some tooling used there. So yeah, of course we use Docker. There is Jenkinsfile Runner — this is a project originally started by Kohsuke; now there are two forks. This one is what we use now, but there is another one, the official one, jenkinsci/jenkinsfile-runner, which we will need to adopt at some point. But yeah, so far we use the previous one because it's a bit better performance-wise, and to be honest it's just the first implementation we started using. Then there is the Configuration as Code plugin: you can implement self-configuration for the Jenkins instance here, by running the JCasC plugin or by running system Groovy scripts, so the instance starts with your whole environment pre-configured. And in order to package this thing, we use Custom WAR Packager — a recently created tool, also from last spring. It effectively allows you to define inputs in a kind of YAML, and then it packages Jenkins, and you get a ready-to-fly WAR package or Docker image — and now a Jenkinsfile Runner image. And yeah, you just configure it with YAML, and you get a build with everything. This tool is what we use, and it's also used by the Jenkins X team — at Jenkins World they announced some research work on Jenkins X 2.x, which would also include single-shot masters, and yeah,
they use the same toolchain. We have one demo available here — a very, very simple demo which just shows how it's all integrated together with Jenkinsfile Runner: you can just check out this repository and get the thing running; you pass a Jenkinsfile and it executes it, pretty simple. And Carlos has a more complex demo which includes the pluggable storage stories. So Carlos, if you're ready, you could present this demo. Yeah, so what we have here is this sort of thing, right — we have a Jenkinsfile... yeah, just something wrong with your scrolling. Maybe — I was playing with it; I just stashed; I just don't know what things I needed to change. We have a Jenkinsfile that uses the Kubernetes plugin and defines a container — well, it defines the JNLP container; we need a special one for now that doesn't try to connect through HTTP, because when this serverless Jenkins master runs, there's no HTTP port open. And then we use a Maven container, just to run whatever Maven build we want in there. And in the actual body of the Jenkinsfile, what we say is: okay, go into the Maven container, use credentials that are stored as a Secret in Kubernetes, and inside the Maven container just do a Maven build and archive the artifacts — and this is going to send the artifacts to S3. That's a normal Jenkinsfile, with some pieces that we needed to adapt for now, but it would be a normal Jenkinsfile that you can run on this serverless or ephemeral Jenkins. So we have a Makefile here that has a few things, like creating Secrets and copying the Jenkinsfile into Kubernetes as a ConfigMap. These other two things are for debugging only: we can increase the logging levels at runtime and change the JCasC configuration. We import the GitHub secret, and then we apply the ephemeral Jenkins Kubernetes configuration — that's the most interesting piece. So what this creates is a Kubernetes Job that runs with a specific service account in Kubernetes and has a container which is the Jenkinsfile Runner
— this image is built by Custom WAR Packager, so it has all the plugins needed, and also Configuration as Code and everything you want to bundle in your Jenkins master. We can pass some system properties — this is just to optimize for a faster startup time; setting that is important, more on that later. Generally you can set these properties in Custom WAR Packager as well, if you need to simplify this config. And we are mounting the Jenkinsfile into a workspace, so this container — this Jenkins master — has access to the Jenkinsfile. We are also mounting the config-as-code file, and a lot of properties for debugging, in case we want to change them at runtime. That's all. We expose a Service, just to expose the agent port for the Kubernetes plugin pods to connect back to the master, and then the rest is just permissions: a Jenkins service account that has permissions to do a bunch of things — this all comes from the Kubernetes plugin for Jenkins, all the permissions needed to create pods, to list pods, and things like that. So if I run this against my Kubernetes cluster — just to clean up everything and start from scratch, there was nothing in there — after this runs, we now get the Service that listens on the agent port for the agents to connect, and a Kubernetes Job. And once this Kubernetes Job launches a pod — it already did — this pod is a Jenkins master that will read the Jenkinsfile and start executing, and this will get the logs. So what you see here are the logs from your job, basically; this is just a normal job execution log. It's going through the node step and waiting for the agent to connect, and it's going to do the whole thing — the agent is being created. So we now have a master pod and an agent pod, and the agent pod is running; we see here that the agent is provisioned. And now it's doing the git clone, using the credentials provided by the Kubernetes Secret, and it's running a Maven build, downloading all the dependencies from the internet — obviously, in the real world, you would have a local registry that
you can use. It built this jar file, and it's archiving the artifacts — it uploaded one artifact to S3. And we see here that both the agent was terminated and the master completed, so now we don't have anything running: both are completed and the job is successful. So it's done. If we go and look at the results in S3, we have the jar file that was just uploaded a minute ago — so this is stored in S3. And if we go to CloudWatch, we can see the Jenkinsfile Runner log output: everything we saw there, everything we saw in the standard output, goes into CloudWatch. So the git clone, the pipeline, the downloads from Maven — all of that comes here. And this comes from the standard Kubernetes Fluentd deployment, which will send all the standard output from your containers into CloudWatch. And this applies to multiple Kubernetes clouds and different backends in different cloud providers. So that's it — do we have questions? Yes — about log storage: how does it parse annotations? No, for this, it does not. This is just the standard output; it strips all the Jenkins console notes. You get the standard output message here, and then you get different metadata, like what container was running, in what namespace, what the full name was, the labels of the pod, things like that — but this is just standard output. For Jenkinsfile Runner, what it sends to standard output is the annotation-free version of the build log, so the annotations are lost permanently. If you care about that, the full CloudWatch Logs plugin that sends directly to CloudWatch from the Jenkins process itself does keep the annotations in a separate metadata field. Of course, that's kind of useless, because you can't interpret them without a running Jenkins service — in fact, without the particular running Jenkins service, which is gone as soon as the build completes — because it's not just Base64-encoded serialized data, it's also encrypted with a key which was only present in that container. Unless we do some extra work to, you know,
process those annotations into something like a JSON equivalent, we could potentially keep them, but right now it's not really possible. Thanks for the clarification. Technically we have everything embedded, but our problem here is that some data isn't being sent. So, for example, if you publish JUnit results now, they do not go to remote storage, so we would need the test storage work to be finished — same for pretty much every other storage story. Maybe the work being done by Alex could simplify it and help in some cases, for XML-based storage, but so far we just rely on special plugins and on stdout. So if you have a special plugin, for example to send the data to TestLink, then you get the data already, but you don't get it by invoking the JUnit plugin now. So yeah, there are a lot of things to do ahead, but anyway, it seems to be a nice prototype which shows how it could run. Any comments about this story? I have just a question from my side. We talked about the builds being stored in a persistent store, we talked about logs probably going to CloudWatch or Elasticsearch, but what about the builds which are in the queue — the items which are stored in memory for the moment? Do you think we need to persist them as well? There is no queue in this one. Sorry, what did you say? There is nothing in the queue. Well, I mean, when your build is not yet on an executor, it's just in the queue, in memory, you know. That doesn't happen here. The responsibility for scheduling these would probably go to something like a separate service that has its own mechanisms for deciding, you know, how to handle webhooks and when to schedule things; but as soon as this Kubernetes batch Job starts, that is your build, so there is no Jenkins queue involved here. Yeah, and if you open the blog post again, there is an event handler which triggers this pod — it may also be the CLI if needed. So yeah, this is just a container which executes everything
internally — but yeah, this is how things would work for single-shot masters. Okay, I see what you're saying: there is nothing in the queue, since you execute it directly in Kubernetes. And what about if you want to use an external agent? Because you don't want to, for example, build some Docker image inside your Kubernetes cluster; you would like to send this to an external node, because you would like to reuse some old Docker image cache. So in this case, if that agent is already busy, don't we reach a queue? So what would happen here is: once the Job is created in Kubernetes, if for whatever reason the pod fails, Kubernetes will schedule a new one, so the Job will continue to run if a pod dies for any reason. That's why there's no need to persist anything — Kubernetes will manage it. I mean, basically, once you create the Job, it is in the Kubernetes queue to launch a pod that fulfills that Job. But what about external agents? If you say, in this Jenkinsfile, I want a node with this label — which obviously doesn't exist if you don't use the Kubernetes plugin, and it's not going to be created — the pipeline will basically hang and wait for a node to fulfill that label. Because the master exposes the agent port, as I mentioned, via the Kubernetes Service, you could connect external agents to that Service. You would have to figure out how to do the routing from your existing agent into Kubernetes, but you could connect things externally. I think we just say that there is no such thing as an external agent. If we want to define configuration for this pod so that it's able to use, say, the SSH agents plugin to, you know, log in to some server and use that as an agent, that's fine; but again, that doesn't involve there being a Jenkins queue — as soon as the build requests it, it makes the SSH connection. The whole notion of a static agent out there somewhere that
is then managed by Jenkins and allocated to different jobs, between different builds — I don't think that's really compatible with this mode of operation. You could start an agent that connects to this pod from the outside; it's just about the timing — I mean, whenever you launch the pod, or Kubernetes launches the pod, then you have to go and do something for this agent to connect to that pod. Yeah, right. I think that for a single-shot master it's not a big problem, because there are lots of ways to connect a remote agent — it may be the SSH agents plugin, maybe the Remoting over Kafka plugin, or whatever. The real problem starts when you have multiple single-shot masters, because obviously the current Jenkins architecture doesn't synchronize access to agents from multiple masters, and you would need to find an implementation which somehow orchestrates it externally. So it could be another cloud API, for example some grid engine plugin — probably a bad example if we're talking about cloud native Jenkins — but yeah, you would still need to provision the agent, and then you would get a kind of queue: a queue of requests to retrieve these agents. But yeah, I think it's outside the scope of the cloud native Jenkins MVP, at least. Yeah, I think we can just say that if, for whatever reason, the kind of agent you want to run physically cannot run in your Kubernetes cluster, then it's just not something we support in this mode — you can continue to use service-mode Jenkins for those jobs. Okay. Thanks, everyone. Yeah, I will put some notes in the document later, but technically you can do it; practically, it's not something we would expect to be popular in cloud native Jenkins installations. Okay, any other questions? There are some comments from Nikola about XmlFile — if Nikola is not on the call, we can just take it offline. So, anything else to discuss today? Let's wrap it up. If anybody is going to Nice, for DevOps World, I'm going — I'd love to meet there. Yeah, I guess we will have a Cloud Native Jenkins
update at the Contributor Summit in Nice. So there will be a Contributor Summit, which is quite similar to what we had in the United States. The final agenda is to be announced, but I believe there will be some Cloud Native Jenkins discussions. So if you are going to the conference, make sure you also join the Contributor Summit — it will happen alongside the workshops on October 23rd. Okay, and I guess that leaves us to discuss the next meeting: do we need it, and if yes, when? I'm going to be on vacation the next two weeks, so it's going to be hard for me to join — after Jenkins World maybe, but let's just sync over e-mail. Yeah, in my case I will also be on vacation, and then I'm going to the GSoC mentor summit. So I believe that if somebody else wants to organize the meeting and present something, let us know, so we can help get the meeting organized. But it's generally agenda-driven, so any Cloud Native SIG-related topic is something we are more than welcome to discuss, and if you want to present something, you can just go to our mailing list and propose it. Just a second, I will share my screen again. So on the jenkins.io site there is the SIGs page; here is the Cloud Native SIG, and here is the mailing list link. You just click this link, and here are all the discussions we have in this SIG now. So if you want to discuss something, propose anything, or provide feedback — about what you've heard today, or about what you've heard before about Cloud Native Jenkins — just drop an e-mail here, and we will answer, and we will be happy to schedule any discussions on demand. And I guess that's it, so I'm going to stop the broadcast. Thanks to everybody who participated in the meeting or who watched it on YouTube. If you have any questions — yes, the mailing list or the Gitter chat — and we will be able to discuss the topics from today. Thank you. Thank you, too. I'm stopping the broadcast.