Hello everyone, welcome to the Jenkins Weekly Infrastructure Team Meeting. It is the 1st of August 2023. For today's round table we have myself, Damien Duportal, plus Hervé Le Meur, Mark Waite, Bruno Verachten, Kevin Martens and Stéphane Merle, so six of us are there. I hope everyone is doing well; let's get started.

So, the weekly 2.417 is out. At least the package is out, and the Docker image will be published soon: I've created the tag, and the Docker image will follow. I just had a small heart attack earlier because when I published the tag, I didn't see the updated version on jenkins.io. I believe it's the Fastly cache, unless I missed something, but there might be something to be careful about. No, the changelog on jenkins.io hasn't been merged yet; we're a little bit behind there. The thing is, there is a Fastly cache purge during the packaging step of the release process that should update the version on the download page. Well, at least my experience has been that if there's no changelog entry, unless there's a page regeneration event, like a commit arriving on the site, we don't get new content on the pages. Okay, that explains it. So it's out, and we are waiting for the changelog, as usual. Is there anything else on that release? Okay.

Announcements: we had a security release last week. It was announced Tuesday, so we already discussed it during the last meeting. We had a combination of core security releases for both LTS and weekly last week; just a reminder, that's why we had to cancel last week's weekly release, since it was part of that security release. We have the weekly meeting next week, but the one in two weeks, if I'm not mistaken, will be on the 15th of August, which is not a working day in France and some other European countries. So my proposal is that we cancel the 15th of August meeting. Is that okay for everyone? Yes, no objection. I'll be out of the office, not for Assumption Day, but I'm out of the office. The French people here cannot object because they won't be working, of course. So the weekly meeting is cancelled, but I assume we will still have a weekly release as usual; is that correct? Yes, but that means Kevin will have to deal with merging and adapting the changelog, because I won't be available. Just in case, Kevin, I will be around: I'm not expected to work my day job that day, but I will have a task to run on the infrastructure, so I will have my machine. You can absolutely ask for help if anything is wrong or if you have questions that day, no worries on this one; I will be in front of the computer anyway. Sounds good, Damien, thanks, I appreciate it. I think the merging and everything should be okay, but I'll keep that in mind for sure; hopefully you enjoy your time off. Cool, thanks. At least I will need to be there to create the tag for the Docker release, unless someone manages to automate that until then, but there is no emergency. That's all for announcements from me. Do you have anything else to announce, folks? Nope, okay.

Let's have a look at the upcoming calendar. The weekly 2.418 will be released next week, on the 8th of August. I already forgot the next LTS, as usual; let me search. You don't need to search: 2.414.1, releasing on the 23rd of August, with the release candidate on the 9th of August.
Did I catch the version number correctly? You did, exactly: 2.414.1. Thanks, Mark. Anything else about upcoming releases? No, the security backports are already done, and there was just a regression reported today in the UI; I'm not sure whether a backport will be done for it or not, but we're certainly watching actively for any surprises or bugs that might justify backports. Cool, thanks Mark for the details. No planned security release and no other major event as far as I can tell. Anything else on the calendar for you folks? Cool.

Okay, so let's jump to the topics, starting with the tasks we were able to finish. I haven't had time to sort or classify the items. First, we had a bunch of issues where a contributor failed to log in to Artifactory or to manually release their plugin, and received a 401 HTTP error, meaning the authentication is either not provided or wrong. The symptoms are the same for everyone, including our Mark Waite here: when you log in to accounts.jenkins.io, or LDAP, or Jira, or any other service, it works. So it's not you, you're not crazy, and you didn't misconfigure your Maven settings. There is an issue whose root cause we still don't know, most probably a change in the Artifactory configuration that we are not aware of: Artifactory tries to compare the LDAP password you pass as a credential against a local password, which is either empty or a generated random value. We don't know why this happens. We have a lot of these issues right now, and we have a bus factor: to solve the problem you need an Artifactory administrator. Daniel Beck and I are administrators for sure, so I'm mainly the bus factor here, given Daniel's low availability. The trick is that there is a per-user setting, "disable internal password", which forces Artifactory to delegate the password check to LDAP; that used to be the default behavior. I've opened an issue to track this, and we have to contact JFrog to ask them why it suddenly happened and whether they have a recommendation. Right now I'm limited to flipping the setting whenever a user opens an issue, so if you hit this, please open a help desk issue. I've started looking at how to batch-update the users (a sketch follows below), but the problem is that any newly created user has the setting disabled, so the batch won't fix things long term: any new contributor joining Artifactory will hit the same problem sooner or later. We need to ask JFrog for a solution; maybe it's a feature request so we can define a default behavior, maybe they have to roll something back, I don't know. Right now it's annoying, and we will have to treat the users one by one, so sorry for the inconvenience. If you have an urgent case, please open a help desk issue. Is there any question, suggestion, clarification or objection on that topic? No? Okay, and thanks to the people who triaged those issues.
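For reference, a minimal sketch of that batch update, assuming Artifactory's security users REST API and an admin credential; the `internalPasswordDisabled` field name and the exact endpoints should be double-checked against JFrog's documentation for our Artifactory version:

```python
# Sketch: force "disable internal password" on every Artifactory user so the
# password check is delegated to LDAP again. Credentials are hypothetical.
import os
import requests

BASE = "https://repo.jenkins-ci.org"
AUTH = ("admin", os.environ["ARTIFACTORY_ADMIN_TOKEN"])

def disable_internal_passwords() -> None:
    # List every user known to Artifactory.
    users = requests.get(f"{BASE}/api/security/users", auth=AUTH, timeout=30)
    users.raise_for_status()
    for summary in users.json():
        detail = requests.get(summary["uri"], auth=AUTH, timeout=30)
        detail.raise_for_status()
        user = detail.json()
        if user.get("internalPasswordDisabled"):
            continue  # already delegating to LDAP
        user["internalPasswordDisabled"] = True
        # In the Artifactory security API, POST on an existing user updates it.
        requests.post(
            f"{BASE}/api/security/users/{user['name']}",
            json=user, auth=AUTH, timeout=30,
        ).raise_for_status()
        print("fixed", user["name"])

if __name__ == "__main__":
    disable_internal_passwords()
```

As noted above, this would only fix existing users; newly created users would still get the bad default, which is why we need an answer from JFrog.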
Next issue: plugin not found on the plugin site. We had a contributor who created a plugin and was really eager to see it published on plugins.jenkins.io, which happened after two or three hours. The reason is that when you publish a new plugin, you have to wait for the plugin site build to kick in, and that build runs every three or four hours, as far as I can tell. Their plugin is now visible thanks to a manual build triggered by Alex; thanks, Alex. No more issue on that topic itself, but it wasn't handled by the previous scheduled build because there was a 502 error during the plugin site generation, and we still have to diagnose that part. An issue has been opened by Alex on that topic, so thanks for that. The user's problem has been solved, which is why the help desk issue is closed, and we'll have to work on the HTTP error separately.

A long-running issue: "artifact caching proxy is unreliable". Thanks, Basile and Dere, for completing that task. The acceptance test harness is now using the artifact caching proxy, and we haven't seen any regression due to that system, so thanks for that work. It also confirms that the network changes we made are no longer suffering from the overlap issue, because we haven't seen any TCP connection error on Azure for at least a few weeks. Any questions so far? Objections, clarifications? Nope, okay, let's continue.

Stéphane, can you give us a heads-up on the node toleration and taint for the ARM CPU node pool on Azure? Yes. We created a new node pool with a taint, and a node selector, to make sure that no non-ARM pods get scheduled on that node pool. It went smoothly, and we checked that everything works fine with javadoc.jenkins.io, which we moved from the old arm64 node pool to the new one with the taint, and it worked nicely. We also had to deal with Datadog and Falco to make sure that node pool gets those two DaemonSets: we just had to add the toleration for the taint so their pods are able to spawn there. It's working fine, and I also migrated reports.jenkins.io from Intel to that arm64 node pool, as it is also an nginx-based pod, so it was okay. That one is tracked in another issue, so no worries: there is a separate issue for migrating workloads, these are two distinct topics. So thanks for the work; two additions from me. As Stéphane said, we had to update the DaemonSets that deploy Datadog and Falco, so agents of these services run on the new machines despite the taint applied to the new node pool. The nice thing is that all the Azure internal components, kube-proxy for instance, but also the bunch of Azure services used to mount persistent volumes, were already working despite the taint. It looks like they really did their homework and their managed services behave as expected; that's really nice on AKS. Please note that's not the case on Amazon, and with this kind of user experience on Azure, I'm really happy with AKS so far. And the last one, the funny one: Datadog has a second component named the cluster agent, a highly available service that we run on each cluster, and one of its replicas had been running on ARM while the other was on Intel for months. That wasn't a problem, and it showed that everything was working properly, but as an additional proof that Stéphane did his homework with the taint, that component is now fully running on Intel as expected. So thanks for the work, Stéphane, and thanks for the report; the toleration change itself looks roughly like the sketch below.
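For the record, the toleration plus node selector combination described above, sketched with the Kubernetes Python client; the taint key and value and the deployment coordinates are hypothetical stand-ins, while the `kubernetes.io/arch` node label is the standard one set by Kubernetes itself:

```python
# Sketch: pin a workload to the tainted arm64 node pool. The taint
# "architecture=arm64:NoSchedule" and the deployment name/namespace are
# hypothetical; only the kubernetes.io/arch label is standard.
from kubernetes import client, config

patch = {
    "spec": {
        "template": {
            "spec": {
                # Schedule only on arm64 nodes...
                "nodeSelector": {"kubernetes.io/arch": "arm64"},
                # ...and tolerate the taint that keeps non-ARM pods out.
                "tolerations": [{
                    "key": "architecture",
                    "operator": "Equal",
                    "value": "arm64",
                    "effect": "NoSchedule",
                }],
            }
        }
    }
}

config.load_kube_config()  # reuse the current kubectl context
client.AppsV1Api().patch_namespaced_deployment(
    name="reports-jenkins-io", namespace="reports", body=patch
)
```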
Next issue: a Jenkins server unable to download plugins from the update site, updates.jenkins.io; I guess that was a user-level network issue. Yes, that person was facing an issue accessing one of the download mirrors they were redirected to. Thanks, Mark, for completing the issue: there wasn't anything on our end, but at least we had to check that the mirror wasn't down, which is not the case, and that our redirector wasn't down either, which is not the case. We haven't heard back from the user, but there is nothing else we can do: they have to contact the university hosting the mirror, because either their network is denying requests to that server, or the university is denying requests from that network. Sorry about the inconvenience; you have to contact them or change the mirror.

Next, we had ATH builds failing due to denied outbound requests during tests. That one was an old issue I forgot to close two weeks ago, so I closed it. We are allowing external requests to the internet from the ci.jenkins.io agents only for outbound HTTPS, and now also for the HKP protocol, which is used to retrieve public GPG keys, something a lot of the Dockerfiles inside the ATH do.

We finished reporting on the last miles of the Kubernetes 1.25 update, and the issue for the Kubernetes 1.26 update has been created. I need to do a last pass, but most of the changes we made during the two past Kubernetes upgrades have been carried over to the task list for the upcoming 1.26. The deadline for 1.26 is October, so we have a bit of time; ideally, if we could target the end of August or September, that would be better. Any questions so far? Clarifications? Objections? Okay, I'm continuing.

We had three account issues: people unable to access their account. If I exclude the people who tried their company-internal Jenkins account on our platform, we have two main kinds of issues on accounts.jenkins.io today. First, there are people with an old, unused account that was part of an LDAP security problem; as part of the security response, those accounts had their email disabled and set to the username, so when they try to reset their password, they cannot. It requires manual intervention. A reminder for handling these users: don't blindly change the email immediately. Check whether the user is the plugin maintainer, or whether they already worked in Jira; most of the time they have already commented on Jira, so you can confirm that the email is the proper one. Yes, Mark? Okay, just for precision: when we had that account outage, we did not alter anyone's email address. The problem was that they had set it themselves to that flawed address, and when we attempted to send the notification, they could not be notified because we had no email address for them. So we did not reset anyone's email address, as far as I know, when we were dealing with that incident in 2021. Okay, so that might have been a 2020 incident then, one of the LDAP security incidents before I was able to act in that area; Olivier did a batch update on the LDAP to remove those emails. Oh okay, I didn't remember that, but all right. So I had the date wrong, you're correct, it's not the 2021 one; all of these accounts haven't been used since 2018 or 2019, which fits. Good point, Mark, and the takeaway is probably this: when you see such an account without a valid email, double-check the account on Jira, because most of the time this user connected to Jira a few years ago and Jira keeps a proxy user with the proper email. You can also use Artifactory, or any GitHub issue where you can find the user. But don't blindly use the email the requester asks for, to avoid any unwanted account takeover. Well, thanks for your point on Jira as a cached copy, I had forgotten about that. It's a really good place to check: if they don't have an accurate or useful email address on accounts.jenkins.io but they do on Jira, like you said, we can go to that one, use it, and trust that it was their address at one point in the past. Good, thanks.
The other kind of issue is users trying to create an account from a public IP that is in our deny list, so they are classified as spam; we can see these errors in our Datadog monitor. In those cases I tend to create the accounts manually. I don't know whether these users are really who they claim to be, but it doesn't hurt. Any question, objection, clarification? Okay, cool.

I closed as not-planned a request from a user who reported an invalid certificate for the generic domain name we use as the main entry point for the public IPv4 of our public cluster. All of the services are consumed through a domain name such as get.jenkins.io, which is most of the time a CNAME to that subdomain. There is no reason to request a certificate for it, because there is no reason to access that domain name directly. I asked the user, who never answered. One of the most probable explanations is that this user is inside an organization with some kind of security appliance that tends not to respect the HTTP Host header: most of the time they are, for instance, case-sensitive where they shouldn't be per the RFC, which breaks the HTTP redirections when a CNAME is involved, or worse, it's their DNS that doesn't follow the RFC. In any case, there is nothing we can do, and we shouldn't publish any service behind that domain name. Since we have no use case today, and I'm not dogmatic about it, there might be a use case requiring it in the future, I decided to close it as not planned, with no action required from us. If there is an objection or if I forgot something, don't hesitate: we can reopen it with a problem statement and we might work on it. Any question or need for clarification on this one? No? Okay.

Let's continue with the work-in-progress issues. Hervé, can you give us a status on the migration of the update center, updates.jenkins.io, from AWS to another cloud? Yes. We discussed it together and came to the conclusion that we should use the mirrorbits chart, so we get an Apache2 server and the mirroring system already included. We want an Apache2 server and not nginx, for example, because the update center generates a lot of .htaccess configuration files that we have to keep, so the goal is to mount these .htaccess files into the Apache2 service and add the HTTP redirection to the bucket on Cloudflare. We might have to move the domain name to Cloudflare to be able to enable public access on the buckets, but that remains to be checked; it would be used for redirecting to Cloudflare-hosted R2 buckets. A reminder of what that means: each time a Jenkins server requests the update center JSON, like today on updates.jenkins.io/whatever, depending on the URL you are hitting you may get a first redirection to the JSON file itself; with the new architecture, once the request reaches the JSON file, as Hervé said, you will be redirected to a Cloudflare-hosted public R2 bucket, on another domain name but still with valid HTTPS, which will take care of handling the egress downloads. We will start with one or two buckets, which of course won't be hosted inside our infrastructure, so we don't pay for the outbound bandwidth and we benefit from Cloudflare's infrastructure. The second reason for that additional redirection is that we will then be able to set up additional mirrors, on Cloudflare or elsewhere, in areas where the network requires proximity, such as behind the great firewall of China: Cloudflare, for instance, allows us to create additional buckets on the China network, which would be a great help for our remote users. Is there any other question on that topic, or is everything clear so we can continue proceeding? No? Okay, let's continue then; thanks, Hervé, for the report. A sketch of the future redirect chain follows below.
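To make the double redirection concrete, here is a small probe of the planned behavior from a client's point of view; the final hop would land on a Cloudflare R2 domain that does not exist yet, so this is a sketch of the target architecture, not of today's production:

```python
# Sketch: follow the planned redirect chain by hand, one hop at a time.
import requests

url = "https://updates.jenkins.io/update-center.json"
for _ in range(5):  # cap the number of hops we follow
    resp = requests.get(url, allow_redirects=False, timeout=10)
    if resp.status_code in (301, 302, 307, 308):
        url = resp.headers["Location"]
        print("redirected to", url)  # first the Apache2/.htaccess hop, then R2
    else:
        print("final status", resp.status_code, "from", url)
        break
```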
Stéphane, your turn, on the applications to migrate. Oh yes, you were right, there is an issue for that: it covers all the services that could take advantage of running on arm64 in order to save some money. This morning, this afternoon sorry, I migrated reports.jenkins.io and it went well; we'll keep an eye on it and check. Cool. I believe the wiki will be the next candidate. Oh no, I'm not sure, I would have to check. Just a note: we have two kinds of services. The ones like javadoc and reports already consume a third-party Docker image, such as the official nginx image, and that image on Docker Hub already provides an arm64 variant, so for those it's easy: we just change the toleration and node selector and it works out of the box. For images such as the wiki, we build the images ourselves, which means we will need to update our Docker image build process to support multiple architectures if we want them to run on arm64 (a buildx sketch follows below). Yes, I mentioned the wiki, Stéphane, because I was thinking it's built on top of the official nginx image, so it might be fairly easy to have it on ARM as well. Yes, I remember that, that's the idea. So thanks, Stéphane; you will need to identify the other services and which images they use, and we already know about the cluster agent, for instance. Yes, but I don't have anything else to add on that topic. Is there any clarification or question on this one? Okay.
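For the self-built images, the build process change amounts to producing a multi-architecture manifest; a sketch assuming we use docker buildx, with an illustrative image tag:

```python
# Sketch: build and push one image manifest covering both amd64 and arm64.
import subprocess

subprocess.run(
    [
        "docker", "buildx", "build",
        "--platform", "linux/amd64,linux/arm64",  # one manifest, two arches
        "--tag", "jenkinsciinfra/wiki:latest",    # illustrative tag
        "--push",  # multi-arch manifests must be pushed, not loaded locally
        ".",
    ],
    check=True,
)
```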
So let's continue with "ci.jenkins.io failing for a Jenkins plugin". The contributor answered after our request, and it looks like in their case the plugin repository they are using is an unexpected one, because it's not an original repository: it was forked from the company's organization into the jenkinsci organization. That pattern has consequences for the way the ci.jenkins.io job scanning is configured: when the contributor, despite being an administrator of that fork inside jenkinsci, creates a pull request that does not come from a fork of their own, they are not considered a maintainer, so any change to the Jenkinsfile is not taken into account in the pull request build. That's a security mechanism built into ci.jenkins.io, but here we are in an edge case. We might have to update the job configuration to cover this case, but it's not an easy one to solve. At least they were able to use the method the team recommends in such cases: the contributor of the plugin should open the pull request from their own fork, not from the original external repository, nor from a branch on the jenkinsci plugin repository. That might be a bug in the GitHub organization scanning plugin, but it could also be, and most probably is, a misconfiguration on ci.jenkins.io. Changing this configuration is not uneventful, because it triggers a reload and a rescan of all plugins, which takes around two hours and most probably hits the GitHub API rate limit, with the consequence of blocking lots of builds; so if you do this, please don't do it during a BOM or ATH build. The user described their problem, so we will need to find a solution, or tell them not to use this pattern and then update the documentation for developers if we cannot find an easy solution. Bruno and Kevin, we will let you know the outcome of this issue; I'm mentioning it so you have it on your radar, with no expectation from you or anyone on the documentation team right now, but next week we should see the next steps more clearly. Any question, objection or clarification on this one? So, again: if we cannot find the proper configuration on ci.jenkins.io, and if it's not a bug in the scanning plugin, then we'll have to document this for developers.

Next issue: the plugin site build commonly fails on infra.ci when accessing plugins.jenkins.io. I haven't had time to look at this one, which was opened by Alex. We have to check the build logs and see what caused the error; check the logs on infra.ci.jenkins.io, keeping in mind it retains only five builds. Is the failing build still kept? When I checked before, it wasn't. Okay, so can you add a note on the issue asking Alex, if he sees another error like this one, to keep the build so it won't be deleted by the build cleaner? I'm not totally sure where to look in terms of access logs for these errors, because plugins.jenkins.io is behind Fastly, and I'm not sure they have access logs for that. I'm not even sure the request goes through Fastly: it's a JSON file, and I don't know whether it's a static file cached by Fastly or whether it hits the plugins.origin.jenkins.io backend service inside our cluster directly. If it's inside our cluster, we can check the access logs at the ingress level; if it's Fastly, I don't know. So we have to work on this. It happens when building on infra.ci: it's yarn that makes the request, there is a JS script called by yarn during the build which queries the plugin site. Okay, I'm not sure I see why that would matter; I thought you were saying that if the request came from our own cluster or instance, we might have more visibility on it, that's why. Yeah, but it's really the server side here: if an HTTP 502 was issued, it's a server-side error, so we should have an access log somewhere with an error message explaining why it failed to answer the request. If the error had been a 4xx, it would have been client-side, and in that case yarn would have been the cause, but that's not the case. The error appears intermittently, so is it a Fastly issue, is it in-house? I don't know, but we need more logs to diagnose this one. I tried to run a build against plugins.origin.jenkins.io, but it failed because plugins.origin.jenkins.io is IP-restricted. Why did you want to use plugins.origin.jenkins.io for that? It was a test. Okay. In any case, we could use plugins.origin.jenkins.io, because it's not much bandwidth, it's just one build; but you want to test what the end users see, and you don't want to create specificities like this one, especially when using a CDN in front of your website. Also, if the JSON file served on origin is a generated file, that means we could host it somewhere else and consume it directly, instead of fetching it through the website, if we ever have to do that kind of bypass. Do you think we could create a Datadog monitor to alert us when that build fails, since infra.ci has the Datadog plugin installed, set up, and streaming its data to Datadog? That would be easy, and at least we would be alerted, which would let us go and manually kick the build, so users won't wait too long for their plugins. What do you think? Like the accounts-app alert I've put in place, we can easily match a pattern on specific service logs. Okay, though I'm not sure a log is what we want here, it's more a build status. When the pattern matches the logs of the plugin site build, we can create an alert from that. Okay, makes sense, but to be quite honest I would prefer a more generic monitor that simply says "the build failed", because the main branch of that job should always be successful, and if it fails, that should be a call to action for the team. But yes, eventually having a second monitor with the log pattern will help for that specific help desk issue; something like the sketch below. Any question, clarification? Okay.
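A sketch of what that second, log-pattern monitor could look like through Datadog's monitor API; the service facet and query are hypothetical and would have to be refined against the attributes our infra.ci logs actually carry in Datadog:

```python
# Sketch: create a Datadog "log alert" monitor for the failing build pattern.
import os
import requests

monitor = {
    "name": "plugin-site build errors on infra.ci",
    "type": "log alert",
    # Hypothetical query: alert if the pattern shows up in a 15-minute window.
    "query": 'logs("service:plugin-site status:error").index("*")'
             '.rollup("count").last("15m") > 0',
    "message": "plugin-site build failed, consider re-triggering it.",
}
resp = requests.post(
    "https://api.datadoghq.com/api/v1/monitor",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    json=monitor,
    timeout=30,
)
resp.raise_for_status()
print("created monitor", resp.json()["id"])
```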
Next topic: ATH builds becoming unresponsive. One week ago we changed the kind of instances used by the ATH builds on ci.jenkins.io: we switched from spot to non-spot instances, which are way more expensive, but the benefit is that it should fix most of the instability, especially the dreadful "could not connect to" agent message; most of the time that message means the agent was reclaimed as a spot instance. Hervé and I discovered, and documented, how to check whether a machine was evicted due to a spot reclaim (see the sketch below), which is one reason for a job being flagged unstable. What we saw is that there is a retry, each step is retried twice, but the probability of a spot eviction hitting the second spawned agent as well is really high. I've asked whether we see better stability, but I haven't checked myself; we should look at the build history and also at the cost increase per week to see the impact, because most probably we will have to keep the ATH agents on non-spot instances. In that case we will have to reassess the instance type: do we need instances of that size, do we need something with a different cost? So I propose that we start with metrics, and based on those metrics we take a decision in the coming weeks and report on the issue; most probably we will keep the non-spot instances. Is there someone who wants to work on this one, and is there a need for clarification? Okay, I will take it then. We have to measure the build success rate with and without spot, and the cost impact, because as far as I remember the cost was 10 times cheaper using spot instances. I would be inclined to say let's retry four or five times until it works, but I'm not sure that would change the overall build stability; it would most probably increase the build time when you are inside a spot eviction time window, but at least the build would eventually finish. Maybe we just have to wait a bit longer in those cases.
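For reference, the eviction check mentioned above relies on the cloud's instance metadata; a sketch assuming the agents are Azure spot VMs, where a scheduled "Preempt" event means the VM is about to be reclaimed:

```python
# Sketch: ask the Azure instance metadata service (from the agent itself)
# whether a spot eviction is scheduled.
import requests

URL = "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"

events = requests.get(URL, headers={"Metadata": "true"}, timeout=5).json()
for event in events.get("Events", []):
    if event.get("EventType") == "Preempt":
        # Any build running on this VM is about to die with it.
        print("spot eviction scheduled, not before:", event.get("NotBefore"))
```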
Next issue: assess the Artifactory bandwidth reduction options. Tomorrow, and we still have to announce it officially, but the time window has been fixed with Mark and Hervé based on availability: tomorrow, Wednesday the 2nd of August at 2:30 p.m. UTC, there will be a brownout of more or less one hour on Artifactory, during which we will disable and remove every virtual repository pointing to the Maven Central (repo1) remote. The consequence for us is that Maven builds on ci.jenkins.io, in GitHub Actions for the CD process, and for any contributor, will fall back to the Maven Central repository directly when an artifact is not found. The benefit of that operation is that we can test whether we can build most of our biggest projects without having to mirror those artifacts. The brownout is a one-hour session, then we roll back, and we conclude whether we saw catastrophic failures or whether it worked as expected. JFrog validated that principle last week during our meeting, because it should allow us to save at least 10 to 12 terabytes of data that would no longer be consumed through that instance, so that will help a lot. JFrog concluded that if it works and we can persist the setting, they will be fine with our bandwidth consumption after that change; that means no need to go further unless they require it, and right now they were quite happy with the plan. As they told us, if it is implemented in the long run and keeps its promise, that should be okay. It doesn't mean we don't have to track the other mirrored repositories, because most of them have their artifacts stored sooner or later on the Central repository anyway, but this one is far, far ahead of the others. So the brownout tomorrow is the first edition, then we draw conclusions, and most probably, if it keeps its promise, we will plan another brownout in two weeks lasting at least one full day. Finally, if everything goes as expected, we will make the setting persistent. Any question on this one? Okay. Please note that the immediate consequence is that there will be, later today or tomorrow morning at the latest, a change in the settings.xml of the ci.jenkins.io artifact caching proxy setup: the builds will exclude the ACP from being used when downloading artifacts from any repository whose id is "central". That should have no impact now, but during the brownout it will matter a lot: any artifact downloaded from Maven Central, rather than from our Artifactory, won't be cached at all on ci.jenkins.io and will be re-downloaded every time, which might increase the duration of some builds. We will assess those consequences later; the priority is to decrease the outbound bandwidth on Artifactory, and then we will think about a way to cache these artifacts, either with the ACP or another method. So, no more ACP for Maven Central artifacts for now; the settings.xml change looks like the sketch below. Any question, clarification, objection? Okay.
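The settings.xml change boils down to adding a `!central` exclusion to the ACP mirror's `mirrorOf` pattern, which is standard Maven syntax for "mirror everything except the repository whose id is central"; a sketch that patches an existing file, with an illustrative mirror layout:

```python
# Sketch: make sure the ACP mirror excludes Maven Central, so those artifacts
# are fetched straight from repo1.maven.org instead of going through the proxy.
import xml.etree.ElementTree as ET

NS = "http://maven.apache.org/SETTINGS/1.0.0"
ET.register_namespace("", NS)  # keep the default namespace on write

tree = ET.parse("settings.xml")
for mirror in tree.getroot().findall(f"{{{NS}}}mirrors/{{{NS}}}mirror"):
    mirror_of = mirror.find(f"{{{NS}}}mirrorOf")
    # "*,!central" = every repository except the one with id "central".
    if mirror_of is not None and "!central" not in (mirror_of.text or ""):
        mirror_of.text = (mirror_of.text or "*") + ",!central"
tree.write("settings.xml", xml_declaration=True, encoding="utf-8")
```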
Next step... just a minute, I heard a weird noise in my house; that's the cat, who knocked something over, sorry. You had me worried. Next issue: downgrade to HTTP when there is no trailing slash. Our security officer, following a security report, told us that when you try to reach jenkins.io with a suffix like /something, there is a double HTTP redirection: the first one goes to http://, which then redirects itself back to https://. We don't want an HTTPS-to-HTTP downgrade in the chain. The setting is in Fastly, and I don't know much more, so we need to fix this one. It's not an emergency, but it's important enough in terms of security that we should find a solution. It's not blocking anyone, it's just a security report, so we have to look at the Fastly setup and most probably change an attribute in the Terraform Fastly project. Any question? Okay, any volunteer is welcome to check this one; the probe below can confirm the behavior.
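A quick probe for the reported downgrade, checking that no redirect hop for a slash-less path falls back to plain HTTP; the path here is just an example:

```python
# Sketch: fail loudly if any redirect hop downgrades from HTTPS to HTTP.
import requests

resp = requests.get("https://www.jenkins.io/doc", allow_redirects=True, timeout=10)
hops = [r.headers.get("Location", "") for r in resp.history]
insecure = [hop for hop in hops if hop.startswith("http://")]
assert not insecure, f"https -> http downgrade via {insecure}"
print("all", len(hops), "hops stayed on HTTPS")
```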
Another issue, found during the security release, and that one should be easy to fix: since the migration of the trusted.ci controller from one VM to another, both machines have the same local hostname, "controller", which leads to human errors and mistakes, because you think you are performing an action on one machine while you are in fact connected to the other. The goal of that issue is to help the human operators that we are, and that the security people are, avoid this problem: at a minimum we want the fully qualified name in the command prompt, controller.trusted.ci.jenkins.io or the private ci.jenkins.io controller, and ideally, if we could add custom colors, at least for the root user, that would help. So that might be a bit of Linux and a bit of Puppet to provision the Linux setup. It's a quick one, but it will help the security team a lot. Any question, or people who want to try this one? I will probably try, yeah. Okay, is that okay for you? Absolutely. Stéphane volunteers.

"Disallow issue creation in the EVENT project type": that's me being late; it's a Jira admin task as far as I can tell. I haven't looked at this one in detail, so no expectations here; I need to take it, or any other Jira admin can. "AWS: decrease costs for summer 2023" is an epic, top-level issue that we keep open while we are working on it; that's our top priority. Most of this work is done by Hervé with the update center, and the rest is taken by me, around the pkg machine migrating to Azure; we'll discuss that with the Ubuntu 22.04 topic in a minute. Another Linux Foundation issue: Mark is taking this one, he's exchanging tickets with the Linux Foundation, so no action required from the team, but we keep it on the milestone. Matomo GitHub repo: no member of the team was able to work on this, so let's delay it. Ubuntu 22.04 upgrade campaign: for that one, I've started checking pkg.jenkins.io, which is still on the same virtual machine we want to get rid of, like the update center. Most probably we will start with just an Apache service based on the mirrorbits chart Hervé mentioned earlier, which serves static files, and that's all: that service, pkg.origin.jenkins.io, holds the package indexes, which are then proxied by the Fastly CDN. That service has low usage but is really, really important, and the goal is to move it from AWS to the Azure cluster, so that we no longer depend on a remote virtual machine for generating packages during core releases. Work in progress on this one; the most complicated part for me is finding every core release step that interacts with that machine, and there are lots of them, not to mention that the plugin updates also interact with that machine, sometimes for the update center, sometimes for packages, sometimes for both. Right now I'm trying to assess the different elements, and I will report back with a diagram. Finally, the last one: no work was done on "remove IP restriction on bounce or migrate to VPN". As we will see in the issues to triage, I've created the issue about the trusted.ci cleanup and the whole network cleanup, and we are inside that area; for now I won't be able to work on it, and I don't think any of us will, so I propose we delay this issue by at least two weeks. No objection? Okay.

I've bored everyone, so now let's have a look at the issues to triage. Do we have new important issues? Do you have topics you want to mention, new issues, new things? Okay, so I'm selecting the issues to triage; yeah, we have a lot of them. I will only look at the last one, the Artifactory issue I mentioned at the beginning of the meeting: so, what should we do about the Artifactory 401 issues?