Nice to meet you again. Hello, am I audible?

Yes, you are.

Okay. So I previously asked the same question with Martin. I wanted to show you my UI vision, because I wanted to discuss how the UI is going to be implemented.

Excuse me, let's start from the process-specific questions first and then switch to the project-specific ones, if you don't mind. So let's spend some time on any organizational questions, and then we go to the project-specific discussion. If nobody has organizational questions, we can do project ideas and so on.

May I start?

Yes, let's continue.

So I wanted to know the exact UI pattern of the artifact promotion plugin; I'll share my screen. This "New Item" section that we see here: when the new promotion plugin is implemented, where would its jobs be placed? Will there be something like "promotion of artifacts" alongside these project types, freestyle project and so on? We were actually creating a new job, right? So would it be placed alongside these projects?

Yes, that is part of the design, and you can do it in different ways. My personal understanding was that it would just be a subtype of Pipeline, so it would appear in this list unless you deliberately filter it out.

Wouldn't it just be a property of a Pipeline job, not a special type?

Well, it would be a property in terms of configuration. (Somebody is tapping their microphone; I'll mute everybody.) There are two parts to this question: one is how it is being defined, and the other is how it is being presented. In general, it would be great if you could define promotions and promotion logic directly in Pipeline definitions, or in special sections of Declarative Pipeline. But in my understanding it would still appear as a separate job type on the management page, since the project idea mentions that there should be a common standalone plugin.

Okay. And the next question: whenever I go inside a certain project and into a particular build, inside the Configure section we have this "Promote builds when..." section. Would this section be migrated to the new job, as execution or as definition?

You can still define the promotion process inside your pipeline if you want.

All right, so I can either define it in the pipeline, or make a common job. Is that what you meant?

I'm not sure I got your question.

This "Promote builds when..." section would be migrated to the new job, if I'm not wrong?

You can define it in both ways, depending on your preference: you can define it inside an existing pipeline, as a property or as a special clause (for example in Declarative Pipeline), or you can define it in an external job. It's a matter of definition, and we rather expect you, as a student, to come up with this proposal. Both options are technically possible, and both options may find users, so it's up to you to decide how you would implement it.

So either of the options is fine, and I can implement any of them?

Yes.

Oleg, can you elaborate on the reason to create a new type there?

There is no reason to create a new type of job; there is a reason to have promotions as separate job entities. As we were discussing yesterday, promotions currently are not well mapped onto the Jenkins job architecture. What we would like is to have promotions as separate job entities which map onto the current structure: for example, there may be a folder or a multi-branch pipeline, and inside this entity there is a common job definition.
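Oleg's architectural point, a promotion as its own run entity that references the build it promotes rather than data buried inside that build, can be sketched with a small, self-contained model. All class and method names here are hypothetical stand-ins, not the Promoted Builds plugin's real API:

```java
// Hypothetical model of "promotion as a separate run entity": a promotion
// run has its own status and merely references the build it promotes,
// instead of living as a field inside that build.
import java.util.ArrayList;
import java.util.List;

public class PromotionModel {
    static class Build {
        final String jobName;
        final int number;
        Build(String jobName, int number) { this.jobName = jobName; this.number = number; }
    }

    // The promotion executes as its own run, with its own result.
    static class PromotionRun {
        final Build target;
        final String processName;
        boolean succeeded;
        PromotionRun(Build target, String processName) { this.target = target; this.processName = processName; }
    }

    static List<PromotionRun> promotions = new ArrayList<>();

    static PromotionRun promote(Build build, String processName) {
        PromotionRun run = new PromotionRun(build, processName);
        run.succeeded = true; // stand-in for actually executing the promotion logic
        promotions.add(run);
        return run;
    }

    static boolean isPromoted(Build build, String processName) {
        return promotions.stream().anyMatch(
            p -> p.target == build && p.processName.equals(processName) && p.succeeded);
    }

    public static void main(String[] args) {
        Build b = new Build("my-pipeline", 42);
        promote(b, "deploy-to-staging");
        System.out.println(isPromoted(b, "deploy-to-staging")); // true
    }
}
```

The point of this shape is that a promotion run has its own lifecycle, so it can be listed, queried, and re-run independently of the build it targets.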
So you don't use hacks like item groups represented as job properties. But this is an architectural thing. If we talk about job types, I do not think it has to be a new job type. If we talk about promotion support for Pipeline, it would rather be expected that the promotion itself is also a pipeline: a property that you define there, either as an automatic promotion or a manual promotion. Inside, it will still be pipeline code, but it would be executed as a separate job entity, or maybe as a separate run entity, depending on how you implement it. It would just be part of the architecture. As I tried to say, the definition may differ: the promotion logic may live inside your original job, or it may be defined in an external job. It depends on how you implement it.

I could see the view part as a separate entity, because it goes across the whole of Jenkins, whereas the execution would be an extension of the current architecture. Does that make sense?

Maybe. I think we should really plan a separate meeting for this specific project so that we can deep-dive into the architecture. If you have some time, we should take it offline and schedule a meeting like we do for the other projects.

Right, that would be a great help.

My next question: following your guidance, I read the Release plugin, since it's compatible with Pipeline. I went through the codebase, and in one of its classes, ReleasePromotionCondition.java, it extends PromotionCondition, which happens to be a class from the Promoted Builds plugin. So it seems most of the classes inside Promoted Builds are compatible with Pipeline, but some are not. How do I distinguish the compatible ones from the rest?

The rule of thumb is: if you see classes like AbstractProject or AbstractBuild referenced in a class, you can expect that this class or method is not compatible with Pipeline.

So Promotion.java, PromotionProcess.java, the job property: these are not compatible because they use abstract classes like AbstractBuild. Am I correct?

That's right.

Okay, next question: if AbstractBuild and AbstractProject are not supported by Pipeline, then what is the actual project type which is compatible with, supported by, Pipeline?

That's a tricky thing, because Pipeline is a separate project type. It may be useful for the other students as well, so I'll share my screen, but let's try to timebox this a bit. Job is the top-level entity for executable types: a Job generally contains Runs and stores them somewhere. There are many implementations of Job. You can see here that AbstractProject is one implementation of Job; another implementation is Pipeline, at the same level of the tree; and there are other project types, for example the Inheritance Project, which also inherit the Job type. What this means is that these project types are really independent branches of the tree. All they share is the Job base class, plus the classes further up the hierarchy: you can see that Job extends AbstractItem, so it can be part of the item tree inside Jenkins, and it is Actionable, so it can contain and persist actions. Job is the basic definition which applies to every job type in Jenkins. The same goes for runs: Run is the base class, and it works exactly the same way.

Okay, so which exact build type would be supported by Pipeline?
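The type tree Oleg walked through can be mirrored in a toy model: in real Jenkins, hudson.model.Job extends AbstractItem, while AbstractProject (freestyle and friends) and Pipeline's WorkflowJob are sibling subtypes of Job. That sibling relationship is exactly why code written against AbstractProject or AbstractBuild never sees Pipeline jobs:

```java
// Toy mirror of the Jenkins job hierarchy discussed above (simplified names;
// the real classes are hudson.model.AbstractItem, hudson.model.Job,
// hudson.model.AbstractProject, and Pipeline's WorkflowJob).
public class JobHierarchy {
    static abstract class AbstractItem { }              // node in the item tree
    static abstract class Job extends AbstractItem { }  // top-level executable type, contains Runs
    static class AbstractProject extends Job { }        // freestyle & friends
    static class WorkflowJob extends Job { }            // Pipeline

    public static void main(String[] args) {
        Job pipeline = new WorkflowJob();
        // Both are Jobs, but Pipeline is NOT an AbstractProject, so any
        // promoted-builds code that casts to AbstractProject fails for Pipeline.
        System.out.println(pipeline instanceof Job);             // true
        System.out.println(pipeline instanceof AbstractProject); // false
    }
}
```

The same split exists on the run side: AbstractBuild and Pipeline's WorkflowRun are sibling subtypes of hudson.model.Run.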
Pipeline itself, because Pipeline is a separate job type. That said, if you see a use case for supporting jobs in general, for example supporting a freestyle job as the promotion logic for a pipeline, then you can definitely put that in your proposal if you feel it's reasonable. But our expectation is that at least Pipeline promotion logic is supported.

Okay. The next question: the project proposal mentions "detach the existing promotion condition extension point and implementations from the Promoted Builds plugin". What does that mean? Getting rid of all the existing extension points of the plugin?

What it means is a common Jenkins architecture thing: there are many API plugins, that is, plugins which provide shared functionality and are reused by other plugins. Promoted Builds already includes extension points for defining promotion conditions: for example, a manual promotion condition where somebody clicks a button, or conditions depending on other events in Jenkins. There are maybe a dozen implementations of this PromotionCondition. The idea of this line is that if you create an API plugin and move the reusable functionality out of Promoted Builds, then other plugins can consume this functionality without pulling in the rest of the Promoted Builds code. So you detach the plugins and effectively reuse the reusable components of the existing plugin.

Okay, so for the code that is to be consumed by Pipeline, I should declare the extension point, and the class marked with the @Extension annotation is discovered and read. That's the exact mechanism of how Pipeline picks it up, right?

Sorry? Okay, okay.

Then the last question: you mentioned in your vision that in the properties section we can include the environment where the build is going to be deployed. Can we do that by using the existing fingerprinting that the Promoted Builds plugin already uses?

What do you mean by "deployed" in this case?

Meaning that certain builds, after being promoted, would be deployed into some next stage, passed on to the next stage. Just to track where each build is being deployed: can we use fingerprinting? That was my question.

Yes. Fingerprinting can be used for tracing any dependencies between builds, and between any kinds of artifacts within Jenkins, and you can use it here to track those dependencies.

Okay, the actual last question then: if I need to refactor the classes which are not compatible with Pipeline, what should my general approach be to make them compatible? For example, do I need to replace AbstractBuild with some other build type, or change the promotion job type?

Again, the general practice in the Jenkins project, Promoted Builds included, is that we are interested in retaining compatibility where possible. If you see that you need to perform significant refactoring, there are ways to retain binary compatibility. There is a special wiki page which explains how to do that; if you work on any project and need to make a massive change, it's a page you want to start from. "Jenkins binary compatibility", I believe, or "Hints on retaining backward compatibility". It references several Jenkins-specific ways to retain compatibility, for example when you change the data format, or remove a field or a method. Any significant API change is definitely something which will require significant design, and before doing that you should see whether there are alternative options. And that doesn't apply only to this project.
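Under the hood, Jenkins fingerprints are MD5 checksums of artifact files, each with a record of which builds produced or used the artifact; that record is what lets you trace where a promoted artifact went. A minimal self-contained model, where a plain map stands in for Jenkins' real fingerprint storage:

```java
// Simplified model of fingerprint-based tracking: the checksum of an
// artifact maps to the list of builds that produced or consumed it.
// (Real Jenkins stores this in Fingerprint records; this map is a stand-in.)
import java.security.MessageDigest;
import java.util.*;

public class FingerprintDemo {
    static String fingerprint(byte[] artifact) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(artifact);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // fingerprint -> builds that touched the artifact
    static Map<String, List<String>> usage = new HashMap<>();

    static void recordUsage(byte[] artifact, String build) throws Exception {
        usage.computeIfAbsent(fingerprint(artifact), k -> new ArrayList<>()).add(build);
    }

    public static void main(String[] args) throws Exception {
        byte[] war = "app-1.0".getBytes();
        recordUsage(war, "build-pipeline#12");   // artifact produced here
        recordUsage(war, "deploy-to-staging#3"); // artifact deployed (promoted) here
        System.out.println(usage.get(fingerprint(war)));
    }
}
```

Looking up the checksum of a deployed artifact then answers "which builds touched this file", which is exactly the deployment-tracking question asked above.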
It applies to all projects. For example, in one of the plugin-development tutorials I came across the fact that variable substitution is not performed for plugins automatically; an idiom was given, and by including it a plugin can be made compatible with both forms of projects, Pipeline and freestyle. It mentioned something like that about the extension point idiom. Would that be the right tutorial?

I'm not sure. Again, when you have a specific question like this, you may find it's not really effective to ask it on a call, because sometimes it's hard to express what you need. That's why we recommend using the mailing lists or the Gitter chats, which can be more effective in many cases.

Okay, thank you.

So I propose to take it offline, because we have limited time and other students. If we have some time after the call we can return to it, but my proposal is to rotate to the other questions. Probably next time we will try to timebox questions to five minutes and rotate between students.

Sure, thank you. Let's see if we can discuss it a bit later.

Can you see my screen now? Okay. So I have some questions concerning AWS EFS support. Regarding cloud storage, for example for workspaces, I found some really confusing points in this idea; there are some problems with AWS EFS support. The first is that it only works with EC2 instances, so it is tightly integrated with the existing AWS EC2 offering. The second is that AWS expects users to mount EFS onto EC2 themselves: when I create an EC2 instance, I only need to run one command to mount an EFS filesystem on it. So if we want Jenkins to do this job for the user, it might be much harder than the user doing it themselves, and there might not be much benefit to users if they have to go through Jenkins.

It depends on how you implement the EFS part. If we talk about the use case where you provision an agent that has EFS out of the box, then yes, you probably don't need to implement anything specific. But imagine you need to provision EFS on demand within a pipeline: for example, your pipeline reaches some massive testing or coverage stage and needs more storage. In that case you can use the EFS APIs to attach more storage to your machine; that's one of the use cases in the agent-provisioning model. Another use case is sharing the same EFS instance between machines, which could also be done inside the External Workspace Manager plugin: for example, you need to pass a workspace between multiple agents, and you use EFS to implement that. And again, these are just my examples. EFS is just an example: if you don't see much benefit in using EFS as the reference implementation, you may think about other implementations, because EFS was only provided as one possible example.

Actually, I discovered something interesting during my exploration of EFS. I think it is doable for AWS EFS, and there is existing groundwork in the EC2 plugin. Basically, I think the hardest part of implementing EFS support is access rights: I need to make Jenkins act as the user, so I need the user's credentials to log into their AWS account. I think this is the most difficult part, but it's already done in the EC2 plugin, which is a widely used plugin. The EC2 plugin has already implemented the means to get access to the account, and through that access we can send requests to AWS; that includes creating EC2 instances, launching them, and other things a normal user can do. So basically, if we really want to implement EFS support, we can reuse some of the EC2 plugin's code. I don't know whether that is feasible, if you have any thoughts.
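The workspace-sharing use case Oleg described, passing a workspace between agents, reduces to both agents mounting the same filesystem. A local sketch, with a temporary directory standing in for an EFS/NFS mount point (no AWS calls involved):

```java
// Model: two "agents" see the same mount point (a temp dir stands in for
// an EFS/NFS mount). Agent A writes the workspace, agent B reads it with
// no copy step -- the whole point of a shared-filesystem workspace.
import java.nio.file.*;

public class SharedWorkspaceDemo {
    static void agentWrite(Path mount, String file, String content) throws Exception {
        Files.createDirectories(mount.resolve("workspace"));
        Files.writeString(mount.resolve("workspace").resolve(file), content);
    }

    static String agentRead(Path mount, String file) throws Exception {
        return Files.readString(mount.resolve("workspace").resolve(file));
    }

    public static void main(String[] args) throws Exception {
        Path mount = Files.createTempDirectory("efs-mock");
        agentWrite(mount, "result.txt", "tests passed"); // "agent A"
        System.out.println(agentRead(mount, "result.txt")); // "agent B" sees it
    }
}
```

On real infrastructure, the only extra step is ensuring both agents have the same filesystem mounted at the same path before the build runs; the plugin logic itself stays filesystem-agnostic.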
There are multiple ways to do that. First, you may want to declare a dependency on the EC2 plugin directly. As we discussed before, there are extension points in Jenkins, and these extension points may be mandatory or optional; so, for example, you can declare an optional dependency on the EC2 plugin, with optional extension point implementations, so that the External Workspace Manager plugin can still be used without EC2 if needed. Or there may be other ways, for example moving the shared functionality out of EC2 into another plugin, an API plugin. For instance, there is a plugin called AWS Credentials which allows managing AWS credentials; maybe something similar could be done for EFS. That's just one example. So if you really want to focus on EFS in your project, it's nice that you discovered this EC2 plugin; the recommendation is to think about how you integrate these pieces with each other, and it would be a great part of your project proposal to explore this area and show how you put it together.

Okay. Actually, here is another idea: I think that if we want to support different cloud service providers, we should offer an API plugin for each of them. I think that is the better practice, because any other plugin that wants to use the API then just needs to depend on the provider's API plugin. As you mentioned, there is the AWS Credentials plugin, and I think it might be really helpful here, because I can just use it and then use some client to send requests to the provider; that would be more doable for me. And another point: in the original project ideas, I think some cloud service providers weren't considered. For example, Alibaba Cloud is a very good cloud service provider in China, so as a Chinese student I would like to implement something for Alibaba Cloud. For me, the best way to support the External Workspace Manager on a specific cloud service provider is to first implement a cloud-provider API plugin, and then use it to log into the provider and issue requests. I think this is a more doable design, because it decouples what we do in the External Workspace Manager from logging into the service provider. What do you think: would it be a feasible design?

If you want to do such a thing, you could also take a look at jclouds. There is a project called Apache jclouds; it's effectively a set of libraries for working with different cloud platforms through unified APIs. I've pasted the link in the chat, so it might be one of the items you'd like to look at if you want to go with this unified-API idea.

Since that discussion is over, I'd like to consolidate on which approach we really want, because in the proposal I'm not sure which one is better. Can anyone give me suggestions on how I should implement, for example, AWS EFS support in the External Workspace Manager, from the choices I mentioned above? Maybe Alex or Martin would like to add something, or anybody else?

I'm not sure I understood the question; can you quickly repeat it, please?

Basically, there are two ways to add AWS EFS support. The first is that I write the code that logs into the cloud service provider, acting as an AWS user, directly in the External Workspace Manager codebase. But if we want to support multiple cloud providers with that design, the codebase of the External Workspace Manager will become really large. The other solution is that I first write an AWS (or other cloud provider) API plugin that offers the login functionality, acting as a cloud user, and then I can use it to send the actual requests.
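For reference, the optional-dependency route Oleg described is declared in the plugin's Maven POM. A sketch follows; the groupId and artifactId match the real EC2 plugin, but the version is illustrative, so check the current releases before copying:

```xml
<!-- pom.xml of the external-workspace-manager plugin (illustrative version;
     verify coordinates against the real plugin POMs before use) -->
<dependencies>
  <dependency>
    <groupId>org.jenkins-ci.plugins</groupId>
    <artifactId>ec2</artifactId>
    <version>1.50</version>
    <!-- optional: the plugin still loads when EC2 is not installed -->
    <optional>true</optional>
  </dependency>
</dependencies>
```

With `<optional>true</optional>`, the External Workspace Manager loads even when EC2 is absent, and EC2-specific extension implementations can additionally be marked `@Extension(optional = true)` so Jenkins skips them when their classes cannot be resolved.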
Using those API plugins to send the provider-specific requests is, I think, the cleaner design of the two. Which one do you think is better?

Yeah, I agree; I think the second one is cleaner, because you actually decouple the login and credentials logic from the External Workspace Manager into another plugin. But I'm wondering if you can integrate this somehow with what Oleg mentioned earlier, the AWS Credentials plugin or something like it. I'm not sure whether they can be integrated so that you don't need to write yet another plugin; you could just rely on that plugin for the AWS credentials. But the main idea, separating these out, is I think better than having the code inside the External Workspace Manager plugin.

Yes. For AWS EFS there may already be existing plugins to use, but for other cloud providers, like Alibaba Cloud, I might need to write different plugins, or use the unified library Oleg mentioned, jclouds. I'm not yet sure what it covers, but it looks like it can offer a uniform API to all the cloud service providers; after some investigation I can clarify that in my proposal. And here's another question: can I write these ideas into the existing "cloud storage support for External Workspace Manager" project idea, or do I need to write up an individual project idea?

I would say you should write it in the same document, though maybe Oleg knows how this should best be done. You could write out your ideas and say: okay, if we want additional cloud storage with different providers, then we should write this plugin; and then you write the details about the plugin you would like to additionally implement. So I would say the same document, but I'm not very strict about this. What do you think, Oleg?

I think that if you have multiple ideas like this in your project proposal, what we need is to actually understand what your approach would be: how you start the exploration, and how you make decisions. For example, you can say that during the community bonding and first coding phase you implement something generic, and then you have a decision point where you decide which option to proceed with. As long as it's traceable as a project, it's perfectly fine to have conditional options in the proposal.

Okay, so maybe, if time permits, I'm going to do this sort of thing. This really helped me a lot, because I didn't know how to make the ideas in my mind concrete. If anyone has other questions, please go ahead; I'm done with mine.

Okay, thank you. Any other questions? Okay, Natasha left the call. If there are no questions, we could probably close down.

Can I ask a question, if no one else has any? I'll need to share my screen; I hope it's visible. In the previous discussion, a doubt arose regarding this point: whether by implementing it we can discard both workspaces and builds, or only the builds, using the information obtained through fingerprinting. Martin and I were discussing it, and I was a little confused about this.

Maybe I can give a bit of context. Oleg, last session we discussed this, and we noted that the External Workspace Manager plugin is integrated with fingerprints: if you get the fingerprint for a specific workspace, you have all the information you need to actually delete that workspace. So Martin and I suggested making use of these fingerprints to be able to integrate with the build-discard logic and be able to delete the workspace.
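The suggestion, resolving a workspace from its fingerprint record and deleting just that workspace, can be modeled with a toy registry. The real plugin stores this information as fingerprint facets; the names and structure here are invented for illustration:

```java
// Toy registry: fingerprint checksum -> workspace path on some node.
// (A stand-in for the External Workspace Manager's real fingerprint facets.)
import java.util.*;

public class WorkspaceDiscardDemo {
    static Map<String, String> workspaceByFingerprint = new HashMap<>();
    static Set<String> existingWorkspaces = new HashSet<>();

    static void register(String fingerprint, String workspacePath) {
        workspaceByFingerprint.put(fingerprint, workspacePath);
        existingWorkspaces.add(workspacePath);
    }

    // Given a build's fingerprint, locate and discard only the workspace.
    // The build record itself is handled at a higher level, as discussed.
    static boolean discardWorkspace(String fingerprint) {
        String path = workspaceByFingerprint.remove(fingerprint);
        return path != null && existingWorkspaces.remove(path);
    }

    public static void main(String[] args) {
        register("3858f62230ac3c91", "/mnt/disk1/job-a/ws-42");
        System.out.println(discardWorkspace("3858f62230ac3c91")); // true
        System.out.println(discardWorkspace("3858f62230ac3c91")); // false: already gone
    }
}
```

This keeps the plugin's responsibility to workspaces only: a build-discard hook upstream would call into something like `discardWorkspace` as one step of its procedure.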
Does that make sense?

Yeah, it definitely makes sense. But from the workspace path we can also get some information about the builds, right? So my question is: can we discard the builds using that information, or do we need to go through another path for discarding the builds that ran in these multiple workspaces?

The thing is, I don't know exactly how the discard logic works, but I would assume that you just give it a path.

I'll give an overview. The delete method currently used by the log rotator first fetches the path of the root directory, and using that root directory it applies some logic and discards the builds. So what I thought is that through fingerprinting we can definitely discard workspaces, but we can also get the path of the root directory, and by using that we can reach the corresponding build and discard that build too.

I think the main idea here would be to discard workspaces. Discarding builds is not the main purpose of this plugin; yes, you could do that, but the workspace logic should only be discarding workspaces. For builds there should be a hook: when something decides to discard a build, that process happens, and then there is a place, a listener as we discussed before, for discarding the associated workspaces as part of the build-discard procedure. So for the External Workspace Manager-specific path, I would expect it to manage only workspaces, because build deletion will be handled at a higher level.

Okay, so for the specific implementation within the External Workspace Manager, you suggest discarding only the workspaces, not the builds? Builds can be handled too, but at a higher level, right?

Yes.

Okay. So according to this point, workspaces will be discarded according to the user's configuration; this point is much clearer now. Martin, I think this comment can be resolved, right?

Right. I wanted to make sure the understanding was correct: workspaces do not contain builds; it's build objects that have references to workspaces, ultimately.

Okay, I've resolved it. And one more point I need to discuss with you, Oleg: your comment regarding the feature I mentioned. What I meant is that I wanted a sort of recycle-bin feature: if, due to some user mistake, a few important builds get discarded, and a user then wants some information about them, this feature would make that information accessible. I have written that if time permits, and if all my other features are completed, I will try to implement it. You commented that it might be preferable to have it at a higher level, outside this plugin; that's fair, but I have added a few points about it in the extra-features part of the proposal.

One comment on discarding workspaces: the current git plugin uses workspaces for triggering. If you have wildcards there, wouldn't it be a problem if you just discard the workspace? It will trigger itself again.

At that point workspaces would indeed be needed, but what I am suggesting is a recycle-bin-like feature where some details of the
discarded builds or workspaces remain available to the user. So in any case of emergency, or if something was deleted by mistake and anyone wants information about it, we can still provide the user that facility.

Well, actually, I don't know historically why the git plugin needs the workspace for triggering when it's wildcarded, but it probably needs to be looked at, so that we don't get into a continuous loop of triggering and deleting workspaces.

Okay, I'll have a look at it.

The current git plugin will use the workspace for polling, that is, fetching from the remote server in order to figure out whether there is a new commit, for example. And it will also trigger if it doesn't have a workspace: the first time you configure it, it will actually create the workspace. So if you then delete the workspace, you basically end up in an infinite loop of builds.

Okay, I get that; I'll look at it once more. And one more thing to clarify with you, Oleg: you commented here asking why I mentioned the data-structures point in the proposal. What I meant was that, first of all...

If you could give me some time to process your feedback; I haven't taken a look at your proposal yet. I'll try to do it tomorrow, on Friday.

Oh yes, sure, no problem. There are a few changes needed in the proposal anyway, so I'll make them within one or two days and then post a message in the Gitter chat room of the GSoC build-discarder project, so at that time you can review the latest version of my proposal. And one more doubt: apart from giving the proposal for review to the mentors in the specific Gitter chat room, where should I post my document link so that any mentor, or anyone from the community, can give comments or suggestions?

If you want to ask something specific to development, the best way is to ask in the Jenkins developer mailing list; or, for specific questions not directly related to your project, there is the Jenkins Gitter channel and the IRC channel. So use these generic resources for common questions; that would be my recommendation if you want more people participating in the review.

Sure. After my final proposal, after your one or two reviews and all the changes, I'll post the link to two mailing lists, the GSoC open-for-all mailing list and the developer mailing list, so that the mentors and the community can go through the proposal and I can get more reviews.

Exactly.

Okay, so are there any other questions left? We have a few minutes left, and several mentors on the call, so if there are topics to discuss we can briefly do that. Otherwise, let's continue in the Gitter chats. As I said in the chat, I will think about how to improve communication in these meetings: we probably need to timebox the questions somehow, and we should really move most technical discussions to the mailing lists and Gitter channels, because on a call some people lack the context, so that would be more efficient.

Sure. And one more request: if you see that any other section needs to be present in my proposal, please inform me so that I can add it. I have taken care of most of the sections, like my personal information, synopsis, project
description, timeline, my working-time commitments, and all that. But it's already 15 pages.

We don't really want proposals submitted in several volumes. I think the critical sections are addressed there, and if you want to add something you are welcome to do that, but as I said before, proposals are usually two to five pages; when you go beyond that, most likely the additional information should be moved to an appendix.

Okay, yes. I'll go through it once more and remove the information that is not really relevant, so that the size can be reduced.

Okay, so if there are no other questions, thanks everybody for participating. Again, we have the chats and the other public resources; see you there. Okay, see you, bye.