Hi, my name is Itai. Welcome to my talk, "Unveiling Kitteh: Improving Developer and Maintainer Velocity", in which I'm going to describe RepoKitteh.

First of all, let me assert that cats are a very important part of computer science. As you can see, "Google's artificial brain learns to find cat videos" — that was done some time ago, if I remember correctly. Also, 50% of the traffic on the web is cat related, which is quite significant. And your favorite open source project, Envoy, has a lot of cat-related tags on it. And obviously, let's not forget about Octocat, which you can see here, and which we actually deal with a lot.

First, let's define the need for automation. When a project has a growing number of contributors, like Envoy, there are procedures that need to be set in place. These procedures need to be enforced, and better yet, automated if possible. We don't want a human doing all the bureaucracy by themselves, because that just doesn't scale. So there is a need for something that can do it.

For contributors, some operations need to be performed under customized constraints: every project has different needs, and these need to be accommodated appropriately. For instance, people assign people to issues and PRs, but only specific people are allowed to do that; or people need to be informed about project procedures, like an enforced PR body structure. For maintainers, automation can help with issue tracking, specialized approval policies, and enforcing various rules, which can be done using the kind of scripting I'm going to show here.

The approach I've seen in the past at big companies like Lyft and Twitter, when interacting with a source control repository, is to build a custom GitHub app (or similar) that consumes events from GitHub and codifies, in the app's code, the behavior that the company wants to enforce and support.
This is usually not really reusable, and a lot of these internal apps also crust over and bit-rot, which causes a lot of pain as the company grows. It's also not really appropriate for many open source projects, which don't always have the resources to develop and deploy such a thing. There are some custom applications out in the wild, but combining them all together is problematic.

With this approach, there are a lot of things to take care of: maintaining the actual service, authentication, monitoring, high availability, secret management, adapting to the GitHub API changes that happen once in a while (though they are much better about that now), dealing with GitHub API oddities (which mostly concern the ordering of events), and something that is often neglected: preserving issue context. Sometimes you want a store holding some state for your PR that you cannot easily maintain elsewhere.

So that's the traditional approach, and what I'm actually proposing here is RepoKitteh. RepoKitteh does most of that work for you. It replaces your internal GitHub app, or whatever other integration you're using, with one product that takes care of most of the non-business-related concerns. Everything on that list is now taken care of: where previously you needed to maintain and worry about all of it, now you only need to care about the behavior you want.

This is done by formulating the right abstraction for your business logic to live in and operate through. With the right abstraction in place, there is no need to worry about intricate GitHub event details like ordering or other oddities in the API; you only deal with the business logic you care about. For that, the right abstraction is required.
I found that Starlark, which comes out of Bazel, is actually perfect for this. Starlark is a language intended for use as a configuration language. It was designed for the Bazel build system, "but may be useful for other projects as well" — which I'm very happy to do. Starlark allows for deterministic evaluation, hermetic execution, and parallel evaluation, which is exactly what you need, because it enables a low-cost, low-overhead serverless architecture where only Starlark code is executed. No containers are required: we can take a script, with all the appropriate constraints applied to it, and just run it. It can do only what we allow it to do.

Starlark is very easy. It's a dialect of Python with very few differences, and those differences are important, because they provide exactly the properties we like: deterministic, hermetic, and concurrent. RepoKitteh uses the excellent starlark-go module by Alan Donovan, which is amazing.

This is what the RepoKitteh architecture looks like. There is a RepoKitteh GitHub app that you install on your repo; you don't need to write any new GitHub app of your own — you use the app already developed and provided for RepoKitteh. The RepoKitteh engine consumes events from the GitHub app, and it can also call various GitHub API functions. It executes Starlark scripts on demand; I'll explain where they reside and how they are written very soon. There are various facilities we can use to make it easier to do what we want: a very nice UI that shows exactly what a RepoKitteh script is doing or has done, which is persistent and can be introspected at any time; secrets that can be supplied to scripts; and per-issue or per-PR context, which I'll elaborate on later.
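To give a feel for the dialect: most Python-looking code is also valid Starlark, but a few things are deliberately missing so that evaluation stays deterministic and hermetic. A small illustrative sketch (the function and label names here are mine, not from RepoKitteh):

```python
# Valid Starlark (and also valid Python): functions, lists, dicts, comprehensions.
def label_for(action):
    # Map a GitHub event action to an illustrative label name.
    mapping = {"opened": "triage", "synchronize": "needs-review"}
    return mapping.get(action, "unknown")

labels = [label_for(a) for a in ["opened", "synchronize", "closed"]]
# labels == ["triage", "needs-review", "unknown"]

# Deliberately NOT allowed in Starlark, by design:
# - `import`             (no access to the host environment -> hermetic)
# - `while` loops and unbounded recursion (evaluation always terminates)
# - mutating module globals after load (values are frozen, so scripts are
#   safe to evaluate in parallel)
```

Those restrictions are exactly what makes it safe to run untrusted repo scripts with no container around them.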
On the other side, there is also a UI — currently quite simple, but it's going to be improved later on — which the nice cat lady here can use to interact with all the components.

Demo time. I will now demonstrate how you can enforce a backport annotation in your PR body. There's a lot included in RepoKitteh that lets us focus on what we actually need to write, instead of tending various other services. For example, RepoKitteh includes tooling and APIs for frequently needed capabilities: secret management, as I said before; debug output via tracing; GitHub API access; a fine-grained permission model, which GitHub itself does not give you but which RepoKitteh can enforce; modules with version pinning; and much more.

So how does it all work? In the root of the repo there is a file called repokitteh.star. This file is the root module: whenever there is an event that needs to be acted upon, RepoKitteh knows to open this file and evaluate it.

Here is a very simple example of a RepoKitteh script. We are registering a slash command: when you type, say, /backport in a comment, the _backport function will run and put a backport-review label on the issue or PR. I included references to the RepoKitteh documentation in each slide, which I hope I can send later, so you can click through and see how things work. handlers.command registers a command handler that is executed on slash commands. Here is a simple demonstration from a PR on Envoy: someone typed /backport, and you can see that RepoKitteh added the required label to the PR. And this is an example of documentation from the reference manual, which you can look at now if you want.
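A minimal repokitteh.star along the lines of that slide might look like the following. This is a sketch from the talk's description, not verbatim API documentation: handlers.command and github.issue_label are the calls the talk names, while the exact parameter names and the label string are my assumptions.

```python
# repokitteh.star -- the root module at the top of the repository.
# Evaluated by the RepoKitteh engine whenever a relevant event arrives.

def _backport(issue_number):
    # Put a label marking this issue/PR for backport review.
    # (Parameter and label names are illustrative.)
    github.issue_label(issue_number, "backport/review")

# Register "/backport": typing it in a comment invokes _backport.
handlers.command(name="backport", func=_backport)
```

Note that this runs inside the RepoKitteh engine, which supplies the github and handlers objects; it is not a standalone script.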
It shows the github module, which has an issue_label function, with a description of each argument, and it points you to the underlying GitHub API that it actually calls.

This is an excerpt from a module supplied by RepoKitteh, which you can check out later. In this case we register for the pull_request event: when we get a pull_request event from GitHub, we can act on it. If you're not familiar with the pull_request event, just follow the link and read about it; it's not that difficult. Here we handle the "synchronize" action on the pull request, and you can see we're doing some high-level operations according to criteria defined in this module.

Handlers receive context when executed. As you can see in the definition, there are pull_request, action, and labels parameters. These are populated dynamically, depending on which parameters you declare on the handler. You'll see where these parameters come from a bit later.

This is an example of the use statement. use loads a module, either one defined in RepoKitteh or one you wrote yourself. First you specify the path of the module to load; it can reside in a private repository, as long as the app has permission to access it, depending on where you installed it. Configuration can be supplied to the module when it's loaded: every event handler in the module will receive this configuration and can use it. Some generic modules are supplied for you as part of RepoKitteh; they are open source, and they live at this path.

In this example, the module supplies a reconcile function, which is registered using handlers.command: when /checkowners is typed in a comment, reconcile is called with the config argument. The config is passed on to get_specs, which reads the paths parameter from it.
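A hedged sketch of what a use statement with configuration might look like. The module path follows the open-source modules location the talk mentions, but the exact path and the paths entries are illustrative, not Envoy's actual configuration:

```python
# In repokitteh.star (the root module -- `use` may only be called here).
# Load a shared module and hand it configuration; every handler that the
# module registers will receive this config when invoked.
use(
    "github.com/repokitteh/modules/ownerscheck.star",  # illustrative path
    paths=[
        # Illustrative entries: who must approve which parts of the tree.
        {"owner": "alice", "path": "api/"},
        {"owner": "bob", "path": "security/"},
    ],
)
```

Because use only registers the module's handlers, nothing new appears in the local namespace after this call.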
There is another way to load modules: the Starlark load statement. Instead of registering a module's handlers, load brings a function defined in another module — be it an internal RepoKitteh module, like text here, or a third-party module, like the utils and circleci ones here — into the context being evaluated. For instance, text.match here is called after being loaded.

So what is the difference between load and use? load is a Starlark built-in that brings in functions from other modules, and it can be called from any module. use is a RepoKitteh function that registers handlers from other modules, and it can be called only from the root module. Note that use does not bring any new names into the local context; it just tells RepoKitteh to load that module and register its handlers.

RepoKitteh allows state to be stored per issue or per PR. This is useful when state needs to be stored for later use. For example, in this excerpt from the RepoKitteh owners-check module, we store who approved which paths in a PR, which lets us see later on whether all relevant paths were approved, and by whom.

You can supply secrets to a module. In this case, we use the get_secret function to fetch a secret that was registered through the RepoKitteh UI. get_secret can be called only in the root module, meaning only in repokitteh.star. You can then pass the secret on, for example inside an HTTP URL. When any parameter name begins with secret_, its value will not be shown in the traces later on and will not leak into RepoKitteh debug information. This will become clearer when I explain traces.

So, here I'm explaining traces. Let's say that RepoKitteh posted some kind of comment on your PR.
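A hedged sketch combining those last two ideas — load-ing a helper function, and passing a secret under a secret_-prefixed parameter so traces redact it. The secret name, the notify module, and its parameter are all hypothetical; get_secret and the secret_ redaction convention are from the talk:

```python
# In repokitteh.star (root module).
load("text", "match")  # load: a Starlark built-in bringing `match` into scope

# get_secret may only be called from the root module; the secret itself
# was registered earlier via the RepoKitteh UI. (Name is illustrative.)
token = get_secret("notify_token")

# Any parameter whose name starts with "secret_" is redacted from traces
# and debug output, so the token never appears in the UI.
use(
    "github.com/repokitteh/modules/notify.star",  # hypothetical module
    secret_webhook_token=token,
)
```

The secret_ prefix is what connects this to the trace view shown next: the module list in a trace includes each module's configuration, minus exactly these parameters.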
In this case, this is an example of the custom owners check for Envoy, and you can see there's a little smiling cat at the bottom. When you expand it, you can see a bit of debug information about how the event was processed. You can press the trace link, which brings you to the tracing information page, where you can see exactly what happened with this event.

First, you can see the event payload from GitHub, which is long, but can be very useful for debugging. Then you can press the evaluation tab, and you get a lot of information about how the event was evaluated. The first interesting thing is the context: the context is where the parameters given to handlers take their values from. This is an example of a context. Every field can be consumed inside an event handler: all you need to do is declare it by name as a parameter of the event handler, and it will be populated for you when the handler is called.

Another interesting thing is call tracing. If you expand the calls, you can see all the methods that were called from your script. A dollar sign before a name says that the function can be used only in the root module. You can see all the modules that were used, along with their configuration, minus the secrets, because they begin with secret_; and you can also see an example GitHub call and exactly what it returned.

Now let's look at pinned refs and modules. Pinned refs lists all the references that were pinned. In the case of Envoy, we always pin the RepoKitteh modules to a specific SHA; if a module gets updated, we can re-pin it in Envoy to adopt the change. And these are all the modules that were loaded, along with the exact SHA they were taken from. This is very useful for debugging, obviously.

So let's talk a bit about RepoKitteh specifically in Envoy.
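The name-based parameter population described above can be sketched in plain Python. This is my own illustration of the mechanism, not RepoKitteh's actual dispatcher: the engine inspects a handler's signature and passes in only the context fields the handler declares.

```python
import inspect

def dispatch(handler, context):
    """Call handler with only the context fields it declares as parameters."""
    wanted = inspect.signature(handler).parameters
    kwargs = {name: context[name] for name in wanted if name in context}
    return handler(**kwargs)

# A handler declares just the fields it cares about, by name.
def on_pull_request(action, labels):
    return (action, labels)

# The full event context holds far more than any one handler needs.
context = {
    "action": "synchronize",
    "labels": ["backport/review"],
    "pull_request": {"number": 123},
    "sender": "alice",
}

print(dispatch(on_pull_request, context))
# -> ('synchronize', ['backport/review'])
```

The same context can serve many handlers: a handler declaring only `sender` would receive just that one field.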
You can see documentation for every command that Envoy uses at the link above. The root module, obviously, is in the root of the Envoy repository. Envoy-specific modules live under ci/repokitteh/modules in the Envoy repo; these are modules written by Envoy people, not by me or anyone else on the RepoKitteh side, and they are simply used from there.

The available commands, which again you can see in the repokitteh.md linked at the top, are: /assign and /review, to assign specific people to a PR or request their review; /retest and /retest-circle, which relaunch Azure Pipelines or CircleCI, which is very useful; /wait and /wait-any, which are mostly for contributors: when they're expecting a response from someone, they type /wait and the issue is labeled as waiting, and whenever someone else (or even they themselves) writes a new comment, the label is removed, so they know there's a reply. And another command, /backport, which adds a backport label to a PR. Also, Harvey Tuch made an awesome custom owners check for Envoy — if you've contributed to Envoy, you probably know it — which alerts when specific owners need to review specific paths in a PR.

GitHub Actions. You're probably asking, okay, so what's wrong with GitHub Actions? Nothing is wrong with GitHub Actions; they essentially supply the same functionality. The difference is that Actions are more optimized for long-running processes like CI and deployment. You can do shorter tasks with them, but you'll battle slightly higher latency, which is not very comfortable. They are more resource intensive and generally more cumbersome to implement, since they require Docker and containerization, which means more layers of moving parts, and thus a slower turnaround time for getting an action developed and running. RepoKitteh is optimized for short-running actions, enabling much lower latency, faster turnaround time, and lower cost.
Because we're using something very low overhead — just Starlark, in-process, or in a separate process for higher security — there is no containerization, no waiting for Kubernetes to spin up another pod or container and so on; it just gets the event and runs as fast as possible. We actually run on very few instances in GCP for Envoy, and I think the highest CPU usage I've ever seen was about 2% or 3%. So if you need something that is very low cost and very lean and mean, this is for you.

So what's in the future for RepoKitteh? There is still a lot to be done. It has been working very nicely for Envoy for about two years now, but I would really like to integrate it with GitLab, which will be awesome, and which I'm working on right now. It also shouldn't be hard to take it and run it on premise if you need that — for those who, let's say, pay a lot of money to GitHub for that. An improved UI: the current UI is very effective but pretty minimal. Script testing, and some kind of GitHub fake to enable that script testing, is very high on the priority list, and even higher is more documentation than I have now, in order to onboard more people. That said, my experience shows that people get onboarded on RepoKitteh very, very fast, which makes me happy.

I'm looking for more projects, so go to repokitteh.io/waitlist if you want to sign up, and I'll get to you the moment I can onboard more, which is pretty fast; I just want people to come. For more information there is repokitteh.io, where you'll find a link to the documentation and a link to open support tickets; please sign the waitlist if you're interested. I also want to do some kind of hands-on lab session once enough people sign up: just go to repokitteh.io/lab, sign up there, and specify your time zone, because that's important, and we'll organize a lab session, or maybe a few of them, to teach more people about RepoKitteh.
Thank you very much. Questions? Hi, I hope you can hear me. If you have any questions, I'd love to answer them.

Hey, good afternoon everybody, welcome. Feel free to ask questions in the chat. Yeah, Ryan?

Yeah, I'm considering it. I need to think about it some more. I don't think the code is really ready to be open sourced; it needs to be better documented, I think, because I'm the only one working on it. But yeah, I would consider it depending on the interest, and if you want it open-sourced, just let me know, whoever is here.

I'd like to say again that if you're interested in testing RepoKitteh, or interested in some kind of lab, go to repokitteh.io; you should have links there, I think, to both the lab and the waitlist. In any case, it's repokitteh.io/lab or repokitteh.io/waitlist. Also, if you have any questions, you can email me; I'll post the email in the chat for further questions.

Thanks, Harvey. If I'd known I'd have time, there would be more demos, but yeah, maybe some other time. Visan, who just entered the session here, did some interesting plugins for RepoKitteh. Also, Harvey did the owners check, which was really cool: based on my owners check, he made it more specific to Envoy.

I can't say enough how awesome Starlark is, especially the implementation by Alan Donovan. I think there's really no need to create a container for everything: if you have some kind of jailed environment like Lua or Starlark, you can just use that. Maybe I'll talk in the future, some other place, about the benefits of such environments; I think it would be very cool.

I'll give another minute for questions; if there's nobody, I'll just leave. If you do the lab, I will make you a RepoKitteh Certified Engineer — this is cool, I'll make a sticker. If you have any questions, feel free to email me, and also register on repokitteh.io. And yeah, I'll give you back the time. Bye bye.