Good morning, everyone. So there are four leads, and I don't think any of us is responsible for welcome slides, so I didn't prepare a welcome speech. I'll just jump straight to the part I'm going to talk about. OK, let me try to fix this.

First, I'll introduce myself. I'm Alex, and I work mostly on Argo CD. I've been working on the project for about five years; I guess that's why I'm representing it today. I will cover the project state as of today, talk about what we're going to release hopefully next week or the week after, and cover a few roadmap items.

The first slide is just a summary of the current project state. As you can see, the project keeps thriving. We keep getting GitHub stars, and GitHub stars by themselves don't matter much, but they're a good indication of how healthy the project is and how many people it's attracting. This year we've gained 3,000 stars, and the project finally passed the 10,000-star mark, which I think was really exciting for everyone who works on it. An even better indication of the project's popularity is how many people are contributing. We've had around 3,000 contributions since the beginning of this year, including commits and bug reports from almost 400 people, which is awesome, because it really proves the project is doing very well. The result of all this activity is features, actual code, and documentation changes. We've observed for maybe three years now that the project moves forward very consistently: we get roughly 100 commits every month, and this year is no different. So far we've got around 900 commits, which is the usual number for nine months of work.

The project also publishes new releases roughly every three months, and that's still happening: we've published two releases, 2.3 and 2.4. 2.5 is delayed, but that's also very common; we usually delay a release by two or three weeks. 2.5 is coming. It was supposed to be released a little while ago, but we're still working on last-minute bug fixes and finishing some feature polishing, so it will hopefully be out soon.

I wanted to quickly describe what's coming in the 2.5 release and walk through the features. Because there are four of us presenting, it probably doesn't make sense to take questions now, but I'm pretty sure we'll have time after this talk, so if you have particular questions about these features, please ask them once all four presenters are done.

All right, 2.5 features. The first one, and I think the most anticipated feature of the last year, is multi-source support. I hope everyone knows what "source" means in Argo CD; if not, I'll quickly recap. Argo CD has a CRD called Application. An Application has a source and a destination. The destination is a Kubernetes cluster and a namespace, and the source is a Git repository or a Helm repository. That worked perfectly fine for most use cases, but there are several important ones where users want to specify not just one repo but many repositories. The most prominent use case is when you have a Helm chart that you don't control, a so-called off-the-shelf Helm chart maintained by some community, and you have your own values that you want to use to generate manifests from that chart. As of 2.5, you'll be able to take the Helm chart, leave it unmodified, and connect your private Git repository with the values that are specific to your environment. You won't have to create a Git repository just to combine the Helm chart and the values files. To be honest, this feature is not merged yet; the pull request has been open for about a month. But the review and bug-fixing process is very active, and it has a really good chance to make it. Most likely it will be there.
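To make that concrete, here's a minimal sketch of what a multi-source Application could look like, based on the syntax in the open pull request; the repo URLs, chart name, and file paths are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook                 # hypothetical application
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  sources:
    # The off-the-shelf Helm chart, used as-is
    - repoURL: https://charts.example.com         # placeholder chart repository
      chart: guestbook
      targetRevision: 1.2.3
      helm:
        valueFiles:
          # $values resolves to the root of the source referenced as "values" below
          - $values/envs/prod/values.yaml
    # Your private Git repository holding only the values files
    - repoURL: https://github.com/my-org/config.git   # placeholder
      targetRevision: main
      ref: values
```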
OK, as I said, no questions for now, so I'll just move on to the next feature: server-side apply. It's maybe not the most visible feature, you won't see many changes in the user interface, but it's very important; I'd call it a quality improvement. Server-side apply refers to Kubernetes server-side apply. If you don't know, there are two ways to push changes into a Kubernetes cluster, and if you use Kubernetes, you use one of them whenever you work with kubectl. Client-side apply requires a lot of calculations to be done on the client side, and sometimes those calculations are not accurate. Another bad side effect is that the object you modify has to carry heavy metadata: the last-applied-configuration annotation essentially has to include the whole object. That's not great, because sometimes you want to modify a very large object, such as a custom resource definition or a ConfigMap with a lot of data, and the metadata basically duplicates it. That means you store data in your Kubernetes cluster unnecessarily, and sometimes you can't store the object at all: the total size limit for an object is one megabyte, and if you reserve half of it for metadata, that leaves you only 500 kilobytes, which may not be enough. Server-side apply resolves those limitations: you no longer need to carry that much metadata, and you can store more in the Kubernetes cluster. Argo CD finally supports this way of modifying objects in Kubernetes, and it will really improve the performance of Argo CD itself and of the clusters it manages.

OK, the next feature is another one that you, as an end user, might not see much of, but if you're an operator who runs Argo CD, you might really like it. It lets Argo CD manage Application definitions in multiple namespaces, as opposed to the single Argo CD namespace. What that means is you can now install Argo CD into the managed cluster, not give your end users access to Argo CD itself, and instead tell them: just create the Application in the namespace you have access to. That's one of the use cases this feature solves. Before, end users could create Applications only in the Argo CD namespace, and that introduces a lot of requirements: you must trust that user; the user basically has to be an administrator. With support for multiple namespaces, your end users can create Applications using kubectl, they don't have to be administrators, and they're protected: they won't be able to shoot themselves in the foot, because of how this feature is implemented. We can talk about the details later.
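For reference, here's roughly how these two features are switched on; treat both snippets as sketches rather than canonical config. Server-side apply is a per-Application sync option:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: big-crds            # hypothetical application
  namespace: argocd
spec:
  syncPolicy:
    syncOptions:
      - ServerSideApply=true   # use Kubernetes server-side apply instead of client-side
```

And applications in any namespace is enabled by listing the allowed namespaces in the Argo CD parameters, something like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # glob of namespaces where end users may create Application resources
  application.namespaces: "team-*"
```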
All right, I didn't mention before I started this slide how many major features we have in 2.5. Yes, the next feature is another long-anticipated improvement: the ability to manage ApplicationSets in Argo CD using the Argo CD API and CLI. Quickly, for context: ApplicationSet is another CRD in the Argo CD world that automates the creation of Argo CD Applications. Because it was meant for Argo CD administrators, it was at first only available as a custom resource that you manage in the cluster itself, so you had to be an admin to create or modify an ApplicationSet. We intended to expose it to end users, and this feature is the first step: we've created an API, and a CLI that uses that API, to create, delete, and modify ApplicationSets. The next step is to introduce a user interface, which is planned for the 2.6 release; 2.5 only got the API and CLI.

Then, finally, a feature you're going to see for sure: a set of enhancements to the main web page that all Argo CD users frequently use. We have the application details page, which visualizes application state and gives users controls over the application. There are a bunch of improvements; I listed the top three that I noticed, but there are more. The first is a new set of ways to group Argo CD application resources. I won't describe all the details of grouping, but the idea is that Kubernetes applications can include hundreds of objects, and the first version of Argo CD just showed all of them in the form of a tree. Sometimes it's impossible to make sense of that tree because there are too many objects on the page, but those objects have different kinds of relationships. One example: in the network view, you could see a Service object and all the pods the Service sends traffic to. In 2.5, you get a button called "group", and if you click it, instead of a tree you get a representation where each Service wraps the pods it sends traffic to. Suddenly you get the same amount of data visualized on the page, but it takes way less space, and it's much easier to understand what's happening.

OK, there is one last feature I wanted to cover. I don't think it's the biggest one, but I know it because I worked on it. Argo CD has a way to authenticate access to EKS clusters, the clusters that Amazon runs, and for a long time people were asking for the same feature for GKE clusters, the Google clusters. It's finally possible. You no longer have to package some shell script into your container to access GKE clusters; now you can just craft a Secret that represents the cluster, specify a couple of parameters that point Argo CD at your GKE cluster, and it will handle the authentication logic to generate the token. And this is it, that's all about the 2.5 release. I'm pretty sure there are way more features you might be curious about, so feel free to ask after the presentation.
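For the GKE case, the cluster Secret looks roughly like this; the server address and CA are placeholders, and the exec-provider wiring is how I recall it, so check the docs:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gke-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # marks this Secret as a cluster definition
stringData:
  name: my-gke-cluster                        # placeholder display name
  server: https://34.0.0.1                    # placeholder GKE endpoint
  config: |
    {
      "execProviderConfig": {
        "command": "argocd-k8s-auth",
        "args": ["gcp"],
        "apiVersion": "client.authentication.k8s.io/v1beta1"
      },
      "tlsClientConfig": {
        "caData": "<base64-encoded cluster CA>"
      }
    }
```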
Oh, I had one more slide: the roadmap slide. Again, I chose the features I think are important, and there are a bunch of maintainers here in the room. I'm pretty sure we have other features on the roadmap that might even make it sooner than the ones on this slide, so feel free to ask questions about those afterwards.

What I believe is on the roadmap and will most likely be part of the 2.6 release is ApplicationSets in the user interface. We just created the CLI and API, so it makes sense to continue and finally build the user interface for ApplicationSets.

Another feature that has been in development through the 2.4 and 2.5 releases is the config management plugins enhancement. What it brings is the ability to expose any config management tool, such as Kustomize or Helm or Grafana Tanka and so on, in the form of a plugin. Basically, you should be able to connect the tool just by making some config changes, but what the end user gets looks like first-class support. That's what this feature is about. I think we're almost done with most of the backend changes to make it possible, and in the next release I hope we'll finally leverage those backend changes in the Argo CD UI and deliver the feature; there's a sketch of the plugin manifest after this section.

The next enhancement is a merge of Argo CD Image Updater into Argo CD itself. You probably know what Image Updater is; if not, a quick summary: it's a tool that lets you connect your Docker registry and teach Argo CD to automatically upgrade images every time a new image is pushed to the registry. The little sentence I wrote here sums it up: seamless CI/CD integration without scripting. Without that tool, you usually have to write a Jenkins step or a GitHub Actions script that updates the deployment repository, and it's such a common use case that Image Updater just automates it. The project has been available for a couple of years already; it's definitely stable, and people use it in production. Finally, the maintainers of that project proposed to merge it into Argo CD and make it available to everyone, and everyone agreed. There are people who have already committed to work on it, so hopefully it will make it into the next release.

And then the last couple of features I found on our roadmap. I know they have a lot of user requests, and I'm basically using this meeting to remind all the maintainers about them. One is the ability to reference secrets in Argo CD parameters. I think it's one of the most requested features among end users, because it kind of brings secret management into Argo CD. At the same time, maintainers didn't want to commit to building it, because full secret management is a huge chunk of work to introduce. In the contributors' meeting discussions, we agreed on a compromise: let users manage secrets in Kubernetes Secrets and just reference them in Argo CD parameters. It's a good compromise because it lets end users connect whatever secret management solution they want and then leverage the secrets in Argo CD, and hopefully it solves the main use case: users will be able to use secrets to generate Kubernetes manifests without scripting.
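As promised above, here's a rough sketch of a sidecar plugin manifest in the new config management plugin scheme; `my-config-tool` and the discovery file name are stand-ins for whatever tool you're wrapping:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: my-plugin                 # hypothetical plugin
spec:
  version: v1.0
  generate:
    # any command that prints Kubernetes manifests to stdout
    command: [sh, -c, "my-config-tool render ."]
  discover:
    # apps containing this marker file are matched to the plugin
    fileName: "./my-tool.yaml"
```

This manifest is mounted into a plugin sidecar of the repo server, which is what makes the tool look first-class to end users.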
And then finally, another feature that can help a lot of CRD maintainers, basically Kubernetes project maintainers: the ability to specify parent-child relationships between objects. The best example is probably Argo CD itself. In Kubernetes, there is a native way to specify that one object produced another object, and Argo CD leverages that native way to visualize the resource hierarchy. But in some cases you do not want to specify that relationship explicitly, even though the relationship still exists. We want to introduce a configuration in Argo CD so that it can inspect things like labels or annotations on the object, infer the relationship, and visualize it in the UI. And this is it, the end of my slides. Thank you.

Hi everyone, my name is Jesse, and I've been working on the Argo project, also with Alex, for five or six years now. I'll be giving an update on Argo Rollouts. First, some statistics: we're at 1,700 stars, and I'm happy to say we surpassed Argo Events recently. I mean, it's not a competition, but we're not last. That's about a 40% year-over-year increase, which I think is healthy. This year we've done two major releases, 1.2 and 1.3, which I'll be going over, and they included 36 new features and 66 bug fixes. In terms of contributors, we have a total of 96, and 16 of them contributed for the first time this year.

All right. 1.2 actually released a few months ago, but I thought it might be useful to remind people what it was about, because I have a feeling not everyone follows Rollouts that closely. So I'll quickly go through 1.2 before getting to 1.3 and then the roadmap.

1.2 introduced something called dry run for analysis. One of the things we're trying to do with Rollouts is make it easier for people to get started, and one of the problems people face coming to Rollouts is: "hey, I don't trust my metrics just yet to gate my releases, so let me practice with some dry runs." So you now have the ability in Rollouts to mark an analysis as: just let it run, and if it fails, show that it failed, but don't actually abort the release. That's an analysis dry run; there's a sketch of it below.

Weighted experiment steps is a feature for when you're running an experiment step in your rollout for a baseline-versus-canary comparison, which is slightly different from just a traffic split. Up until this release, we didn't have the ability to leverage the traffic routers, like Istio, to do a three-way percentage-based split: say, 5% to the canary, 5% to the baseline, and 90% to your existing production workloads. The reason you want baseline versus canary is to have an apples-to-apples comparison, and you get that because the canary and the baseline start at the same time, so all your metrics are comparable. So that's available.
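Here's roughly what the dry-run knob looks like in a Rollout, going back to the analysis item above; `success-rate` is a hypothetical AnalysisTemplate:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app                    # hypothetical rollout
spec:
  strategy:
    canary:
      analysis:
        templates:
          - templateName: success-rate
        dryRun:
          # evaluate and report this metric, but never abort the rollout on it
          - metricName: success-rate
      steps:
        - setWeight: 20
        - pause: {}
```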
Next, ping-pong service management. This mainly applies to ALB: if you're using the AWS Load Balancer Controller, one of the challenges with their implementation is that they don't like it when you change the service selectors from underneath the controller, because then you can't take advantage of pod readiness gates in the controller. We had implemented workarounds for that problem by actually making AWS API calls to verify: hey, did my weight actually take effect? What the ping-pong feature does instead, and the reason it's called ping-pong, is shift the weight back and forth between two services, so we're never changing the service selectors at an inopportune time. This is compatible with how the AWS Load Balancer Controller works, and it's what the AWS team recommended in the issue instead of changing service selectors. That applies in general with the AWS Load Balancer Controller: just know that they don't like it if you change the target of a service. We also added AWS App Mesh support, plus high availability and some scalability and performance improvements. Those all came earlier this year.

OK, let's talk about what was just released today: 1.3. There are two things we've been talking about for years: header-based routing and traffic mirroring. If you're using a more advanced traffic router, namely Istio, which is the only one we support for this today, you can leverage its capabilities to do something like header-based routing. With typical Argo Rollouts you could do percentage-based routing before this, but now there's a new canary step where you can say: based on this HTTP header, send all the traffic to the canary. So if you need some stickiness during your canary rather than a random percentage-based distribution, this lets you do that. Maybe you want all your Firefox users to get the new version, or something like that. That's now possible with header-based routing; there's a sketch of the step below.

Similarly, traffic mirroring is a feature available in Istio where you can shadow your traffic to another service, in this case the canary. It's mainly used for read requests, so it won't mirror your PUTs and POSTs, but for GETs and the like, you can see how your canary behaves by mirroring traffic to it.

We added support for Traefik, so if you're using Traefik as your ingress for Rollouts, you'll have native first-class support. There are some improvements to the UI, the Rollouts dashboard, where you can expand the canary steps: previously you could only see what percentage had shifted, but there are actually more details to show there. The other UI improvement is around analysis: when you hover over analysis failures or successes, you get tooltips about how it actually failed. Oh, and InfluxDB was added as a first-class metrics provider.

But the thing that excites me most about this whole list is actually the last item. It's not really a Rollouts 1.3 feature; it's a new repo, a new project, created to have better native support for Kustomize. Kustomize is actually quite challenging to get to patch a Rollout the way you expect. Now you can just add a line inside your kustomization to reference an OpenAPI spec, and you get the same style of patching for a Rollout that you enjoy with Deployments. That took a lot of work to get right, so thanks to Zach for leading this effort.
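Here's a sketch of the header-based routing step with Istio, to the best of my recollection of the 1.3 syntax; the route and header values are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app                     # hypothetical rollout
spec:
  strategy:
    canary:
      trafficRouting:
        istio:
          virtualService:
            name: my-vsvc          # placeholder VirtualService
        managedRoutes:
          - name: canary-by-agent  # route that Rollouts is allowed to manage
      steps:
        - setHeaderRoute:
            name: canary-by-agent
            match:
              - headerName: User-Agent
                headerValue:
                  regex: ".*Firefox.*"   # e.g. send Firefox users to the canary
        - pause: {}
```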
All right, in terms of what I think is important for Rollouts next: we've been accumulating a lot of backlog on issues, and probably a lot of PRs to get to, so I think we're due for a hygiene slash maintenance release to catch up. At least in my opinion, we're a little overdue for that.

Then the number one most upvoted issue has been adding authentication to the dashboard. I've actually been resisting this, because the dashboard started out as just a localhost interface to view a rollout if you had a kube context to your namespace. That's how we thought it would exist. But then people asked: I'd like to run this as a hosted service in my environment. And we said, OK, that's not too hard; all we need to do is package an image and create some manifests for people. So now people are running it, but anyone who can get to the UI can abort a rollout, promote it, and do whatever they want. I think there might be ways to add a lightweight auth without going full-blown like we did with Argo CD or Workflows. It's definitely the most popular issue in Rollouts, so we should explore what it would take.

Gateway API: if you're following what's happening in Kubernetes, they're introducing what's often referred to as "ingress v2", a new standard for specifying ingress. Pretty much all of the major ingress providers are on board and providing an implementation that supports it. If Rollouts supports it, and it will, because there's already a PR for it, we get a lot of ingress support for free: Contour, Kong, and Kuma are all ingress controllers that already support the Gateway API. Hopefully, if we implement this, it'll be the last of the first-class integrations we ever have to do, because we'll expect the other ingress controllers to implement the standard.

The last one is better enablement. I mentioned this last year too, but one of the challenges with Rollouts is just getting started, and we could do a lot better in terms of documentation, how-to guides, and example templates for the well-known metric providers. If we had some of these easy guides, we could get better adoption. One thing not mentioned here: I also think tools, utilities that help people convert or migrate to a Rollout, would go a long way toward adoption as well. And I think that's all I had for Rollouts. Thank you.

Good morning, everyone. My name is Saravanan Balasubramanian, also known as Bala, a software engineer at Intuit and co-lead of the Argo Workflows project. I have been working on Argo Workflows for four years. Let's go to the summary of 2022. We are reaching 12,000 stars; this year we got about 2,000 stars, so please go give us a star if you haven't, it will help us reach 12,000. We have 600-plus contributors, and we got about 180 new contributors this year. We implemented 120 new features this year and fixed 250 bugs, so fewer bugs. We have about 1,800 commits in total, and we did two major releases this year.

Let's go to the releases. 3.3 was released a few months back, and I'd just like to touch on the major features in it. The first is lifecycle hooks, which help users integrate notification and monitoring systems with the workflow lifecycle. So if they want to hook up Slack or PagerDuty to the workflow lifecycle, they can use a lifecycle hook; a sketch follows below.
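Here's the sketch I mentioned: a Workflow with lifecycle hooks firing a hypothetical notification template; the webhook URL is a placeholder for your Slack or PagerDuty endpoint:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hooks-demo-
spec:
  entrypoint: main
  hooks:
    running:
      expression: workflow.status == "Running"   # fire when the workflow starts running
      template: notify
    exit:                                         # fire when the workflow finishes
      template: notify
  templates:
    - name: main
      container:
        image: alpine:3.16
        command: [echo, "doing work"]
    - name: notify
      container:
        image: curlimages/curl
        command: [sh, -c, "curl -d 'workflow status changed' https://hooks.example.com/notify"]
```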
The second one is plugins. Plugins let users extend Argo Workflows and define their own template types, which is a very powerful feature released in 3.3. We have some open source plugins in our GitHub repos for Slack integration, Python integration, and so on; please go and take a look. Another use case: if your organization has proprietary functionality that cannot be open-sourced but that you want inside Argo Workflows, you can develop it as a plugin. The third one is multi-tenant support for SSO RBAC. In 3.2, the UI supported only centralized, cluster-level RBAC; 3.3 lets you configure namespace-level RBAC for the Argo UI.

Let's go to 3.4; this is the one we released today. In 3.4 we mainly focused on artifact management, and it comes with two enhancements: artifact visualization and artifact GC. Artifact visualization enables displaying artifact contents in the Argo UI, so users can see them directly there. I put a snapshot on the slide: it shows an image artifact generated by one of the workflow steps, and when users click it, they can immediately see it in the UI; they don't need to go to any cloud storage. The second one is artifact garbage collection. Another problem users face is cleaning up the artifacts generated by workflows, so we implemented artifact garbage collection: users can define a strategy in the workflow for when artifacts should be deleted, either on workflow completion or on workflow deletion; those are the two strategies available now. The Argo controller will then automatically go and clean up the artifacts in whatever artifact repository they configure; there's a sketch after this section. Second, Argo Workflows completed a security audit by Ada Logics, so everything complies with the CNCF security norms. Third, we enabled Azure Blob Storage for artifacts. Those are the main features released in 3.4.

Let's go to the roadmap. Mainly, in 3.5 we'd like to step back, clean up our backlog of bugs, and stabilize the features we've developed so far, giving the community a little time to start using the new features and find bugs so we can fix them. We also have some tech debt we'd like to work through in 3.5. For 3.6, multi-cluster workflows is still the top demand in the open source community. We have a POC done by Alex Collins, and we're still trying to find real use cases from the community and get community contributions on the multi-cluster side. I think that's it from my side. Thank you.
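Here's the artifact GC sketch mentioned above: a minimal Workflow asking the controller to delete its artifacts when the Workflow is deleted, assuming the configured artifact repository supports GC:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-gc-demo-
spec:
  entrypoint: main
  artifactGC:
    # the other available strategy is OnWorkflowCompletion
    strategy: OnWorkflowDeletion
  templates:
    - name: main
      container:
        image: alpine:3.16
        command: [sh, -c, "echo hello > /tmp/out.txt"]
      outputs:
        artifacts:
          - name: result
            path: /tmp/out.txt
```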
Good morning, everybody. Welcome to the second ArgoCon. My name is Derek Wang, I'm a software engineer working at Intuit, and I'm also a lead engineer of the Argo Events open source project. Can you hear me well? Yeah, okay. Today I'm going to give you a maintainers' update so that you have an idea of what we've been working on. I'd like to start with a project review of what we've achieved in the past nine months since the last ArgoCon.

We had two major releases, 1.6 and 1.7, which contained 42 new features and 70 bug fixes, and there were 1,107 commits in total. All of this was done by 56 contributors. Our GitHub stars grew from 1,200 to 1,600, which is about a 32% increase, and we're now about 40 stars behind Argo Rollouts, so please go star us if you haven't.

About features. One of the most important features in the past releases is a new event bus implementation based on JetStream. JetStream is a product from another CNCF project, NATS. The reason we added the JetStream event bus is that our original event bus implementation is based on another NATS product, named NATS Streaming, which is going to reach end of life in 2023, so we built JetStream support as a replacement. Another reason is that JetStream is not only a persistent messaging system; it also provides some other fancy features, such as a key-value store. We leverage that key-value store to eliminate some of the restrictions we previously had with the NATS Streaming event bus, which makes Argo Events even more powerful. I'll show a small EventBus sketch below.

Another nice feature is event source filtering, a great supplement to our existing sensor filtering features. With it, you can filter out messages you're not interested in at the event source level, so you reduce the messages coming into the system. We also enhanced the sensor filtering features, which now support more logical operators: you can use any combination of AND and OR operators to do your filtering, and you can now also use a Lua script to write your filtering logic.

Our supported event source family has also grown a lot. We now support Bitbucket Cloud and Bitbucket Server event sources; together with the existing GitHub and GitLab event sources, you can use Argo Events to do GitOps with any of the popular source management tools. We also support a Redis Streams event source. The event transformation feature is another amazing one: it gives you a chance to transform an event you receive before it's passed as a parameter to your downstream trigger, like a workflow or a Lambda. Trigger condition reset is another nice feature, widely used in batch processing platforms: it gives you a chance to reset unfinished conditions based on a cron or timing setup.

In terms of the controller deployments, we originally had three different deployments in the installation: an event bus controller, a sensor controller, and an event source controller. We've now combined all three into one controller-manager deployment, and we also enabled HA for it, which simplifies the installation process.

Security enhancements: we did another round of security auditing, conducted by Ada Logics, supported by CNCF, and facilitated by OSTIF. Several vulnerabilities were discovered, and the good thing is we fixed everything. We also published two advisories, so if you haven't upgraded to the latest Argo Events, please do so; it makes Argo Events more secure.
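Here's the EventBus sketch I promised: switching the event bus to JetStream is a one-object change, though the version string and replica count here are just illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
  namespace: argo-events
spec:
  jetstream:
    version: 2.8.3     # NATS server version; pick one from the supported list
    replicas: 3        # assumption: a 3-node JetStream cluster for HA
```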
Going forward, let's see what the items on our roadmap are. The first thing we want to do in Argo Events is enhance multi-event-source and multi-trigger support. Today in Argo Events you can put multiple event configurations in one EventSource object; the EventSource is the starting object that defines what kind of events you want to watch from the outside. You can put a Kafka event config and, for example, an SQS event config in one EventSource object, so we only start one pod, or one set of pods, to watch the external events, which reduces your resource usage (there's a sketch of such an object below). Similarly, you can put multiple triggers in one Sensor object and also reduce the resources you're going to use. But there's one issue with this: every time you have a spec change on your EventSource or Sensor, there's going to be a pod restart to load the latest specification. If you only have one event config in the EventSource object, you expect the restart, and that's okay. But if you have multiple event configs in one custom resource and, for example, I make a change to the Kafka config, it's unfair to the SQS event source owner, who will say: hey, why did you restart my service? So we're trying to figure out a way to do hot reloading, to reduce the number of pod restarts. That's one thing.

The other thing is the error and failure reporting mechanism. Say you have only one event configured in one EventSource object, let's say Kafka, and you have a wrong broker URL in the config. It's expected that the Kafka event-watching service will not run correctly, and you'll end up with either a pod crash or an error status in the EventSource object, so as an owner you can quickly see there's an issue and fix it. But with multiple event sources configured in one object, let's again use SQS and Kafka: if you have a wrong Kafka broker URL, our current strategy is to just error out the Kafka event-watching service and print some logs, while the SQS event-watching service keeps running fine. So the pod stays in running status, and besides the logs you see in the pod, there's nothing else to let you know. We're trying to figure out a better error reporting mechanism, so that as an owner you can quickly detect those errors. We're also thinking about supporting loading the specification from other stores, like a ConfigMap or even a database; today we do everything in the custom resource. That's another thing we want to do.

The next item: we're trying to figure out whether it's a good idea to support dynamic event sources. How to understand that? Let me give you an example use case. Suppose you have several upstream services publishing events to a single Kafka topic, and to identify which upstream it is, we use a type in the event: type A, B, or C. If you want to watch those events, you set up multiple EventSource objects with the event source filtering feature: type equals A creates event source A, event source B watches type equals B, and so on. You end up with many EventSource objects, each opening connections to the Kafka topic, and if a new upstream is added with type equals C, you also need to add a new EventSource with the filter type equals C. That's something you don't want. So why not have one single event source watching the single Kafka topic and dynamically publishing events to different categories based on specified conditions, for example based on the type? That's the dynamic event source feature I'm thinking of.

The last item on the list is tracing, OpenTelemetry support. That's quite straightforward: as an event-driven system, we want to have some traceability.
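And here's the sketch referenced above: one EventSource object carrying both a Kafka and an SQS event configuration, so a single pod (or set of pods) watches both. The field values are placeholders, and I may be off on optional fields, so check the examples in the repo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: multi-source-demo
  namespace: argo-events
spec:
  kafka:
    orders:                           # hypothetical Kafka event config
      url: kafka.example.com:9092
      topic: orders
      partition: "0"
      jsonBody: true
  sqs:
    billing:                          # hypothetical SQS event config
      region: us-east-1
      queue: billing-events
      waitTimeSeconds: 20
```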
There's one more item I didn't list on the roadmap: I'm thinking about whether it's a good idea to support more event bus implementations, like Kafka. I know a lot of companies have managed Kafka services, and they want to use their existing Kafka service as the event bus, so they don't need to run extra resources in the cluster, to reduce resource usage or get better support, things like that. I think that's everything I have. I'm going to hand it over to...

I think, for the first time in tech conference history, we're actually ahead of time. We're supposed to have a break at 10:30 to 10:45 before we start the workshops, so I'd say we take a break now and meet at 10:30 instead of 10:45. Is everyone okay with that? You get a little bit longer break, but then we'll get a bit more time for the workshops. And an update to what I said earlier this morning: we're actually going to do Workflows and Events on this side, and CD and Rollouts over there. So if you're doing CD and Rollouts, that side; Workflows and Events, this side. Everyone cool with that? There should be coffee, and I think there's a bit of fruit outside, so feel free to refresh yourselves, come back in here at 10:30, and we'll kick off the workshops. We'll be right back.