and I've got the agenda. Oh, then I will pass to you, Liz. Excellent. Well done. I have no idea who's controlling the slides. It's Taylor. Oh, great. Okay, Taylor. All right, so Amy, can you just keep track of who from the TOC is here? Absolutely. Great. I guess you can check that from the participants. Yep, that's exactly what I'm doing. All right. And today it is all about Weave Flux. So who do we have from Weave Flux who's presenting? Alfonso, is it you? No, it's me, Michael. Hi, Michael. Hello, let me start. Take it away. Take it away already. Okay. All right. Cool. Shall I share the slides from my laptop? That might make it easier. I can keep sharing them if it's okay. You can just direct me to move to the next slide. Oh, okay. Cool. All right. Before we get started, let's just clarify: we're talking about Flux coming in as a sandbox project. Yes. Great. Okay, so natural question: what is Flux? Flux is a daemon that runs as an operator in Kubernetes. It takes Kubernetes manifests that are in Git and applies them to a cluster. So it reconciles the contents of a Git repo with the Kubernetes database. And really, the effect of that is to extend the reach of your system of record to Git, which means that you can operate on it using Git operations, which is pretty useful and seems to have chimed with people, because it's a core mechanism of what has come to be known as GitOps. And here is a diagram of the flow of stuff going from the repo and being applied by Flux to Kubernetes. So next, yes, thank you. So yes, what I just described is that it extends the reach of the system of record to be Git and Git tooling. So as was pithily described by Stefan, you can replace kubectl apply with git push. And you can do this also to bootstrap clusters. So there are systems we know of, including our own, that use Flux and an operator that deals with the cluster API to drive the addition and removal of nodes from a cluster, for instance.
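The "replace kubectl apply with git push" flow described above can be sketched locally. This is a minimal illustration, not Flux itself: the bare repo stands in for your hosted config repo, and the manifest, paths, and image tag are all made up for the example; a fluxd watching the repo would do the apply step.

```shell
# Sketch of the GitOps loop: desired state lives in Git, and a push
# replaces a kubectl apply. All names here are illustrative.
set -e
workdir=$(mktemp -d)
git init --bare "$workdir/config-repo.git"          # stands in for the hosted Git repo
git clone "$workdir/config-repo.git" "$workdir/clone"
cd "$workdir/clone"
mkdir -p manifests
cat > manifests/podinfo-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  replicas: 2
  selector:
    matchLabels: {app: podinfo}
  template:
    metadata:
      labels: {app: podinfo}
    spec:
      containers:
      - name: podinfo
        image: stefanprodan/podinfo:3.1.0
EOF
git add manifests
git -c user.email=dev@example.com -c user.name=dev commit -m "Deploy podinfo 3.1.0"
# This push is the whole "deploy" action; a fluxd watching this repo
# would reconcile the cluster against it on its next sync.
git push origin HEAD
```

From here, nobody runs kubectl by hand; the repo is the system of record.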
A secondary thing that Flux does, almost as a consequence, is that it observes the container images that you're running in your cluster, goes and fetches metadata about them, and then can automate the upgrade of images as you instruct it to. It does that by committing to Git on your behalf and then synchronizing the cluster with Git again. Another direction in which we've pushed Flux is to make Helm releases also declarative and run through Git and Git tooling. So as before, instead of replacing kubectl apply, we've effectively replaced helm upgrade with git push. And in fact, that capability has really driven a lot of people to the community, because I think that was something that's complementary to the other stuff that comes with Helm, which I'll cover in a wee bit. Next slide, please. So we think it's really important to set boundaries on the scope of what stuff does, and Flux is really aiming to be at an angle approaching 90 degrees to other things that are in the Kubernetes ecosystem. We want to do one or maybe two jobs well and not try to get on other people's lawn. So we're not trying to replace continuous integration; we're trying to work with it. We're not trying to replace other operators or service meshes or things like that; we're trying to work with them. And the examples there, another example is secrets management. The idea being that if you're using Sealed Secrets, for instance, Flux will create the resources for you, but then it stops there and lets Sealed Secrets or whatever other operator take over. We're not trying to be a packaging or templating solution. What we're trying to do, as with the Helm support, is work with those solutions to extend their reach to Git and make them declarative when necessary. And similarly Kustomize, which is on its way to becoming part of sort of core Kubernetes. And we're also not trying to do sophisticated rollouts or really sophisticated continuous deployment.
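The declarative Helm flow described here works through a HelmRelease custom resource that the Helm operator reconciles. A hedged sketch follows: the chart repository URL, chart name, version, and values are illustrative, not prescriptive; the flux.weave.works/v1beta1 HelmRelease kind is the Flux Helm operator's CRD of this era.

```shell
# Instead of running "helm upgrade", you commit a HelmRelease resource to
# the repo Flux watches; the Helm operator installs/upgrades the release.
# Chart repo, name, version and values below are made up for illustration.
cat > helmrelease.yaml <<'EOF'
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: redis
  namespace: default
spec:
  releaseName: redis
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: redis
    version: 9.0.1
  values:
    usePassword: false
EOF
# Committing and pushing this file is the "helm upgrade with git push" step.
```

Changing the version or values and pushing again is how an upgrade happens; rolling back is a git revert.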
That is better done at least in concert with other tooling like Flagger, Istio and so on. Next please. So these are all users that have added themselves, or at least are in our readme, as production users. And right in the middle there, we use it ourselves, of course. Yes, I'm not for a scan, I think. Thank you. So Kyle, if you don't mind de-cloaking and just saying one or two minutes about how Under Armour uses Flux, that would be cool. Sure, yeah. So I guess just to give a little background, when we started our Kubernetes journey at Under Armour, what we started with was putting all of our manifests into a single repository so that it was easy for everyone at the company coming in to learn what Kubernetes specs look like, how other people are using them, just kind of learn through example. And one of the things that we realized really early on was relying on developers to do kubectl apply themselves can kind of get you into some sticky situations, where a PR may hang for a long time but someone's actually applied it to the cluster, or you just don't know the state of what's in the cluster as compared to what's in Git. And we happened to stumble upon Flux, and we really liked its trimmed-down mentality of just do this one thing really, really well. So we started to apply that to our cluster and we saw the benefits almost immediately. We were never questioning what was in the cluster; we could just look at the master branch in the repo that Flux is looking at, and that's the source of truth, and Flux is making sure that it is in sync. And we weren't having problems anymore with, like, oh, did this actually get applied, or I made this change and it blew away someone else's manually applied change. It was just feeding everyone back into Git, and that was the source of truth, and it was so much more clear.
And now we're starting to take this one step further, where we want to have multiple Kubernetes clusters that represent a region, and Flux has been super clutch in this situation. I'm an infrastructure engineer at Under Armour, and when we spin up new clusters we'd have to tell other people, hey, there's this cluster here, you can put your stuff on it. Now with Flux, we don't have to do that anymore. We can just do that work, and unbeknownst to the developer, their stuff is now running, because Flux is making sure that it's applied to that cluster. So we have three clusters that represent the US region. If things are going wrong with one, we can shift traffic to one of the other clusters, or if we take a cluster out we can, you know, upgrade it, bring it back up, and we know that all the apps will get deployed onto it. So it's been a super clutch tool for us in our Kubernetes journey. Brilliant, thanks very much, Kyle. All right, let's move on, wonderful. Okay, so other things you could do: this is representative, there are more things, I just wanted to cover kind of two flavors of them. There's Jenkins X, which sort of gives you a toolkit for setting up pipelines for continuous delivery. There's definitely overlap here; you know, I don't think it's out of the question that Flux and Jenkins X could sort of work together, but there's significant overlap there, so we could consider that an alternative. And Argo CD, which is also based on similar ideas to Flux and works with some other things from the Argo project. And like I said, it's close in spirit to Flux, it's just a bit newer. It has some newer ideas in some ways. Sorry, yep, go to the next one. Thank you. There are some other projects which may be less in the way of alternatives, but that people might think of when they're looking at this stuff. So Spinnaker is definitely one of them.
And I think the main way Flux would compare with Spinnaker is that Flux is really trying to do one thing quite mechanically simply. Spinnaker is, from what we are told anyway, quite a complex platform. It is a platform, whereas Flux is really a tool, and it's not trying to be an all-encompassing platform. So yeah, it might be a sort of lightweight versus heavyweight comparison with Spinnaker. Helm also would come up, I think, in people's minds. I would say Flux is complementary to Helm, and in fact the Helm operator makes them work together and it is pretty popular. I think probably the majority of people using Flux are also using the Helm operator. Kustomize support is sort of newer. In fact, it's not even in a full release yet, but I would say similar things for Kustomize. And with those, we're not really trying to replace them; we're trying to be complementary, to sort of add the good bits of Flux to those other tools. Another thing that might come up is, why don't I just write my own, or can't I just drive this from continuous integration? You can, of course, and that works pretty well. I think Flux is maybe more in the spirit of Kubernetes, of having a system of record and then a reconciliation process, whereas continuous integration tends to be more event-driven. And again, we are not trying to be a sort of general pipeline thing; we're just trying to do the one job well. You can use Flux composed with CI pipelines by driving image releases from CI pipelines. Next please. Thank you. I'm not going to dwell on this slide, people can go back and refer to it if they are interested, except to say that for quite a long time Flux was really a tool driven by our own requirements, although it was always open source, and the one thing that really brought a lot of interest and built more of a community around it was the Helm support.
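The "compose Flux with CI pipelines" point above can be sketched as a hypothetical CI job. This is illustrative only: the job name, registry, workload name, and CI variable are all assumptions; the split of responsibilities is the real point, with CI building and pushing the image and Flux rolling it out from Git.

```shell
# Hypothetical CI deploy job (GitLab-style YAML, written here for
# illustration). CI builds and pushes the image; the rollout is handed
# off to Flux, either via its image automation or an explicit fluxctl
# release through the Flux API.
cat > .ci-deploy.yaml <<'EOF'
deploy:
  stage: deploy
  script:
    - docker build -t registry.example.com/podinfo:$CI_COMMIT_SHA .
    - docker push registry.example.com/podinfo:$CI_COMMIT_SHA
    # Hand-off point: Flux's automation can spot the new tag on its own,
    # or the pipeline can trigger the release explicitly:
    - fluxctl release --workload=default:deployment/podinfo --update-image=registry.example.com/podinfo:$CI_COMMIT_SHA
EOF
```

Either way the resulting manifest change lands in Git, so the repo stays the system of record rather than the pipeline.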
Well, actually two things: Helm support and the idea of GitOps, which was maybe a year or a year and a half after we started developing Flux. Next please. Ah, right, community, yes. I'm going to invite Daniel to talk a bit about community. Sure. So yeah, as Michael said, and you can also see it from the graph, things changed dramatically since the initial Helm support was released. So we've seen more and more people come in and contribute to the code, but people also wrote documentation, they wrote blog entries. We had a number of different integrations built. So one of the first ones was integrating Flux with Slack, and then everybody tried to figure out, like, how can we do things the GitOps way? So Helm releases, Istio canary deployments, OpenFaaS. People really wanted to figure this out, and people have given talks and workshops independently of us. And that's really nice to see, and this is also why over time we had to start building more structure, like having monthly calls, having a community mailing list and having a bit more process. But in general, all our trends are going up. We're also seeing more people contribute to the code. So we have already around 100 contributors. And while a lot of them are still drive-by contributions, people who want to make things work for their own deployments or fix issues they are seeing, we also see a lot of people come back and stay with the project. Cool, thanks, Daniel. Quickly, it's more of a status thing than a roadmap perhaps. So some stuff we've added recently, two themes in that. One is being a better sort of Git tool by supporting signatures, GPG signatures. And then the other theme is being a more sort of widely applicable tool. So up to now, Flux has supported essentially Kubernetes manifests as YAML files. And we have added support for driving things like Kustomize from Flux recently.
And the theme continues with stuff that we're hoping to finish soon: supporting Git repos that we only have read access to. And we're also hoping to have a 1.0 release of the Helm operator. And stuff that's coming up: supporting more than one Git repo, and Helm v3, which is no longer looming on the horizon but is a fact in the world, so we will have to look seriously at that, of course. So why CNCF? We are getting more contributions from outside, and people are sticking around and making more than one contribution and, in fact, hanging out in the Slack channel and helping other users and giving talks and so on. CNCF is a good way, I think, to have a kind of neutral venue where that stuff can happen. Some people might be put off by the fact that the project belongs to Weaveworks specifically, and given that it's open source, we don't really want to put those people off, but rather have them feel like there was not that obstacle, in a way. There's also quite a lot of alignment: if we think of Flux as sort of adding the Git superpower to Kubernetes, it can also add that to Helm and Kustomize and other things that are in the family. And another reason that is particular to Weaveworks is that it worked well for Cortex, which I think has gone from strength to strength since being adopted into the CNCF. Next please. So here are some of the ways it aligns. I've actually covered these, I think, largely, but just to reiterate. Flux is, in theory, abstract and not tied to Kubernetes, but the implementation is definitely tied to Kubernetes. It's strongly influenced by how Kubernetes works as well, so that's another tie. And it can be used not only to run applications but to bootstrap and manipulate Kubernetes clusters, so it operates on the infrastructural level as well. We have Helm support that's first class, if not in maturity (it's not a 1.0 release just yet, although it's widely used), then in being supported with the Helm operator.
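The manifest-generation support mentioned above (driving Kustomize from Flux) is configured through a .flux.yaml file in the watched repo. A minimal sketch, with the overlay path as an assumption; the version/commandUpdated/generators layout is the Flux daemon's generator config format of this era.

```shell
# A .flux.yaml at the base of the Git path tells fluxd to run a generator
# command instead of reading plain YAML manifests. The overlay directory
# name below is illustrative.
cat > .flux.yaml <<'EOF'
version: 1
commandUpdated:
  generators:
    - command: kustomize build overlays/production
EOF
# fluxd runs the generator on each sync and applies whatever YAML it emits,
# so the same mechanism can drive tools other than Kustomize.
```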
It's its own thing. And Kustomize is also part of the Kubernetes family. As I mentioned, we now support generating manifests in the Flux daemon, which would also work with other things, but it was designed largely to support Kustomize first and foremost. Okay, and I think, yes, we now move on to questions and offers of sponsorship. Thank you very much everyone. Thank you, that was great. All right, questions. So I'll start off. Can you go into some detail on the relation between Flux and Weave Cloud? Yes, that's a good question. So Flux has a plugin interface where it can connect to an upstream service. And the upstream service will relay commands to the Flux API. It's the same API that you can access directly via the command line tool, fluxctl. So all the service is really doing is proxying there. It will also relay events, such as when it makes commits or syncs commits, to the upstream service. You can run fluxd, and a lot of people do, without Weave Cloud at all. There's also an open source implementation of the upstream service called fluxcloud that Justin Barrick made; it's on GitHub. Is that enough detail? Plus we use Flux to deploy Flux itself. Yeah. And I get that it can be used without Weave Cloud. I think one of the things, just through the lens of sort of the CNCF, is how tight is the connection there? So, looking through the repo, one of the things that I would be looking for is to have Weave Cloud be one option of many for these upstream types of services, which means removing some of the sort of hard coding of Weave Cloud being the default thing. So, like, when you give it a token for authenticating to the service, I would expect that you would also have to say, I'm authenticating to Weave Cloud, whereas right now there's this default assumption that of course you want to connect to Weave Cloud.
That's natural when it's a project that's sponsored by a company, but as it moves into the community, I would expect that we would actually be making that relationship more explicit, both in code and in documentation as such. Yeah, fair enough. So I think the default is actually not to connect to anything. But yeah, there are probably hard-coded strings in there somewhere. Well, yeah, I'm looking at, like, the Helm chart: you pass in a token and it just assumes that it's Weave Cloud, right? Yeah, stuff like that, you know. So there are no ties in terms of protocol, if you like. Right, yeah, it's one of those things where the defaults and documentation all point to Weave, and I think, just to engender a real sort of active community, really opening the door for there to be multiple implementations on the back end, and encouraging that, I think seems like a good thing. Yeah, sure, it makes sense. No objections here. I mean, that sort of stuff we want to tidy up. Sorry, not exactly that sort of stuff; things like the API, the format of the events it sends upstream, it's overdue for kind of being rationalized slash, you know, tidied up a bit. That could happen at the same time. Yeah, and I did see that there was an independent implementation of that stuff, so that's actually really good. But it's not well documented or something. Well, it's not documented outside of the code itself, if you like. Yeah, and I think it would be awesome to actually see, like, hey, if you want to actually send this to someplace else, here's how, it's documented. Oh, you want to send events to Splunk or what have you, right? Yeah, right, so exactly. If we want to support a more generic kind of sending of events, just, you know, webhooks style, then, well, it could work now, but it would be better if it had a bit of design input and so on. I'd also like to chime in on that.
At Under Armour, we do use the fluxcloud open source tool to send the messages to Slack, and it is really nice. The integration was super easy. We set it up as a sidecar alongside the Flux pod, or inside the Flux pod, and it sends events to it. I just had to point Flux at the fluxcloud pod or container and then hook it up to our Slack channel, and we have a Slack channel that all the Flux messages go into. Or just to be clear, so fluxcloud is the independent implementation of the endpoint that it talks to, which is different from Weave Cloud. Right, which is the commercial thing that Weave does, but fluxcloud is open source. Okay, I'm just talking at home. That's right, and it was written by Justin Barrick, who's now at Nisa Sir. I have a question on this, Jeff Brewer, full disclosure, I'm at Intuit, so we did the Argo CD project, and we've been looking at some possibilities of merging Argo CD with Flux a little bit. But I was kind of wondering, there seems to be a philosophical difference in having each individual one of your clusters running the operator, looking back at a Git repo, and you kind of wonder, like, well, if you have a pre-production cluster and a production cluster, how exactly does that work? And the philosophical difference with having a cluster that's kind of dedicated to continuous delivery over multiple clusters, which is kind of the Argo CD approach. And I'm wondering, and we'll have more discussions on this offline, but how do you guys think about that, essentially a managed or central CD service versus having the operator run on each individual cluster? Does that make sense? Yeah, so naturally I have kind of thought about things like that. I think the answer, with respect to how Argo CD works, is that Argo CD kind of spins up workers for each Git repo, and with Flux, each Flux is more like a different process, a different one of those workers of Argo, if you get what I'm saying.
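The sidecar setup described here amounts to pointing fluxd's upstream connection at the fluxcloud container instead of Weave Cloud. A hedged sketch of the relevant container arguments, with the repo URL and the fluxcloud address as assumptions; --git-url, --git-branch, --git-path and --connect are fluxd's flags of this era.

```shell
# fluxd container args: with no --connect flag fluxd runs with no upstream
# service at all; pointing --connect at a self-hosted fluxcloud sidecar
# relays commit/sync events there (e.g. on to Slack) instead of Weave Cloud.
# The repo URL and sidecar address below are illustrative.
cat > fluxd-container-args.yaml <<'EOF'
args:
  - --git-url=git@github.com:example/config-repo
  - --git-branch=master
  - --git-path=manifests
  - --connect=ws://fluxcloud
EOF
```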
So I don't know if I would describe them as philosophically conflicting, maybe more like talking from different places. And just to add a bit more, coming from the other direction, things like multi-cluster: I think it's sort of better, in a way, to stick to one thing and let that be an enhancement for other people to put together, perhaps. I see. So, can you guys hear me okay? It was cutting out a little bit. Yeah, I can hear you. Okay, so you're saying, like, the thing that stitches it together, I guess that would be what Weaveworks provides; the philosophy would be to have each cluster keeping its own state, have the operators running in each cluster, but then have some central tool, for more of the user experience, or the developer, the DevOps experience, to tie the multiple clusters together, and that way you can see what the state of the CD is across all of those. That's kind of the idea. Yeah, I think so. I think it crosses over into sort of user experience, user interface; there are lots of ways of putting those things together, and different ones make different sense. So it's up to vendors or individual companies or whatever to do that, and to use Flux as a piece in that rather than the whole thing. That's not specific to Flux, either bit of software. Cool, thanks. Hello, this is Lee Zhang from Alibaba, and we are evaluating GitOps solutions and we really like the Flux approach. So one of the questions is, how is Flux dealing with the rollout strategy of your applications? Are you actually building interfaces to different kinds of continuous delivery systems, or does Flux have its own strategy, or is it just relying on the Kubernetes rollout strategy? So Flux mostly relies on Kubernetes to do the rollout work. This makes some things tricky.
So it's possible to detect unqualified success of a rollout, but it's often difficult to diagnose failures, or whether something is just taking a while or is never going to succeed, unless people have taken pains to configure it that way. So, yeah, the superficial answer, if you like, is that we leave rollouts to Kubernetes and we try to do our best to observe what's happening and report that back. Okay, you might want to check out Argo Rollouts. There's an Argo Rollouts project that has canary and blue-green rollout strategies, and that's where it might get interesting from a combination standpoint, right? We can have a project that allows you to implement different rollout strategies, and then that's separate from Flux, or Argo CD, which is really about matching the Git piece, right? Matching the Git repo with the state of the cluster. But look at Argo Rollouts for the blue-green or the canary or a different rollout strategy. And I think they could work together, right? Yeah, yeah, Argo CD is also what we are looking at as well, and we are really hoping to see if we can combine these two things together. So, for example, you can actually integrate Flux with Argo CD, or with something like Spinnaker, and so we all have different kinds of rollout systems, and of course it can be integrated with Kubernetes itself. I don't know if that's the direction of the community. Are we thinking about an interface between Flux and different CD systems? Sorry, what was the question? I'm asking, do we have any roadmap or plan to have an interface between Flux and different kinds of CD systems, so we can deliver applications to different kinds of environments? It depends on what the CD system is, in some respects. Continuous delivery system. Like, for example, we are hoping to see if we can use Flux to integrate with Argo CD, but maybe it can also work with Spinnaker.
So, is that out-of-the-box integration ready in the community, or is there a plan to do something like an interface between these kinds of things? I don't know whether Argo Rollouts works with Flux; if it does, and it's very plausible, that would be amazing. There's also Flagger, which is a Weaveworks project, which has a similar aim, I think, to Argo Rollouts, in doing sort of higher-order deployments like A/B, blue-green and canary deployments, that sort of thing, using service meshes, things like Istio and so on. I don't know about Spinnaker, sorry. Okay, thanks very much. So I saw that one of the things that you want to do is support read-only Git repos, where you're just pulling from them. I think that actually will probably aid integration. I mean, for those not too familiar with the project, it actually has two modes, right? It writes status back into the Git project in terms of which cluster is syncing. And, you just mentioned it, it also has a workflow where, if you push a new image, it then goes through and actually rewrites your reference to the new image, essentially upgrades the image, and then that gets triggered to deploy. So there's sort of a write-Git and a read-Git flow that are built into Flux; is that correct, my understanding? Yeah, so... Jeff, could you mute that, man, when you're... Thanks. Yeah, so at present, there are two reasons that Flux needs to write to Git. One is the image update automation, which commits on your behalf. If you don't care about that, then there is the other, more historical reason, which is that it keeps a high watermark using a Git tag. So there is a PR open, sort of halfway done, for moving that high watermark if you don't care about the automation. And that way you can have a purely one-way flow from Git to Kubernetes, which I know a lot of people would prefer.
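The "high watermark using a Git tag" mentioned above can be made concrete with a local sketch. This simulates what fluxd does rather than running it: in Flux v1 the sync marker is a tag (flux-sync by default) that the daemon force-moves to the last commit it applied, which is why plain sync still needs write access to the repo.

```shell
# Local simulation of Flux's sync high watermark. The repo and commit are
# made up; the "git tag -f" step is what fluxd itself performs after a
# successful sync.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "add manifests"
# fluxd moves the sync tag to the commit it just applied:
git tag -f flux-sync HEAD
# Anyone can then inspect how far the cluster has synced:
git rev-parse flux-sync
```

With read-only support, that marker would live outside the repo, giving the purely one-way Git-to-Kubernetes flow described.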
I think one observation, really, that's not so much Flux-specific but I think is an interesting aspect of GitOps, is the audit trail that you kind of get automatically by having all these cluster operations recorded in Git commits, which is really nice from a security perspective. So that's just something I wanted to throw out there. I actually think that there's an opportunity here to standardize on sort of the audit sink format between different sets of tools, so that when we have a bunch of things that are acting on our behalf, being able to throw those into a common audit log sink for tracking would be interesting. But that's a different project. Okay, any other questions or observations or worries about Flux? Okay, I think that was really great. Does anybody on the TOC want to sponsor at this point, or do you want to take that offline? Michelle's saying she'll sponsor. I'm happy to talk about it more offline. Yeah. Great, okay, cool. Thank you. All right, great presentation. Thank you very much. Is that the end of the slides, Taylor? Is there anything else? That should be it? Yeah, that's it. Great, okay. Well, everybody gets a quarter of an hour back in their lives. So thank you very much, everybody. Good to see you all. Thank you everyone. Thanks to everyone.