Hello, everyone. Welcome to episode nine, I think, of ChatLoopBackOff. This is a program that dives deep into the realms of the cloud native landscape and messes around with our projects, our technologies, and, you know, we're just here to learn. If we've never met before, pleasure to meet you. Thank you for joining me today. I am your host, Jeffrey Sica, but many people in the community already know me as Jeefy. I'm thrilled to have you here and join us for today's episode, where we're going to focus on OpenFeature, which is a project in the Cloud Native Computing Foundation. You know, the interesting thing is, I don't know offhand whether they are incubating or not. I feel like they are. Wait, they joined incubating. Yay, I remember things. They joined at the end of last year; that's why it's fresh in my mind. If you've never tuned into this live stream, welcome. Hopefully my little tangent kind of tells you what you're in for. This live stream is a sibling to the similarly named ClashLoopBackOff, which is an event that has been a feature at the last few KubeCons. In that one, I pit two community members against each other to accomplish a technical challenge of my choosing, and it's usually fun and silly random antics. It's meant to be laid back for the audience, just general fun where we all get to watch people tackle something and maybe learn. But this stream is a bit more like a choose-your-own-adventure. The idea behind this one is I will pick a topic or a project and then learn it live. I have near no background or information. I may have some preconceived notions, and then we get to either validate them or prove myself wrong. In doing that, really, we just hope that you walk away having learned something new, because I am definitely going to be learning something new, and we can do it together. During the live stream, please feel free to drop questions in chat. I'll get to as many as possible. Like I said, we're here to learn together.
So if you know the answer to something and I'm stuck on it, feel free to speak up and let me know or have me try something. Hopefully this is going to be interactive. All of that said, this is an official CNCF live stream. It is subject to the CNCF code of conduct. Please don't add anything to the chat or ask questions that would violate the code of conduct. Please respect myself, other people that are watching, and people that may watch on YouTube later on. I'm leading into that: this video will be put on our YouTube channel afterwards. So folks, if you couldn't make the live stream, you can still try and learn along with us, just asynchronously. Before we get going, a couple of quick pieces of information that are actually very similar to what Jeremy mentioned last week. Kubernetes 1.30 is here. It is live. It is Uwubernetes, which is meant to honor all of our fellow friends that keep the internet going of the, what is it, furry variety. It looks like a stellar release. Kat did an amazing job, and I love all of the hype and comments around it and people going, excuse me? But yes, that's true: Uwubernetes. Also, next up, just another CFP reminder for KubeCon NA. If you are interested in speaking in Salt Lake City in November, please feel free to submit a CFP. What is it? It will close Sunday, June 9th. So you've got some time, but please take it from me: you do not want to wait until the last minute. It feels like every time internally we're looking at the CFP numbers, it's a steady, steady, steady state, and then in the last 24 hours it's like 95% of the CFPs. If you get it in early, it is so much easier mentally on yourself. All right, with all of that out of the way, let's get right into OpenFeature. And let me share my screen, get nice and cozy. Make sure that VS Code is primed. Share screen. Entire screen, screen one, boom. Thank you, magical wizard in the background.
So everything that I will do, any notes that I take and whatnot, as usual, I will add to the CLBO repo in the CNCF, and commands and code that I run will also be there as well, just so we can, you know, kind of repeat the process of everything. Also, I need to update this with Paris's ClashLoopBackOff info, but that's neither here nor there. I have created a new folder for today's episode. It is April 18th. And let's get going. Okay, new file, readme.md. There we go. Because I am lazy, let's copy this format over from my Lima episode and update it really quick. I know that this is potentially boring to many, but it is going to save me time, and it will make the notes consistent. So let's talk about OpenFeature. Like I said, I have preconceived notions about a bunch of projects, because I've been in the cloud native ecosystem, in the space, and I kind of pay attention to projects; that doesn't necessarily mean I'm an expert in any of them. So what is OpenFeature? Let's talk about that. Oh, that is so cute. Thank you, Copilot. I'm pretty sure OpenFeature is not a way to spin up QEMU images. Copilot definitely has its own opinions. Let's do a glance through their docs, their getting started, and then let's, yeah, let's try and find some different, let's say, feature flag examples in the wild. Okay, so what is OpenFeature? Realistically, let me make this a little bit bigger. Actually, there we go. Copilot is doing the right thing for once. I'm going to close my eyes, actually, and I'm not going to try and say what it said. So what is OpenFeature? Let's actually talk about what a feature flag is. This kind of only really applies, in my opinion, to medium to larger size applications and microservices. When you get to a certain point, when you are developing code or developing applications, you may or may not want to be able to, on the fly, toggle a certain feature on or off.
A good example, let's use Twitter for a good example. Early on, they were shipping new features: things like DMs, things like being able to like or dislike something. Again, these were new features way back in the day. And if one of these different types of features caused the entire system to become unstable, they needed to remove it ASAP to try and stabilize the system. Well, when you are typically thinking about that in the application lifecycle, that's maybe reverting a commit, or trying to do a quick fix to just remove, say, the front end aspect of that. And then you have to ship out code. And then, depending on your environment, shipping out code is not an immediate thing that happens. It can maybe have to go through CI, have to go through QA, and suddenly you're two hours into something where you're just trying to revert code that had shipped. Feature flags are a way, geez, it's almost like the name is a good indicator, a way to define features in your application and then be able to toggle them on or off without having to ship code. How close was I? I was really trying not to look at the screen. "OpenFeature is a feature flagging service that allows you to control the release of features in your application. It allows you to control the release of features in your application by enabling or disabling them at runtime without deploying new code." Interesting. So let's actually see what OpenFeature actually thinks they are. "Standardizing feature flags for everyone." Beautiful. Well, what is a feature flag? Let's see how close I am. "A software development technique allowing teams to enable or disable the change in behavior of certain features or code paths in a product without modifying source code." Well, I think that was pretty good. Hello. Oh, Copilot, you're making me mad. And then what is OpenFeature? "An open specifi-", okay, so this is technically a specification. A spec that provides a vendor-agnostic, community-driven API.
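To make that idea concrete, here's a tiny, hypothetical sketch in Go. This is not OpenFeature code, just the pattern: the flag lives in a store that can be flipped at runtime, and the code path is chosen when the function runs, not when the binary was built. All the names here are made up for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// flagStore is a toy stand-in for a feature flag service:
// flags can be flipped at runtime, no redeploy needed.
type flagStore struct {
	mu    sync.RWMutex
	flags map[string]bool
}

// Set flips a flag's value at runtime.
func (s *flagStore) Set(key string, on bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.flags[key] = on
}

// Enabled returns the flag value, falling back to a
// caller-supplied default when the flag is unknown.
func (s *flagStore) Enabled(key string, def bool) bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	if v, ok := s.flags[key]; ok {
		return v
	}
	return def
}

// render picks a code path at request time, not at build time.
func render(s *flagStore) string {
	if s.Enabled("dms", false) {
		return "DMs enabled"
	}
	return "DMs disabled"
}

func main() {
	store := &flagStore{flags: map[string]bool{}}
	fmt.Println(render(store)) // flag unset: default applies
	store.Set("dms", true)     // "flip the switch" at runtime
	fmt.Println(render(store))
}
```

No revert, no CI run, no redeploy: the second call behaves differently because one value in the store changed.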
Beautiful. Not trying to duplicate this, but I kind of want to follow this pattern right now. So why would you do that? Oh my god, I'm actually going to turn this off for now. So why would you want to do this? Well, going back to the Twitter example: if you were shipping new features, or say you were defining alpha features or beta features, you might want to disable them for the stability of the whole application or system, but still keep them around and maybe only enable them for a certain subset of people. Doing this with feature flags just gives you a lot more flexibility than having to, again, modify code and ship code and then build in these pathways for behaviors. Instead, it's all kind of baked into using whatever you are using to manage, ship, and, you know, query these feature flags. So in a sense, and I'm probably going to upset people, this is just another database or data store in your application where you can define features and then query against them, and also, I'm assuming, query who has access to them. So why OpenFeature? It is a vendor-agnostic way to, I thought I disabled you. We should probably update that. So why standardize? Well, honestly, where we can standardize certain behaviors, it makes the most sense. It's kind of why Kubernetes, OTel, Prometheus, all these different projects are coming up and becoming de facto standards. An analogy that I like to use, and some people may get it, some won't, is if you are a Formula One fan. In Formula One, you know, it's a car race. In the pit, the area where a lot of the engineers are sitting around, they all have radios and they radio to the car. Well, do you think that the Formula One teams actually want to pay attention to or develop or design or manage any sort of radio system? Absolutely not. That's not where they want to compete. They want to compete with their race car and with their drivers. So where we can standardize certain things, we probably should.
You don't want to compete against another company or another project by having a bespoke feature flag system. That makes no sense. You want to focus more on what your actual product or project is, not certain things behind the scenes that no one besides your developers is necessarily going to be focused on. So this makes sense. I love seeing one SDK on the back end. I love seeing that they have focused significantly on supporting as many languages as possible. Love it. Let's look at their docs. Let's see another description. Oh, this is actually an even better description of what a feature flag is, and more specifically, how it's usually run. So I said earlier, you know, feature flags are kind of like a data store that you're querying, and this backs that up to an extent. You would have a service that you are querying from within your application. The feature flag client is simply the SDK that you are tying into your application, and then you are defining and managing feature flags externally in this service. "Flags R Us" OpenFeature provider. I love it. For those that want to mess around with this, this looks like some of the cleanest documentation. Please tell me there is a getting started, because one of the things I always like to do is a getting started. That is usually the tutorials, and tutorials are the best way to mess around with stuff. One of my favorite ways to learn is I'll just find their getting started and kind of go to town, and then from getting started guides you can usually expand things out a little bit. I would prefer not to use JavaScript right now. I think I want to use Go. So let me skim through this really quick. Basic knowledge of Go. Go 1.17 or later: got that. Docker is installed. So we're going to create a new Go project, set up a, okay, what looks like a simple HTTP application that's just going to serve JSON.
Once we are sure that is running, we can then add OpenFeature into our little hello world application. It'll have us test it. Then it looks like we define, yes, so we are going to define our flags in a JSON file. It looks like we spin up that flag server. Let me pop back over to these docs. We spin up our feature flagging service and pass it this JSON file. Don't worry, we're going to go through all of this together. And then we update our main.go to talk to that feature flagging service. This seems pretty straightforward, so let's do it. I am not going to bother. Am I? Have I done that in previous episodes? I created subfolders. Let's do it. Sure. gostart. Let's pop open our terminal. As usual, I will praise Jorge for having the best operating system out there. Hashtag Project Bluefin. Okay, so once we are in gostart. Oh, ChatLoopBackOff, 2024-04-18, gostart. Let me blow this up, and then I can bounce between it better. So, go mod init gostart. I am just going to init some Go mods. Cool, that should have created a go.mod file. And then let's create the main.go. So again, this looks like it is going to default to a message of hello. It's using, I have actually never heard of this open source library. I assume it's like an HTTP mux thing. An HTTP framework written in Golang with a Martini-like API. So it goes fast. I like that. Okay, where were we? Boom. So the main function initializes Gin, and the /hello endpoint should just respond with JSON hello. Okay, main.go. Boom. We need to go mod tidy. Downloads all the things. In my VS Code, because I do quite a bit of Go programming, things like Go tools are automatically installed. So essentially, red is bad, green is good, just so people are aware. Having a nice and happy IDE is usually best. It definitely speeds things up. All right, so theoretically, and I'm not even going to look at the docs for this, if I do a go run main.go, I should be able to hit localhost. I'm hoping it'll also give a port. Maybe it's just 80 by default.
A Gin instance running in debug mode. I am actually liking Gin already. This is turning into a completely different stream. Now I'm wanting to mess with Gin. Okay, so the environment variable PORT is undefined; by default it binds to 8080. So if we do localhost:8080, I'm doing this on purpose: 404 page not found. But if we do /hello, which is one of the paths that we did set up, it responds with JSON and it just, you know, spits out hello. So that's awesome. Pretty basic, but I think we're going to start getting some cool things going. I really like that. So, interrupt, clean. Now let's go back to their docs. I kind of go mod tidied already and then moved on. So now let's look at adding the OpenFeature SDK. For those that aren't too familiar with go mod and go mod tidy: you define imports at the top of a Go file, and if you do go mod tidy in your terminal, it'll look at all the Go files that you have and then try to resolve all the dependencies and download them if you don't have them. So when I copy all of this over, because I'm just going to copy all of it over, really, and apparently delete those two lines, it's going to be very angry because I don't have OpenFeature downloaded and set in the go.mod file. So let's do that, and it can tell that we kind of don't need these two things. Now, just pointing some things out: what did I forget to delete? I just want to make sure I'm not. So it only had us deleting these two, and it had everything else being added. That's what's missing. Am I crazy? Whoops, sorry, bouncing around. Oh, it's probably not just doing all of it. Maybe that should be updated in their docs. So let's pop over here really quick. I see Chris asking: does this only work with Go, or does it work with other programming languages? My friend, have I got news for you. This was something that I covered pretty early on, but they support a lot of things: .NET, Go, Java, Node, PHP, Python. They also have, like, OS-specific clients.
They support quite a wide range of things. So chances are, if you're writing in one of these languages, it's going to be a very clean experience. Theoretically. Don't take my word for it. You need to know I have never done this before. This is kind of the point of the stream. Okay, so where were we? My IDE is very angry that OpenFeature is not a resolved dependency. It can't import it. So yet again, we do a go mod tidy. It snags OpenFeature. No Rust or Zig? You're getting me in my ADD brain right now. OpenFeature. GitHub. Thank you. Do they by chance have an open issue for it? So it looks like there might actually be a Rust implementation. Let me actually get through this getting started, and then let's dive into that more. I kind of need to zone in. All right, so I did my go mod tidy. Everything's green, everything is happy, and we should actually, let's walk through this code now. I didn't actually do that. So, OpenFeature: we're creating a new OpenFeature client called the go-start app. We're initializing Gin as our HTTP server. Hello. We are defining a welcome message, or rather, we're trying to get the boolean value of welcome-message from OpenFeature. If welcome-message is true, then we output the new welcome message, which is "hey, this is OpenFeature enabled." Else we just return hello. Adds a little bit of flair to it. So before running this, I am going to guess that this is going to return "open feature ain't working yet," because the client is trying to connect to an OpenFeature server that does not exist. See how right I am. Computer. There we go. Open feature ain't working yet. Yay. It is predictable. How does this compare to LaunchDarkly or CloudBees feature flags? That's a good question. I have no idea. Like I said, I have not messed with OpenFeature. This is one of the things that I don't have much experience in. If I had to guess, this is just an open source implementation of those. I would also imagine that SDKs. Client-side SDKs.
So it looks like LaunchDarkly and CloudBees have very different client targets, and they do have support for more languages. But I am curious if the OpenFeature docs are just out of date, because looking here. Yeah. Anyways, back to this. Okay, so we have an application. OpenFeature isn't working yet because we have not actually defined any feature flags, and we don't have the flag service running yet. So let's do that. Just at a glance, it looks like the flagd service expects JSON with flags, and then we define a flag name, we define different, I guess, states, we define a default state, and then I'm assuming we either enable or disable the flag itself here. Actually, let's start with that. So let's create flags.flagd.json. Let's copy this. And this is why we need Docker. Okay, so it's going to run an instance of flagd, which is part of OpenFeature itself, and then we pass in, got it, we pass in this file. I'll create a new terminal to run that. Oh my god, am I going to pull a Jeremy, where my instance of Docker isn't working? I'm going to have to message Jeremy after this. So I think it was one or two episodes ago, Jeremy ran into a similar issue where his instance of Docker was busted and he spent like 20 minutes trying to debug it. And that's what we get to do. containerd is dead. I could just use Podman. Okay. So: starting flag sync, starting file path sync, no such file or directory. Oh, that's why. That would be why. We're currently only in CLBO. We need to get to chatloopbackoff-2024-04. So now that we run this: boom. Cool. It mapped the volume incorrectly. It looks like flagd is running, and it is running on port 8016. Whether I should expect that or not. Okay. So there is still more that we have to change in the application. So now we pass it flagd, and then we set the provider. And wait, do we get to set the location for flagd? Like, does it know where to go?
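For reference, the flag definition file we copied looked roughly like this. I'm reproducing it from memory of the getting-started guide, so double-check the exact schema against the flagd docs. Each flag has a state, a set of named variants, and a defaultVariant that picks which variant gets served:

```json
{
  "flags": {
    "welcome-message": {
      "state": "ENABLED",
      "variants": {
        "on": true,
        "off": false
      },
      "defaultVariant": "off"
    }
  }
}
```

Flipping defaultVariant between "off" and "on" is exactly the switch we end up toggling later in the episode.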
Or is it just assuming that it's going to be running on localhost? That is a question I will ask in a second. Okay. On the plus side, in this one, this is the complete main. So we can just kind of pop this out. But here are the changes: we set flagd as the OpenFeature provider, and then we just import that. So let's just lazy copy. Get rid of that. Boom. Is that true? Get the OpenFeature Go SDK. Package over here again. There it goes. Okay, so we were missing more things. Did it say to do that and I just missed it? Yeah. Okay. So the IDE is angry: package openfeature provides global access to the OpenFeature. Follow the IDE's instructions and things will be okay. So it looks like that would have worked, but that was an out-of-date import. Oh, I missed my thing. So theoretically, go run main.go. Wow, I just typed "main.dog." So now, theoretically, we have our feature flag service running. This is set to try and connect to a new provider. I'm assuming if there is no provider, or if there is no host provided to it, it's going to try to connect locally. Theoretically, if it fails, it'll at least log and then die. It might not even get to my drama. Let's give it a shot. It got to my drama. Hello, feature, or, OpenFeature is dead. Oh god. Why? So if we look at the logs: starting feature sync, starting file path. Okay, so that's flagd. What does it say over here? Does it say anything? No. Metrics and probes listening at 8014. Did, hold up. Okay. So it is doing 8013. We do pass that in via Podman. Okay, so that is at least returning something. I'm not worried about networking. Let's see what I missed. Oh, this is again me going a little too fast. So theoretically, this is not actually returning the error. This isn't "open feature is broken." This is "open feature says no." So if welcome-message is true, then yes; if not, no. So let's restart that. Oh, sorry. This is even more fun. Okay. So I killed Podman. I killed flagd. I should have ended that.
So now we need to restart. Sorry. flagd is now running. And now we go over here, and our site is working, and then: open feature says no. And so theoretically, if we say the default variant is yes, or is on, does flagd notice it? Yes, it does. It has resynced. We refresh. It works. So, initial thoughts. Functionally, this is exactly what I am expecting. Functionally, this is good. I am curious, and we're going to go look really quick. Okay. This is modifying a JSON file, and then flagd just serves, like, a boolean yes/no depending on the feature flag. Is there a better interface to define these feature flags? Question one. Is it always going to be a JSON file, or is there some way to have an actual backing data store? Would be my second question. But fundamentally, this is a feature flag: a very, very basic thing. But again, think about Twitter. If somehow the like or retweet feature is causing massive issues, instead of having to ship an entire version of the application with that disabled, you have that as a defined feature that you can flip a switch for, a figurative switch, and I guess a literal switch, and disable those features within Twitter. And then, okay, those are disabled. Now we can try and debug what the problem was without having to make crazy code changes. This is also another way of thinking about it: by defining these types of controls, the software engineers that wrote these features do not need to be contacted if there is a major problem and the features need to be disabled. You can give these types of controls to SREs and the, you know, actual support folks, so that when there is a massive problem, you can disable it and then give engineering plenty of time to work on it without compromising the rest of the application. So, popping back over to the docs, what I'm curious about is: is the JSON file the only way to define it, or is there another backing data store that you can use? I mean, the JSON file is better than nothing.
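The "it resynced and a refresh picked it up" behavior falls out of evaluating the flag on every request rather than once at startup. Here's a minimal, made-up sketch of that shape; in the real app, the atomic bool's role is played by flagd, which re-syncs whenever the JSON file changes.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"sync/atomic"
)

// welcome stands in for the flag's current value; with
// OpenFeature, flagd holds this and the SDK reads it at
// evaluation time.
var welcome atomic.Bool

// hello checks the flag on every request, so a flip shows up
// on the very next refresh: no restart, no redeploy.
func hello(w http.ResponseWriter, r *http.Request) {
	if welcome.Load() {
		fmt.Fprint(w, "Hey, this is OpenFeature enabled!")
		return
	}
	fmt.Fprint(w, "open feature says no")
}

// get exercises the handler in-process for demonstration.
func get(h http.HandlerFunc) string {
	rec := httptest.NewRecorder()
	h(rec, httptest.NewRequest("GET", "/hello", nil))
	b, _ := io.ReadAll(rec.Body)
	return string(b)
}

func main() {
	fmt.Println(get(hello)) // defaultVariant: off
	welcome.Store(true)     // the one-field edit to the flag data
	fmt.Println(get(hello)) // same process, new behavior
}
```

Same process, two different responses; the only thing that changed between the two requests is one value in the "flag store."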
It's just not the experience that I would want. I'm sorry if I'm scrolling through this a lot. I'm looking for very specific things. Ooh. This document contains auxiliary utilities provided by the SDK. So there's an in-memory provider. I'm assuming that's just to do E2E testing. Evaluation feature contains tests. Okay, so they use Gherkin for their E2E testing. We're going deep. I guess a simple answer to this is to look at flagd. What is flagd written in? It looks like Go. What was the name of that file? So, flags.flagd.json. Well, I'm curious. Start, help, management port, exporter. Okay, so side note: Prometheus can scrape it. That's always good to see. Set a sync provider URI to read data from. This can be a file path. Okay. There we go. That's what I was hoping for. I actually should have thought about that, because you set a URI, so you can set a URL, whether it's HTTP or gRPC, that serves the JSON file. Chris asks: does the app require a restart after a flag change? No, it does not, and I can demonstrate that really quick now that I've kind of scratched that itch a little more. So, good example right now: the default variant is off. Let me spin all this back down. And now let me, so I'll spin up my instance of flagd, and then I'll start my application again. Back to my application again: it's going to try to get a boolean value from OpenFeature. If welcome-message is true, it will output "Hello. Welcome to our blah, blah, blah." If false, or yeah, if not true, it should just say "open feature says no." Right now, open feature should say no, because the default flag is set to off. But if I set this to on, and actually, let me show this log here: theoretically, when I save this, flagd is going to notice that the data has changed, and it should log that it's updated. So let's see if I'm right or wrong here. Save. Cool. So it detected a file path event. I'm curious if it does a periodic sync if you provide an HTTP endpoint. But the point is, we've updated this to on.
So the default now should be on. And if we go over here and force refresh, we have an OpenFeature-enabled website. So no, the whole point of this is there is no requirement to, I keep saying, you know, ship code, but also there's nothing to restart. It is updating a single field in essentially a database or data store, and then flagd will, you know, be the thing serving the application information. I should rephrase that. When your website or application goes to do a thing, whether it's rendering a new instance of the page or whether it's attempting to, I don't know, serve up a REST request, in the middle of that it will quickly ping OpenFeature, or flagd, and go: hey, does this feature exist? Yes, no. Is this feature good? Yes, no. And then it will change its behavior based on that. So all you're doing is modifying the data serving the feature flag server, and then your application, which is curious whether something is, you know, good, or I should say true or false in the feature flag, is just hitting that service. But you're making a single data change over here. I'm probably explaining that poorly. Please let me know if it makes sense. But finding out that you can just serve a provider. So: read data from a file path, a URL, or a FeatureFlag custom resource. Wait, can you provide multiple? When flag keys are duplicated across multiple providers, they merge by priority. If you're using a file path, flagd only supports .yml, .yaml, and .json extensions. I want to find an example of using, geez, using like a remote data thing. Cloud native flags with the OpenFeature Operator. Is this it? Is this already the thing that I'm looking at? Okay: in this tutorial, we're going to leverage flagd and the OpenFeature Operator to enable cloud native, self-hosted feature flags in your Kubernetes cluster. The OpenFeature Operator is a Kubernetes-flavored solution for adding flagd to any relevant workloads.
It parses Kubernetes spec files and adds flagd-associated objects to them. Really? Okay, so this is using kind. We have to install cert-manager because it is using webhooks. For those that don't know, internally in Kubernetes, when you're using webhooks, like admission webhooks or mutating webhooks, they are still expected to be served via HTTPS. It's kind of a fun thing to deal with, but it makes sense. So: create a namespace, install the operator, and, okay, so then we deploy a workload. Is this just deploying OpenFeature? Oh, I really just wanted to look at it. Okay, so create a new namespace; the OpenFeature Operator creates a CRD. Okay, that's actually going to be a lot more than we have time for. The point is, this looks very, very cloud native. This looks baked and made for Kubernetes itself. So for the service, if you're not using kind, skip. I might come back to this again. Experiment with OpenFeature. So you can see a fictitious app, the demo UI. This demo will get flag definitions from the CRDs, so set this feature flag. So you would be defining feature flags as CRDs that flagd is then using and then serving internally to your application. Is that right? I don't have time to do this. It's kind of working, because clearly my Docker instance wasn't there. Oh, that's going to be fun to debug later. Crap. Oh, well. I think this might be where we leave it. So let's take a back look, or, wow, I can't even speak right now, let's take a look at our notes again. We did not get to feature flag examples in the wild, but we did find out whether OpenFeature has native Kubernetes support. And on that: it looks like you can have a CRD that defines feature flags and serves them to internal apps. In the getting started, what we did: we created a basic Go app. We spun up flagd via Podman. And then what we did was we connected the two with a minor code change, and then lastly defined the actual feature flags we wanted to serve.
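From skimming the operator docs, the CRD-defined flags look something like the YAML below. I'm writing this from memory and did not get to apply it on stream, so treat the apiVersion, kind, and field names as assumptions to verify against the OpenFeature Operator reference; the flag body itself mirrors the same variants/defaultVariant shape as the flagd JSON file.

```yaml
apiVersion: core.openfeature.dev/v1beta1
kind: FeatureFlag
metadata:
  name: welcome-message
spec:
  flagSpec:
    flags:
      welcome-message:
        state: ENABLED
        variants:
          "on": true
          "off": false
        defaultVariant: "off"
```

Note the quotes around "on" and "off": bare on/off in YAML parse as booleans, which is not what you want for variant names.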
And then, to do: I want to PR or create an issue to update the docs so they're more copyable. The other thing that I want to do is revisit the more cloud native example. And then I want to link to this. Cool, cool. I think that's where I'm going to leave it, because if I go any further, it's going to be more than an hour, and I try not to do that. So thank you, everybody, for joining. Thank you to those that were in chat asking questions. Feel free to continue asking questions. You can always reach out to me. I am Jeefy usually on all the things, including CNCF Slack and Twitter. I think, and I'm actually going to look this up, next week we have a new host, so please be nice to him. It should be Moffy, and he is going to be diving into OpenTelemetry. So it'll be at a slightly different time; I think we're doing an hour earlier to accommodate. But yeah, hopefully you all had fun. Hopefully you learned something. Hopefully some of you are going to actually, you know, mess around with OpenFeature. I actually can think of a couple things that I could personally use it for, believe it or not. That's because I'm weird, and I have a home lab that I do weird things on. So thank you all, have a wonderful Thursday, and I will talk to you all later.