Welcome everyone to the weekly FireFly community call. Just as a reminder, this is a public call and it is being recorded, so I wanted to let everyone know about that. Today, for the first part of our call, Hayden Fuss will be presenting on deploying FireFly and lots of cool DevOps-related things. So thank you, Hayden, for the slides you put together. And then, as always, the second half of our call will be open discussion on whatever topics folks want to talk about. So without further ado, I will hand it over to Hayden.

Thank you, Nika. Yeah, as Nika said, I'm Hayden Fuss. I'm a site reliability engineer here at Kaleido, and I've been working on deploying FireFly in various clouds recently. So I'm here to share some of what we've learned and, more than that, just show off what we have so far. Really excited. There we go, sorry about that.

So today, what I'm going to be showing you, or discussing first, is the challenges of deploying FireFly, because it's a multi-party system and there are inherent challenges that come from that. I'll cover that first, and then I'll introduce some cloud native technologies that help make the deployment process of FireFly a bit smoother, in terms of how we can create a consistent experience for the various folks who want to deploy FireFly. And then I will get into a demo of actually deploying it onto Kubernetes, which will be most of the session. So without further ado: what does deploying FireFly involve? If you recall from previous architecture discussions and other community calls, FireFly consists of a core that is the actual engine and orchestrator of the system, but it also has lots of infrastructure runtime components that it requires: there's the blockchain interface, private messaging fronted by a data exchange connector, and then you also have public storage that's required, as well as a database.
And so deploying FireFly itself is relatively straightforward, but it's the infrastructure around it that can actually be the more challenging part. Just to reiterate: when you're deploying FireFly, you need a database, you need a blockchain node fronted by a connector, you need peer-to-peer storage, and you need a data exchange. When you're using the FireFly CLI, which I highly encourage if you're wanting to develop against FireFly (go check it out, it's great), it does this all for you using Docker Compose. But what about in cloud environments? That's where I want to take this. Zooming out a little bit and thinking about FireFly in more production scenarios, with lots of members in a multi-party system, every member in the network would need their own FireFly node, and additionally each would need the surrounding infrastructure that I just described. What's really important to highlight is that components such as the blockchain, the storage layer, and the data exchange are all peer-to-peer and require network connectivity between all the parties, and that's a very difficult problem to solve in a multi-party system. And then, sorry, the last challenge we face as part of this is that each of these members could potentially be running in different cloud environments, whether it's private cloud, on-prem, AWS, Azure, Google Cloud, etc. So it can be difficult to provide a consistent deployment experience for FireFly as a result of that. And that leads right into this idea of cloud native and how, as we're actively developing FireFly, we're also keeping it in mind to have it be cloud native, or as cloud native as we can, and iterate from there.
And what I mean when I say that, I got a good quote from the Red Hat folks on this one: when an app is cloud native, it's kind of what we highlighted in the last challenge, it's designed to provide a consistent development and automated management experience across these various clouds. And so, while there might be folks who could argue that there are other container orchestrators out there, the industry solution that has pretty much emerged as the standard to solve this problem, and to help create a consistent experience for cloud native apps, is Kubernetes. So, just in case you're not familiar, I'll give an overview of Kubernetes in 60 seconds, or 60 seconds-ish, and then we'll get into the actual demo and show off what we've been able to do so far in terms of making FireFly a cloud native app. So, Kubernetes in 60 seconds: it's an open source platform for managing containerized workloads at scale. The idea is that you're deploying Docker containers across various VMs, and you need a layer that's orchestrating all of that and managing all of those containers. And, just like how FireFly is aimed at abstracting away the various pieces of a multi-party system, like the blockchain and a few others,
Kubernetes is aimed at abstracting away the underlying cloud infrastructure. So if you want the networking in between all these VMs abstracted away, as well as the load balancing in front of them and the persistent storage, Kubernetes has emerged as an industry standard for doing exactly that. It achieves it through these sort of pluggable interfaces known as controllers, and these controllers are continually reconciling any resource you give to a Kubernetes cluster and ensuring that the state you've asked for is actually what gets made. That's how it ensures the containers are in the right state, the storage they might require is in the right state, the load balancers, etc. So, to give you an idea of what the interface is like when you interact with Kubernetes, if you're not familiar: it's declarative configuration, and that continual reconciliation, like I said, is what it's doing for you. On the right, what you're looking at is an example of defining a Kubernetes Job. All it's going to do is spin up a Docker container using a particular Docker image, and it'll just print "hello Kubernetes" and sleep for a few seconds before wrapping up. Kubernetes would take this Job, see that the container gets deployed to a VM out in the cloud, and ensure that the container either completes or, if it goes into a failure mode, it would gracefully back off on retrying until it gives up, basically. So, that was Kubernetes in 60 seconds-ish. I hope that was helpful if you're not familiar. I think it'll be obviously more helpful to see some real examples at this point, so what we're going to show for the demo today is deploying a two-member FireFly network onto a local Kubernetes cluster that I have on my laptop.
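For reference, a minimal sketch of the kind of Job manifest being described above (the name, image, and timings here are illustrative, not the exact contents of the slide):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  backoffLimit: 4            # retry with backoff before giving up
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: hello
          image: busybox
          # Print a greeting, sleep briefly, then exit successfully
          command: ["sh", "-c", "echo hello Kubernetes && sleep 5"]
```

Applying a manifest like this with `kubectl apply -f job.yaml` would have Kubernetes schedule the container and reconcile it to completion, as described above.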
And what will actually get deployed to the Kubernetes cluster is FireFly, the FireFly data exchange (which is using HTTPS with mTLS), and Postgres. We're going to use a package manager for Kubernetes called Helm to do so, with what's called a Helm chart that's in the process of getting into the FireFly repo this week, as we're talking. And then ethconnect and IPFS, the blockchain and storage layers, are going to be remote, because, like I was saying, since they're peer-to-peer, there can be challenges with running them locally easily. So that's an overview of the demo, and I'll get into that now. So I wanted to first start. Sorry, just fighting with it. Yeah, so like I said, the first thing: if you were using the FireFly CLI, and I'm just using my Docker UI here to show all the containers that would get spun up in a two-party network by the FireFly CLI, it's a very similar idea. We're going to be doing the exact same thing basically, but using Kubernetes, and then some pieces of it will actually be remote. And so what I have running on my laptop is a tool called kind. It stands for Kubernetes in Docker, and it's aimed at making it very easy to run clusters locally, so an entire Kubernetes cluster is actually running in just this one container right here. And if I do kind get clusters, you'll see I have one made called the kind cluster, and that's what I'm actually logged into right now. What I've done in advance is the cluster is already up, and it's got a controller that will help automate certificate management, which is needed for the mTLS layer of the data exchange. Otherwise it's just the vanilla installation of kind that comes right out of the box.

Hey, just to clarify, yeah, you're demoing how to run FireFly not on your laptop, by showing it running on your laptop? Exactly.
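As a sketch, the kind workflow referenced here looks something like this (these are the standard kind commands; the exact cluster name used in the demo may differ):

```shell
# Create a local cluster; the whole control plane runs inside one Docker container
kind create cluster

# List the clusters kind is managing (should show the one just created)
kind get clusters
```

These commands assume kind and Docker are already installed; the demo cluster also had cert-manager installed on top of this vanilla setup.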
Yeah, yeah, the whole idea is, because of the packaging that we're using for Kubernetes, this Helm chart that I'll show off, we'll be able to move it from the "cloud" on my laptop to a real cloud. Exactly. Yeah, I'm just running the cloud on my machine. Yeah, makes sense. Okay, no, thanks for pointing that out. So, like I said, I've got the Kubernetes cluster up, we're good there. What I wanted to show off is what I meant by a Helm chart. So, just like that example Kubernetes Job I was showing you, it's all declarative configuration for Kubernetes, and what ends up happening, in order to write a cloud native app for it, is you end up writing a lot of YAML. What then becomes a challenge is distributing the YAML and templating the YAML so that it's configurable across various environments and various clouds, which might have a few smaller details that need to be changed between them, and things like that. And so Helm is kind of like the RPM or npm of Kubernetes: it's a package manager for Kubernetes that provides the structure for how you format your YAML and how you package it up as a tar and distribute it out so that others can reuse it. So we're in the process of writing a Helm chart so that it can be shared via a Helm repo, and folks could pull it down and reuse it to deploy FireFly and the data exchange, kind of à la carte, and configure it however they need: whether it's a Postgres they have running already in their environment, whether they're using a blockchain that's provided by Kaleido in the SaaS, or their own blockchain nodes. You name it, we want it to be as configurable as possible so you can own it as much as you can. And that's what this ultimately facilitates, without getting into the weeds of all the YAML and code that has been written to make that happen.
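The packaging and distribution workflow described here maps onto a few standard Helm commands (chart directory and repo URL are hypothetical placeholders):

```shell
# Package a chart directory into a versioned, distributable tarball
helm package ./firefly

# Add a chart repository and install the chart from it
# (the repo URL here is a placeholder, not a real FireFly chart repo)
helm repo add example https://charts.example.com
helm install my-firefly example/firefly
```

The tarball produced by `helm package` is what gets published to a Helm repo so that other teams can pull it down and configure it for their own environment.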
That is, at a high level, what it's ultimately doing, and it's a lot like a Docker Compose file, if you look at the Docker Compose file that gets made by the FireFly CLI, but it's cloud native, which is the whole point. So, that was a quick overview of the Helm chart itself that I'm about to show off. And then, without further ado, we'll get into actually deploying it. So right now, if I ask the cluster that I'm running for its pods, or whether it has any pods, there are currently none. What you do when you deploy a Helm chart is you basically provide what are called values to it, YAML config that is used to template the rest of it. So I have two values files, one for a member called ACME and another for a member called Randalls. They both have enough config defined for them to spin up their own data exchange and their own FireFly node, and then they're connecting out to a remote ethconnect and IPFS that's being hosted, which I'll show off once the FireFly nodes are up themselves. We have this demo dev chart written that could actually run IPFS and ethconnect locally, or in Kubernetes, for you as well. But like I said, with the network connectivity and a few other pieces of running a blockchain itself, it's sometimes easier to just use something remote. So, what I will first do is create what's called a Helm installation: I'm going to deploy the chart and create an instance of it, which is called a release. The first release is going to use the values file for the ACME member, and that will stand up a FireFly node for ACME. Then we're going to create another release for the Randalls member, and we'll see some FireFly containers get spun up too. The interface into Helm is just a Go CLI, just like Kubernetes and the FireFly CLI as well.
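As an illustration, a per-member values file along the lines described might look like this (every key and URL here is a hypothetical stand-in, not the chart's actual schema):

```yaml
# values-acme.yaml -- illustrative only; key names are hypothetical
organization:
  name: acme
firefly:
  ethconnectUrl: https://acme-node.example.com   # remote ethconnect endpoint
  ipfsUrl: https://acme-ipfs.example.com         # remote IPFS endpoint
dataexchange:
  tls:
    issuer: ca-issuer   # cert-manager issuer supplying the mTLS certificates
postgres:
  enabled: true         # run Postgres in-cluster rather than pointing at an external one
```

The second member would get a parallel `values-randalls.yaml` with its own org name and endpoints; everything else in the chart is templated from these values.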
And so what we're doing is an installation of it: we're using the chart I was just showing off, the FireFly dev chart, we're providing the ACME values for that particular org, and we're creating a release called acme-firefly that will spin everything up. So Helm is noticing that the release doesn't exist, and it's creating it. And now, if I do kubectl get pods and watch, what we're going to see is a few containers getting going. FireFly is currently crashing only because Postgres and FireFly DX are still coming up. FireFly DX is getting its mTLS certificate through cert-manager, which is a feature of the chart that we've written; cert-manager is a controller that helps automate a certificate authority within Kubernetes for you. Once both FireFly DX and Postgres are up, we're going to see FireFly come up shortly. And then we have a job container that is going to auto-register the FireFly member into the network, so that if another member joins, it can discover this FireFly node. So this should finish up shortly. And, as I said, just to show off what's at the blockchain layer: in the Kaleido SaaS, I've already stood up, beforehand, IPFS for both ACME and Randalls, as well as a Go Ethereum node for ACME and Randalls, and then, using the smart contract management feature of Kaleido, I've deployed the FireFly payment and FireFly smart contracts onto those blockchains. All of that is preloaded in advance, and it would be something to work towards to maybe have that automated as part of some of the Helm charts, or things like that, going forward. So, the auto-registration is complete for ACME. We're going to now install Randalls, and then we will show off the actual network that's been made and running, and we'll send a transaction over, and that'll wrap it up. Oh, and I think it's worthwhile just to show off Helm.
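The install-and-watch loop being narrated here would look roughly like this (the chart path, release names, and values filenames are illustrative reconstructions, not the exact commands typed in the demo):

```shell
# One release per member, each templated with that member's values file
helm install acme-firefly ./firefly -f values-acme.yaml
helm install randalls-firefly ./firefly -f values-randalls.yaml

# Watch the pods converge; FireFly may restart a few times
# until Postgres and the data exchange are ready
kubectl get pods --watch
```

The crash-loop-then-recover behavior seen in the demo is normal here: Kubernetes keeps restarting the FireFly container until its dependencies come up, and the reconciliation loop settles everything into the desired state.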
And you can list releases and installations, so I've done a helm ls, and you can see this acme-firefly release has been made, and that is what ultimately created all those Kubernetes resources we just saw spin up. So what I'm going to do next is install Randalls, and we'll do the same thing again and just watch for it to come up. Or actually, I think we've already seen that before, so that's not as useful. What we can do in the meantime, while Randalls is coming up, is start port forwarding to the ACME FireFly node so that we can connect to its explorer, and I can show that off. Kubernetes allows you to connect to containers, if you need to, using what's called port forwarding. There are also other ways of actually exposing them for real external traffic, but we won't get into that right now. So I'm just port forwarding the ACME FireFly node that I stood up, its API, to my local port 5000, and that will also enable me to then go to its explorer. If we go to the explorer, we can see that there's one member in the network currently. And if we go into the FireFly system namespace within FireFly, we can see that some messages have been exchanged for registering the node, but otherwise there's no other member yet. And if we list the pods in the namespace again, sorry, we'll be able to see that Randalls is now up and it's still registering. Once it's done registering, we should see two members there. And while there are no messages in the default namespace, if we go into the FireFly system namespace, there have now been some additional messages and transactions to register the second member as well. So now that the other member is up, we can see that the auto-registration has completed. What I'm going to do next is port forward Randalls, and then I will show, using one of our FireFly sample apps, a broadcast over the two-member network, and we'll explore that in the FireFly UI.
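The port forwarding in this part of the demo would look something like this (the Service names are illustrative; they depend on how the chart names its Services):

```shell
# Expose the ACME FireFly API and explorer on localhost:5000
kubectl port-forward svc/acme-firefly 5000:5000

# In a second terminal, expose the Randalls node on localhost:5001
kubectl port-forward svc/randalls-firefly 5001:5000
```

Port forwarding is a debugging convenience that tunnels traffic through the Kubernetes API server; for real external traffic you would use a Service of type LoadBalancer or an Ingress instead, as alluded to above.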
I'll actually take the transaction hash and show it in the actual blockchain layer that's hosted in Kaleido, and at that point that will conclude the demo. Yep, so the next step is to port forward Randalls, and this one will listen on my localhost, port 5001. And so now, if I go back to my browser and refresh this page, I'm now connected to the Randalls FireFly explorer and can see the same thing on its side in terms of members. We haven't exchanged anything in the actual default namespace yet, so that's the next step. So, in my other terminal, while I have some port forwards going: we have a firefly-samples repo in the Hyperledger community, and there are two applications in it. One is an actual React UI app; the other is a CLI. Today I'm just going to use the CLI to quickly send a broadcast from one of the members to the other. The CLI by default assumes that you have two FireFly nodes running on 5000 and 5001, and it's going to send a broadcast from 5000 to 5001, so we should see a broadcast from ACME to Randalls using the CLI. By just doing npm start, we'll see that we're broadcasting the data values "hello" and "world" via FireFly 1, which is the ACME node. What we're seeing on the port forward from Kubernetes is some requests and connections being made and handled. And then we actually see that we received the data on the other end, on the other FireFly member. And then what was printed here was a record of the actual transaction that was made on the blockchain as a result of that. So if I take this transaction hash, go into Kaleido's data explorer for transactions for the blockchain that was stood up, I can then search for that transaction, and we can look into it. You can even see, right there, this was from one of the ACME node's wallets, and it went to the FireFly contract as a result.
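The broadcast step, sketched as commands (this assumes the samples CLI's stated defaults of FireFly nodes on ports 5000 and 5001):

```shell
# From the CLI application inside the hyperledger/firefly-samples repo
npm install
npm start    # broadcasts the values "hello" and "world" from the node on :5000
```

Under the hood, a broadcast like this goes through FireFly's REST API; the message body is pinned to the blockchain as a hash, while the data itself travels off-chain, which is what the next part of the walkthrough shows in the block explorer.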
And so, you know, again, none of this data, the actual "hello" and "world", was ever put on the blockchain itself; only hashes of the data and the transactions were actually put on the chain. And you can see what method was called within the FireFly contract that facilitated that, and you can see the hash data as well. We could even reverse it if we wanted to, sending it back the other way from one member to the other, but I think the last thing to show would just be that this popped up in both explorers. So we're in the default namespace on the ACME FireFly node, and we see the broadcast that was sent, and we can see the same thing on the Randalls node as well in its messages. So, that was getting FireFly working on Kubernetes locally. And like I said, to wrap up here and emphasize what this Helm chart would then allow: if you have your own Kubernetes cluster, whether it's running in EKS or on-prem, and you have some of the infrastructure that would need to be around it maybe already pre-provisioned, you could take this chart, provide some config to it, deploy it out, and get your own FireFly network running as well. And so, yeah, I think that does it. I really appreciate folks giving me the time. I hope you enjoyed it.

Thank you, Hayden. Appreciate the work you put into making the chart. That's great.