Welcome everybody to the FireFly community call. I would say weekly, but we've actually just moved to an every-other-week cadence, so welcome to the every-other-week FireFly community call. Just a couple of quick announcements and status updates to kick us off. I'll share my screen to show a few things real quick. New in all of the repos, we now have an updated list of maintainers specific to each repo, and the CODEOWNERS files have been updated to reflect these lists as well. So if you open a PR to any of the repos and look in the MAINTAINERS.md, this is the list of people who will get notified, and somebody on this list has to do the review. There are also documented expectations of what maintainers will do, and a formal process for becoming a maintainer. We are looking to add more maintainers to each of the projects, so if you're interested, this is the guide on how to do that. If you're interested in becoming a maintainer, I'd definitely encourage you to make contributions. And if you're interested in making contributions, you can look in the CONTRIBUTING.md, which now links to our contributors guide for all of the different FireFly repos: how to connect with us, finding good first issues, setting up a local development environment, and some helpful tips if you're new to Hyperledger or to open source. I can't recall if I showed this guide in a previous call, but it's a new guide, and in case you haven't seen them, there are also some new features in the CLI that make it a lot easier to do local development on FireFly itself and coordinate that with all of its dependencies. So if you're interested in getting involved, those are some great places to get started.
If there's stuff missing there, or you have questions, please reach out to me and I'd be happy to clarify things or add additional documentation. So I just wanted to provide a quick update on that. In other updates, we are in active development on the CLI and supporting multiple ledgers right now, so I've been working on Fabric support, and one of our community members, who has actually joined us today, is looking at further Hyperledger support as well. So some exciting developments going on there, and I'm super excited for the collaboration with other members of the community. Coming soon, not there yet, still under heavy development. Also wanted to let Andrew give a brief update on tokens.

Yeah, I can do that. And I will grab the screen share from you, just to show a few pictures. So we talked about tokens on one of the very first community calls back at the end of June, and we were presenting some of the ideas that we had for tokens and opening that up for discussion on how it would actually be fleshed out. Happily, over the past couple of weeks I've been able to make that a primary focus, getting down to the nitty gritty of writing up some of the code, and I just wanted to share where it's up to. This is a slide that we've shared in a lot of presentations, and I wanted to point out that we had a few boxes in here that didn't actually exist until now, and now they are very close to actually existing. So we do have one token bridge runtime: we picked an Ethereum-based ERC-1155 as the first token bridge. We anticipate many, many more, because tokens are hugely flexible components of a blockchain system, and we think there are going to be a lot of different plugins here. But then inside FireFly we have both an asset manager and a token interface, and both of these are part of pull request number 154 that's open right now.
So if anyone wants to look at that code, you can see how the asset manager is shaping up, how the token interface is shaping up, and particularly the plugin for this one token bridge that's already created. Going on to steal another slide that came out of one of Nico's earlier presentations about the architectural layers of FireFly and the plugins: I just wanted to highlight that we've slid in kind of a new full stack here, the bluish lavender stack. We have a token interface similar to all the other FireFly core interfaces. We have a connector that connects specifically to an HTTPS-based tokens plugin, and this could be any type of underlying token, as long as it meets this HTTPS API. And then we actually have the first sort of three-level-deep set of microservices, where we have a new repo that implements the ERC-1155 and exposes it as this well-known API that I've defined, and underneath that it's actually using ethconnect, which ultimately uses Ethereum and an ERC-1155-based contract. So we have this whole stack. We ended up needing kind of an extra layer in here: this layer is very similar to the data exchange HTTPS layer, but it's backed by ethconnect. The last quick slide I'll show is where we're at on all the different pieces that are involved. It's mostly what I've been working on, but I'm certainly open to other contributors that are interested in tokens, either on this plugin or creating new plugins. There's a new component here called firefly-tokens-erc1155. This is written in TypeScript, because we feel there's a wide base of TypeScript skills out there that may bring contributors to this or other plugins. And then it deploys a Solidity contract that's ultimately deployed onto Ethereum via ethconnect, similar to some of our other blockchain work. And of course the core work in FireFly is underway right now; CLI support was just merged as of 0.0.29.
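Since the connector exposes the token as a "well-known API", here's a rough TypeScript sketch of the kind of shape such an interface could take. To be clear, every name here (TokenConnector, createPool, mint, and so on) is an illustrative assumption, not the actual interface of the firefly-tokens-erc1155 repo; the in-memory class just stands in for the real ERC-1155 contract behind ethconnect, to show why the layer is swappable.

```typescript
// Illustrative sketch only: the real firefly-tokens-erc1155 API may differ.
// The point is that FireFly talks to a simple HTTPS-style interface, and the
// implementation behind it (ERC-1155, or anything else) can be swapped out.
interface TokenConnector {
  createPool(poolId: string): void;
  mint(poolId: string, to: string, amount: number): void;
  transfer(poolId: string, from: string, to: string, amount: number): void;
  balanceOf(poolId: string, account: string): number;
}

// An in-memory stand-in for the Ethereum/ethconnect-backed implementation.
class InMemoryTokenConnector implements TokenConnector {
  private balances = new Map<string, number>(); // key: "poolId/account"
  private pools = new Set<string>();

  private key(poolId: string, account: string): string {
    return `${poolId}/${account}`;
  }
  createPool(poolId: string): void {
    this.pools.add(poolId);
  }
  mint(poolId: string, to: string, amount: number): void {
    const k = this.key(poolId, to);
    this.balances.set(k, (this.balances.get(k) ?? 0) + amount);
  }
  transfer(poolId: string, from: string, to: string, amount: number): void {
    const fk = this.key(poolId, from);
    const have = this.balances.get(fk) ?? 0;
    if (have < amount) throw new Error("insufficient balance");
    this.balances.set(fk, have - amount);
    this.mint(poolId, to, amount);
  }
  balanceOf(poolId: string, account: string): number {
    return this.balances.get(this.key(poolId, account)) ?? 0;
  }
}
```

Anything that implements the same interface could be dropped in underneath FireFly without the layers above changing.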
And then we have these well-defined REST APIs at each layer. So there are a lot of different pieces in flight here, and this was a very quick overview, but if you're interested in knowing more about any of these pieces, or contributing in Go, TypeScript, or Solidity, because I could certainly use experts in all of these to help contribute, then please reach out to me or to Nico on Rocket.Chat and we can see where you can help out.

And just to highlight and reinforce something that Andrew said: the decision to introduce a token bridge is important because we do not want to provide an authoritative way to do tokens. Everybody has a different way they're going to want their particular token to work, or their application to work with tokens, so this firefly-tokens-erc1155 bridge that we are building, view it as a reference implementation of how a token bridge can be built and how it connects to the rest of the FireFly ecosystem. It is not by any means an authoritative statement on how tokens shall be built. That is completely up to the community, or anybody who wants to deploy FireFly, run it, and customize it: they can drop in whatever implementation of the bridge they want, whatever smart contract for actually executing the token logic on the other side that they want. We wanted it to be very modular, and that's why that bridge piece exists there.

Yeah, exactly. I just quickly want to say we haven't started on the bridge yet though, just the connector. The connector is the piece that does tokens, fungible and non-fungible tokens. A bridge is a piece that we're talking about getting to down the road: I've got tokens in my ecosystem and I want to bridge them to another. Just want to make sure no one was confused. Yeah, I may have overloaded the terms as well, so we may need to clarify connector versus bridge.
So in the recording we'll go and edit everything I said to get the terms right. Thank you, Peter, good call-out there. This is really exciting stuff. I've been watching the progress here and it's going to be great to have this in. We've talked about the two really big legs of the stool that need to come together before we're at sort of an MVP level of functionality for FireFly as a whole. One is the token capabilities, because they're just so core to the programming of any blockchain application, and the other is the sort of generic on-chain logic support, and the Fabric part of this. I did want to share that there's also a lot of work happening on that thread as well; the Fabric work is really going great guns. It's at a similar level, but Jim's out this week, so I think there'll be an update on that one in the next one of these sessions. Yeah. Awesome. Alright, I think that is it for updates. So with that, Peter, I will turn it over to you for our discussion on sequencing. Peter, you're on mute; I'm not sure if you intended to mute yourself, or that was an accident. Thanks, Nico. So we did a session, the date's still on the slides, 7th of July, and we talked about event sequencing in FireFly, because it's a pretty critical piece of what FireFly does. We got through the basics, and then we paused and said we'd come back to the details. So the idea today is to go down into the weeds: talk about how privacy and sequencing go together, and talk about the internal event model, the process that's inside of FireFly, because it's actually got quite a lot of a messaging system inside of it. So we're going to go into that level of detail, but just in case there's anybody who doesn't have a perfect memory that spans multiple months,
I thought it might be worth doing just two minutes at the beginning catching up on the background, and we'll try and finish this session at about twenty-to, so we leave some time for discussion. Okay, so sequencing. The reason why sequencing is such an important construct in any multi-party system is because, under the covers, it's the building block that makes everything else possible. Scenarios like tokens work because you've got a shared history, and they are massively powerful because they allow not just sophisticated on-chain constructs; they actually allow those on-chain constructs, even really simple ones like just ordering stuff, or more complicated ones like tokens, et cetera, to be coordinated with off-chain, real-world enterprise systems. So you can have your parties in a network, like the three we've got on the screen here, and each one of them can have the core systems that they already have, right, that they've been building for the last 20 or 40 years, depending on how big that organization is. They've got very established systems of record. They can join a multi-party system. And let's not be under any illusions: they're not going to throw away all of what they're doing today just because they've joined a multi-party system. But they are going to start doing something differently. Chances are they're going to get involved in a multi-party business process: they're going to try and solve a problem in a multi-organization way. And it's going to involve some new steps, right: some agreed data formats being exchanged in an agreed sequence, agreed decision points, whether they're automated or human, whether some of those decisions are shared logic and some of them are customizable logic; it's going to be this sort of set of things.
And this is not a new problem; this is what messaging systems have been built for for the whole of my career. But all previous messaging systems had this problem: I've got my sequence, you've got your sequence, and we've got no agreed sequence, or you have to do really complicated things like compensation logic to deal with when your view of the world disagrees with my view of the world. Multi-party systems have this change agent in the middle, the blockchain, that means you don't need to do that: we can actually have a shared, agreed sequence. And we talked about the fact that there's just the one thing you have to do, because it is a blockchain: you need to process the events in a way that understands you're not the only person in the system. So we chose this picture, and we talked about the fact that you can think about a multi-party system like a funnel. Lots of parties involved are trying to do things. They know the order in which they're doing them; their system, their FireFly, knows that. You know, Bob and Sally here: Bob submitted balls one, two and three, and Sally submitted balls one, two and three. We're used to that being very deterministic: you submit three messages into your MQ or Kafka or whatever, or you make three REST API calls, and you expect them to be processed in the order one, two, three. But in a multi-party system, the problem is you're not the only person submitting, so your submissions are being ordered alongside somebody else submitting events at the same time. And because we're decentralized, we haven't just got one central runtime here; being decentralized means that those runtimes have to coordinate.
And what FireFly does, one of the jobs it does, the building block it provides on which you can build much more sophisticated solutions, is take these balls being thrown in. You saw the funnel graphic on slide one here, where everyone's in the playground throwing balls into the hoop at the top and they're coming out the bottom. It will create one agreed sequence: it will receive the items in order from the multiple parties and create one agreed sequence. And you can see here that Bob's sequence is in the order that he submitted it, one, two, three, and Sally's sequence is in the order that she submitted it, one, two, three, but they're intermingled. And when you're thinking about building your business logic, this is hugely powerful, because it means that you can both process these in the same order. So Bob might request something to go into the top of the funnel, but it's really quite important that his application waits for it to be ordered, and processes all of the things that go before it, before processing it. It's the same in Sally's scenario: if Sally submits ball number one here and immediately assumes that it's the right next thing to process, that's incorrect, because actually her application needs to process Bob's ball number two before processing hers. And we used the example before of a bid. I think we used the crate of bananas: there's one crate of bananas, and a whole bunch of parties in the system bidding on it. If one of those parties says, look, I'm going to submit the highest bid, or, if it's first past the post, I'm going to be the first one to grab this thing, you can't assume that you've grabbed it just because you said grab; you might not win the race. So if you want to know whether you grabbed it, you need to check the state after your intent was submitted.
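To make that ordering guarantee concrete, here's a tiny TypeScript sketch, with made-up names, of the invariant being described: the global sequence can interleave Bob's and Sally's balls however the chain decides, but each sender's own balls must still appear in the order they submitted them.

```typescript
// Toy model of global sequencing: the "blockchain" interleaves submissions
// from many parties, but each party's own submissions keep their local order.
type Ball = { sender: string; localSeq: number };

// Checks the invariant over a hypothetical agreed global sequence: for every
// sender, localSeq values must appear as 1, 2, 3, ... with nothing skipped
// or reordered, even though different senders' balls are intermingled.
function preservesLocalOrder(globalSeq: Ball[]): boolean {
  const lastSeen = new Map<string, number>();
  for (const b of globalSeq) {
    const prev = lastSeen.get(b.sender) ?? 0;
    if (b.localSeq !== prev + 1) return false; // out of order for this sender
    lastSeen.set(b.sender, b.localSeq);
  }
  return true;
}
```

So an interleaving like Bob-1, Sally-1, Bob-2, Sally-2, Bob-3 is perfectly valid, while anything that reorders a single sender's balls is not — that's exactly what the sequencer must never produce.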
And because we're decentralized, the processing has to happen either on the blockchain, which is one option, shared state, very difficult if you've got privacy, or everybody has to do the same processing and come up with the same answer, so you run the same logic everywhere. So that's a summary of the problem we're trying to solve. The core thing on which Bitcoin was built, and the evolution of all things since, is this concept that everybody can have an agreed sequence. So you layer the logic on top, and FireFly lets you layer the logic deterministically on the chain, but it also lets you layer the logic so that it's executed multiple times by different parties off chain. Here's a slightly more detailed picture than the previous one that we went through last time. You've got Bob: Bob's got his FireFly node, which includes messaging for off-chain exchange, includes IPFS for storing broadcast data, includes a database, includes a blob store for storing data like large documents, et cetera. There's the runtime with the API, and Bob can submit a green and then a red message. Sally's got exactly the same stack in her FireFly node over here, and she submits the blue message, and they go into FireFly core. But it's very important that instead of just assuming that because you sent it, it's been sequenced, you wait for the confirmation, and the confirmations will come in a deterministic order, and the order is sure to be the same for both Bob and Sally: red, blue, green. Now, a couple of weeks ago we talked about a feature in flight in FireFly that would allow this API call to block until this event comes back to your application, so until it's been sequenced and all other events before it have been emitted; a block to stop your API call and make it wait. Let's say it takes five seconds: your API call will wait for five seconds until this has been sequenced.
That's now in. And there has been a bit of a shuffle of the API that maybe we could go through next time; especially when tokens come in, it might be worth doing another review of the API in an upcoming session, just to talk through it. But you can now send a message and say confirm in the query parameter, and that API call will block until it's been sequenced. So that allows a mechanism, a model of programming, where you submit, you're blocked, and once that API call has returned, you can go and ask for the state and say, oh, where are things up to, and you'll know that you're checking the state after that action was processed. So that's one extra thing that's come in, but fundamentally this is the core model of FireFly: you're submitting, but really, and we talked about event-driven programming a couple of sessions ago, rather than thinking about blocking, the right way to write your application in 90% of cases is to just have a listener in your app. Remember we were doing sort of event-driven programming 101 last time: your app should be structured so that it submits actions, that's one task it does, and says thank you very much, and then it processes events, continually listening to events. That's the best way to write applications in the vast majority of circumstances. Okay, so that was the catch-up. Now we're going to go another level deeper, and we're going to lift the bonnet, or hood, on the engine, and look at how the internals of FireFly are structured to make all this possible. And then we're going to talk about how the heck we do it in a privacy-preserving way as well.
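As a rough sketch of the two programming styles being contrasted, here's a toy TypeScript model. MockFireFly is entirely made up for illustration: in the real system, sendAndConfirm would be an HTTP POST with confirm in the query parameter that blocks until sequencing, and the listeners would be WebSocket connections delivering events asynchronously, not synchronous callbacks like this mock.

```typescript
// Toy model only: contrasts the blocking confirm style with the (preferred)
// event-driven listener style. Both are simulated in memory here.
type MsgListener = (msgId: string) => void;

class MockFireFly {
  private listeners: MsgListener[] = [];

  // Event-driven style: register a listener and react to confirmations.
  onConfirmed(fn: MsgListener): void {
    this.listeners.push(fn);
  }

  // Blocking style: the call returns only once the message is "sequenced"
  // (immediately in this mock; after blockchain confirmation in reality).
  sendAndConfirm(msgId: string): string {
    this.listeners.forEach((fn) => fn(msgId)); // listeners see the same event
    return msgId;
  }
}

// Usage: the listener sees every confirmation, in the agreed sequence order.
const ff = new MockFireFly();
const confirmed: string[] = [];
ff.onConfirmed((id) => confirmed.push(id));
ff.sendAndConfirm("msg-1");
ff.sendAndConfirm("msg-2");
```

The point of the listener style is that your app does one thing when it submits, and all actual processing hangs off the confirmed-event stream, which is the same for every party.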
Those of you who are more on the deeper protocol side might find that last piece pretty interesting. So this is a summary picture of the internals of FireFly that are responsible for making the last slide true. And it's basically a messaging engine. This piece of the code does probably some of the heaviest lifting, because, as I said before, it's the one thing that can only be done in the FireFly layer: it's about coordinating together things that happen on the blockchain, things that arrive over messaging, things that are available in IPFS. That means it can't live in any one of those technologies; it has to be in the orchestrator. This is one of the reasons why this is a big, heavy-lifting piece of FireFly: FireFly is the orchestrator for this on-chain/off-chain coordinated sequencing. So we've got Sally's app here, and Sally does a POST of a broadcast in this particular scenario. If you don't specify confirm=true, then in the default mode of operation, and the mode of operation I'd encourage for the majority of use cases, you're going to get an acknowledgement with a 202 rather than a 201, apologies for the incorrect slide here: you get 202 Accepted back straight away, as soon as that message has just been written to a local messages table. So there's a database table, and this is a summary of it; this isn't meant to be the full database schema, just enough to give you an idea. And it has a local sequence. It's really critical to understand that this sequence is only useful within my local FireFly. And the reason why Sally's FireFly needs to keep a record of her sequence is that this FireFly needs to make sure it submits her messages into the blockchain for global sequencing in the same order that Sally submitted them. It would be a crazy, weird system otherwise.
If, when Sally submitted messages one, two and three, Sally's messages ended up getting ordered on the blockchain two, one, three, that would just be really hard for any programmer to work against. So FireFly needs to be absolutely certain, locally, that it knows the order in which messages were submitted by the apps connected to the local FireFly. And it does that by putting them into a messages table. That's all it is: when you do a send, it just goes to a local database table, and that API call is done. The next step is that the batch processor kicks in. And the reason we have a batch processor is that from previous generations and production deployments, we realized that these technologies, blockchain and even more so technologies like IPFS, are not designed to give you very high throughput by submitting lots and lots of individual things sequentially. Particularly something like IPFS: if you take tiny packages of like a kilobyte of data and you submit a thousand of them, that will take a thousand times some number of milliseconds. If you submit one 1,000-kilobyte piece of data to IPFS, it's not going to take the same amount of time; it's maybe only going to take a tiny fraction of the time it would take to submit all of those individually, because each one has to be propagated to everybody, everybody has to go and download each of them, and you're being very inefficient in how you use the technology. So FireFly comes with a batch processor built in, and the batch processor listens using a database-layer listener. There's a detailed point here about where we are on the journey to an active/active runtime for FireFly, but this is all designed with active/active in mind, even though we actually only support a single runtime at the moment for each FireFly node.
The batch processor gets a tap on the shoulder to say, hey look, there are more messages in the messages table, and it's constantly trying to aggregate those together. There are options on here like: how long do you want to wait, and what's the maximum number of messages you'll put into an individual batch? So maybe wait for up to half a second to fill up a batch, that is, delay any individual message by up to half a second to allow friends to join it, and allow maybe 500 messages to join it. That would be an example of the sort of configuration on the batch processor; I can't remember what the defaults are, but they're in the order of what I just described. And it spins around assembling a batch, and the batches are on the REST API so you can see them, and it updates the individual messages with the ID of the batch they've been popped inside of. So that's what happens when you're globally sequencing. None of this happens if you're just doing completely unpinned private sends to somebody. But when it's being pinned to the blockchain, we call it a batch pin transaction: the message gets assembled along with a bunch of its friends into a batch, and each of these messages gets updated with a batch ID. And then, once the batch is ready for dispatch, it gets what we call sealed: all of the hashes et cetera get generated, so it can't be changed from that point on. And then the full data of the batch gets written to storage, or sent across private messaging. So the blockchain gets a teeny tiny transaction which has just enough proof that you can tie it back to the batch. And that's the key thing: the blockchain is the sequencer. The blockchain has to put it in the right order; the blockchain does not need to contain the data. So the blockchain says: this is the batch, in this order; and IPFS or the private off-chain exchange sends the batch data across. Okay, so that's what send looks like.
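The batch assembly rule just described, fill a batch up to a maximum size, or dispatch a partial batch once the oldest message in it has waited long enough, can be sketched like this. The option names maxSize and maxDelayMs are my own stand-ins, not FireFly's actual config keys, and I'm not asserting the real defaults.

```typescript
// Sketch of batching: accumulate messages until either the batch is full
// (maxSize) or the oldest pending message has waited maxDelayMs.
type Msg = { id: string; receivedAt: number };
type Batch = { messages: Msg[] };

function assembleBatches(
  msgs: Msg[],
  now: number,
  maxSize: number,
  maxDelayMs: number
): { batches: Batch[]; pending: Msg[] } {
  const batches: Batch[] = [];
  let pending: Msg[] = [];
  for (const m of msgs) {
    pending.push(m);
    if (pending.length >= maxSize) {
      batches.push({ messages: pending }); // full batch: seal and dispatch
      pending = [];
    }
  }
  // Dispatch a partial batch if its oldest message has waited long enough.
  if (pending.length > 0 && now - pending[0].receivedAt >= maxDelayMs) {
    batches.push({ messages: pending });
    pending = [];
  }
  return { batches, pending };
}
```

With, say, maxSize 500 and maxDelayMs 500, a quiet system dispatches single-message batches after half a second, while a busy system packs hundreds of messages into each blockchain transaction and each piece of IPFS data.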
It's very simple: it goes into a local database table, gets assembled into a batch, and then gets sent to public storage and the blockchain. That's actually the easy bit. Why is that the easy bit? The exciting bit is that now this blockchain transaction is being ordered with all kinds of other transactions going into the system from all kinds of parties. And the data is going to arrive at a completely different time. If it's IPFS, then it should be there, and you have to go and suck it down. If it's data exchange, it's going to be in a messaging system, right: it's going to be flowing asynchronously, at the right time, to get across to you. So what we now need is an aggregator. I don't have time to go into all the real nitty gritty of how this works, and it's actually quite a lot of code inside of FireFly, but once the aggregator gets started, basically its job in the world is to listen for all of the arrivals that could happen and work out whether an arrival caused something that wasn't complete to become complete. And the reason I'm choosing all those words is that the blockchain stuff can arrive at any time, the data exchange data can arrive at any time, and the public storage data needs to be sucked in. The way it actually works is that when blockchain events come in, there's a listener that's part of the connector for that particular blockchain: ethconnect is the connector for Ethereum, fabconnect is the connector for Fabric, so these are separate runtimes, just like the token connector that Andrew was talking about earlier today. The blockchain connector is just listening reliably for events, and it needs to accept those events as they come in and just acknowledge them: yes, yes, yes. So all the blockchain connector does is write those into what we call the pins table, and we'll talk a little bit about what a pin is on the next slide. It writes them to the pins table.
And the inbound data aggregator is listening to the pins table, as well as listening for private data, and the data aggregator is responsible for maintaining basically this dispatched column on the pins table. It uses this to ask, when something new comes in, a new blockchain piece: is it complete? Do I have all of the data? What sequence is this on? And we've talked in the past about these sort of sub-sequences, these topics, where you can say: block the world on this particular topic if you know you're missing an event, but don't block others. It fills these up, and they go from false to true. And as soon as one goes from false to true, which can only happen if all the data is available and there are no previous pins in a false state, not complete, on the same sequence, the inbound data aggregator just writes a new event to an events table. So it's very database-coupled. There are a couple of reasons why it's so database-coupled. One is the high-availability model, because it means we can run active/active runtimes on top of the database. The other is that unlike many messaging systems, the goal of FireFly is that you can query all state from time zero at any point. So the fact that you sent all of these messages, and the events themselves that are coming in, they are all written in perpetuity; they're never deleted. And that's very different to a traditional Kafka or MQ messaging system, which is maintaining a really super-fast little buffer to just transport things in flight, whereas this is about a complete history of time. So that's why we use a database here, and we try to be as efficient as we can, and I'm sure there'll be more optimization in the coming months on that usage of the database. So then you've got the events table and the events.
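A minimal sketch of that dispatch rule, under my own assumed field names (the real pins table schema will differ): a pin flips to dispatched only when its off-chain data has arrived and no earlier pin on the same topic is still incomplete, so one missing piece of data blocks its topic but not the others.

```typescript
// Sketch of the aggregator's rule: walk pins in blockchain (sequence) order,
// dispatching each pin whose data is available, unless an earlier pin on the
// same topic is still waiting — in which case the whole topic is blocked.
type Pin = { topic: string; dataAvailable: boolean; dispatched: boolean };

function runAggregator(pins: Pin[]): Pin[] {
  const blockedTopics = new Set<string>();
  for (const p of pins) {
    if (p.dispatched) continue; // already handled on an earlier pass
    if (p.dataAvailable && !blockedTopics.has(p.topic)) {
      p.dispatched = true; // would also write a row to the events table
    } else {
      blockedTopics.add(p.topic); // later pins on this topic must wait
    }
  }
  return pins;
}
```

Re-running this whenever new data or new blockchain events arrive is what lets an arrival "cause something that wasn't complete to become complete".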
There's only one event, even if you've got multiple applications listening, you know, to those yellow balls that were coming in from Sally and the blue ones coming in from Bob. You might have five applications all connected to the same FireFly that want to listen; there's still only one event that gets written into the events table, otherwise it would be very inefficient on the database. And the events are just a pointer back to the messages and the data, so we don't write the big stuff multiple times, but we do need these little records of: here's a sequenced event. And the absolute truth in FireFly of the sequence of events is this table. Because remember, we talked about how they go in, they get sequenced, and they come back. This wire here is where they've been sequenced, and it's going to be the same on both sides. That's the events table. The last piece of the puzzle here, and we may pause at the end of this for questions and see if we want to come on to the privacy-preserving piece, because it's a little bit more complicated than this, is the subscription manager. The subscription manager is responsible for starting and stopping what are called event dispatchers. And this is because you might have multiple applications interested in the same message, the same event. A very obvious example of this is if I've got a copy of the FireFly Explorer running, and you've got a copy of the FireFly Explorer running on your laptop, and we're both developing against the same FireFly server. Obviously we both need a copy of the message. It's not going to be very useful if my UI updates one time and then randomly your UI updates another time, and we're seeing different updates; that would seem really, really weird. It's exactly the same for applications. If you've got multiple instances of the same application, you might want messages to be workload-balanced between them.
Or you might want each one of those applications to get a copy of the message, and we talked about durable versus non-durable, or named versus ephemeral, applications in the previous talk. So what the subscription manager does is maintain these event dispatchers, and the event dispatchers are started when the applications are listening. So if you've got an application that connects over a WebSocket, that connection goes into FireFly, and the subscription manager kicks off an event dispatcher for your particular connection. If you've got an application that wants just one copy of each message, it might open multiple WebSockets into FireFly; then there's a leader election that happens so that only one event dispatcher gets started, and that becomes the leader, and all messages will go to that one application instance. If that connection disconnects, it goes to the other. For webhooks, which is a push model, there's one event dispatcher for all webhooks, and it's just receiving events and pushing them to as many webhooks as needed. So those are the event dispatchers. In terms of how it works inside of the database: we don't have lots of copies of the events, the events exist exactly once, but the events, just like the messages, have a local sequence, so we know the order within the database. So all we need to maintain in each subscription is a pointer, an offset, a really well-established pattern in messaging. We can just maintain an offset where a particular subscription knows where it's up to in the sequence of events. Subscriptions can also have filters applied, and that's fine: you just go through these events one by one, saying doesn't match, doesn't match, and if it does match, there needs to be an event dispatcher ready to process it. If it doesn't match, we just move the pointer forwards regardless.
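That offset-plus-filter behaviour can be sketched in a few lines of TypeScript. Subscription and poll are invented names for illustration, but the key point matches the description above: events are stored exactly once, each subscription keeps only an offset into the shared sequence, and the offset advances past every event whether or not it matched the filter.

```typescript
// Sketch of a subscription dispatcher over one shared events table: per-
// subscription offsets plus a filter. The offset always moves forward,
// matched or not, so each subscription tracks where it is in the sequence.
type FFEvent = { seq: number; topic: string };

class Subscription {
  offset = 0;
  constructor(private filter: (e: FFEvent) => boolean) {}

  // Returns the events this subscription should deliver, advancing the offset.
  poll(events: FFEvent[]): FFEvent[] {
    const delivered: FFEvent[] = [];
    for (const e of events) {
      if (e.seq <= this.offset) continue; // already processed
      if (this.filter(e)) delivered.push(e);
      this.offset = e.seq; // move pointer regardless of match
    }
    return delivered;
  }
}
```

Because each subscription is just an offset and a filter over the one events table, adding a fifth or fiftieth listener costs almost nothing in the database.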
So that's kind of the messaging engine of FireFly in a nutshell. This receiving piece is the more complicated piece, because it's about aggregating together multiple sources of data and dealing with all of the different paradigms for how subscribers need to work against an application. On the sending side, the more complicated bit is the batching, which is how we're able to get higher throughput out of IPFS more than anything else for broadcast, but also out of messaging systems and blockchains as well.