Hey everybody, welcome. We'll get started here in just a few minutes; I need a minute to round up some maintainers. We're supposed to have a vote on this call and we need some people to vote. All right, I found some of them. Hey Enrique, are you good to demo the thing that we chatted about, webhooks and retries and stuff?

Yeah, I'm good to demo.

Okay, cool. A little bit of a late start today, but we'll kick that off in just a minute. All right, let's go ahead and get started. Thanks everyone for attending today's FireFly community call. I'll share my screen here real quick and briefly go over the agenda. As with all Hyperledger meetings, if you haven't had a chance to read it, please take a look at the Hyperledger antitrust policy and code of conduct; this meeting will be held in accordance with those. This is being recorded, and the recording will be made publicly available on this page after the meeting. So if somebody misses the meeting and wants to go back (I guess if you're here, you won't miss the meeting, but if somebody else wants to go back), they can watch the recording.

Today's agenda items: we have a vote to bring on some new maintainers, we have a demo of some new features that have been merged into the main line, and we have some architectural changes we wanted to talk about, which include some significant performance improvements as well. So that's the agenda for today's call.

We're going to start today's call with a vote. At the last community call there was a motion to bring on Chung and Matthew as maintainers.
They have been contributing to Hyperledger FireFly for quite a while now, and have made some very meaningful contributions in EVM Connect, in FireFly core, and in other places in the project. They've really demonstrated great thought leadership in the project and a strong grasp of the architecture, and have dug deeply into some of the most complex code in the project to improve it. So I put forward their names last week as recommendations to add them to the maintainers list, and the process says that we need to vote on the next community call. So we'll just take a quick vote: if you want, just put either yes or no in the chat. We'll vote on them separately, so a yes or no for each one in the chat. And I do believe we have a quorum of the maintainers here on the call, so we'll be able to wrap this up right here, right now. Just another minute here; we're working through some Zoom logistics. There are a few of us having to share the same setup, but we're here, apologies.

This is Peter here. So I think there's a motion to add new members to the community, and with a motion like that, are we recording somewhere that there's an agreement? We're not just doing yeas and nays on the call; there should be somewhere we're recording that the vote happened.

Sure. Okay, so it's a yea or nay from the maintainers. We'll just go down the list of maintainers in the chat, but we can do verbal if you prefer; verbal doesn't matter to me either way.

Yeah, so do you want to call it out? Who's going to make the motion?

Yes, I motion to bring on Matthew and Chung as maintainers of Hyperledger FireFly.

Great, anyone second?

I second.

Yeah, okay, cool. So now I'm going to do a roll call.

Okay, so we're roll-calling through a list of maintainers? Yes. So where's the list of maintainers? Listing the maintainers is good practice.
Yeah. This is the kind of quality governance content people join the FireFly call for; TSC members do this all the time.

Okay, first one: Peter, how do you vote? Yea. Okay, second: Nicko, how do you vote? Yes for both. Third: Andrew, how do you vote? Yes for both. Great. So I assume, Peter, your yea was for both? Last one: Alex. Okay, great.

So this is the maintainer list according to FireFly core? We don't care about the other maintainers or code owners from other repositories, right?

I believe we do, because there are other maintainers of other repos within the larger project.

Sorry, just to check: FireFly is everything, so are we talking about maintainers of the core repo inside of FireFly specifically?

I think "maintainer" is a project-wide title, and "code owner" is a repository-level one.

Yes, we're talking about maintainers voting on that basis. And that list: the question from me to you guys is, is the list of maintainers for the whole project recorded in one place, or is it an aggregation from multiple places?

I thought there was a GitHub group, a GitHub team, that had all of them in there, but now that I'm looking at it, it seems that, for instance, Matthew is already in the group that I thought was the group.

Okay, so how about this: I think we should just... I apologize, I feel like I've disrupted the meeting, I didn't mean to. I think we can very easily complete this in Discord. We've got a set of yeas and a set of maintainers; it's a really positive thing and I've disrupted it with procedure, and I apologize for that. We'll just close it out in Discord: what the list is, where the list of maintainers lives, that it's being updated, and how it's going to get updated.
So that sounds like a plan, sounds good. So I think we'll continue the motion, but the list will be decided in Discord, is that right?

I think it's just too disruptive to not have the list known right now. What I heard was: when voting on adding people to a list of FireFly maintainers, we're not a hundred percent confident we have a list of FireFly maintainers, because FireFly is a microservice architecture with lots of repositories. We're very, very clear on the process for how you become a maintainer on an individual repository (I'd say code owner on those repositories), but there's a little bit of a lack of clarity here right now on the call, and it's disrupting the call. I want to just get past this and move on with the call. Discord is probably the place that has the best list of project-wide FireFly maintainers that is maintained by Hyperledger, so Discord may be the appropriate place, for multiple reasons, to finish this.

Okay. So we just want to make sure we're going to decide who the current maintainers are. I think we're going to continue the conversation and then decide, because if we're going to resume the motion and the second, we need to make sure that's a complete process.

Yeah. So we're restarting the current motion, is that right?
Yes, I believe the proposal is to table the current motion until we have the full list. Apologies for the ten minutes we spent; let's get on to the interesting stuff on the call, and Jim will work with me to help work out the procedural side of what we think is a very straightforward motion.

So as part of the procedure, we're basically saying the current motion is discarded?

Okay, so we need to put a disposition on the current motion: it's discarded, and it's going to be restarted in Discord.

Okay. Apologies, this is why I'm not in politics.

All right, now moving on to the exciting topics. Enrique, I believe you have a demo that you wanted to show on the call.

Yeah, I'll go ahead and share my screen; hopefully you can see it. All right. So over the past few weeks we've added some enhancements for webhooks. I sat here a month ago on the call and we showed the mTLS configuration for webhooks, and now we've added two things. One is we've exposed a set of configurations that allows you to configure retries for the HTTP requests for webhooks at the transport layer, as part of your subscription. We've also exposed a set of HTTP configuration options for timeouts, etc., which I'll show in a second. And then the final thing we added was the ability to batch events. This is a PR that's currently still open (just a bit of documentation), but the code is already in. The idea is that, as part of one HTTP call to the webhook, you'll be able to batch a set of events as an array, instead of having one HTTP request for each event that gets produced from the FireFly node. With that, there's just a boolean that you set to true as part of your subscription, and there's also a timeout that lets you say: hey, if the batch hasn't been filled in the last X time (three, five, six seconds), then just send whatever you have, one to N events, regardless. So let me quickly demo that.
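The kind of per-subscription options being described might be sketched as follows. This is a rough Go sketch; the field names here are illustrative guesses, not the exact FireFly schema, so check the subscription configuration docs for the real names.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WebhookRetry and WebhookOptions model the options discussed on the call:
// a batching boolean, a batch timeout, a max batch size, and a retry policy.
// All field names are hypothetical stand-ins for the real FireFly schema.
type WebhookRetry struct {
	Enabled      bool   `json:"enabled"`
	Count        int    `json:"count"`        // how many times to retry the HTTP call
	InitialDelay string `json:"initialDelay"` // delay before the first retry
	MaximumDelay string `json:"maximumDelay"` // cap for the exponential backoff
}

type WebhookOptions struct {
	Batch        bool         `json:"batch"`        // send events as an array
	BatchTimeout string       `json:"batchTimeout"` // flush a partial batch after this long
	ReadAhead    int          `json:"readahead"`    // max events per HTTP call
	Retry        WebhookRetry `json:"retry"`
}

func buildOptions() WebhookOptions {
	return WebhookOptions{
		Batch:        true,
		BatchTimeout: "5s",
		ReadAhead:    5,
		Retry:        WebhookRetry{Enabled: true, Count: 5, InitialDelay: "5s", MaximumDelay: "1m"},
	}
}

func main() {
	b, _ := json.MarshalIndent(buildOptions(), "", "  ")
	fmt.Println(string(b))
}
```

The values mirror the ones used in the demo below: batching on, a five-second batch timeout, up to five events per call, and five retries.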
So I have a FireFly instance running locally. Let me just quickly close this. A FireFly instance running locally, and I've just created a subscription, so let me just get the subscriptions again. This is quite small, so let me know if you want me to zoom in a little bit. I've created a subscription called demo-call, with transport webhooks, and inside the options I've enabled batching (set to true), and I've set the batch timeout to five seconds. I've also enabled a set of HTTP connection options: connection timeout, expect-continue timeout. Don't worry, these are all documented as part of the configuration section for subscriptions, so you can go there for more detail. And then there's the thing that configures the batch size, which (I guess for legacy reasons) is the read ahead. The read ahead specifies the maximum number of events that we want to send in one HTTP call. So this would again be an array of events, for the events produced by FireFly. The other thing I've got in here is a retry section. So if, for example, there's a context deadline exceeded, or there's a timeout problem, or my service is not available, it will retry the HTTP call five times. And you can specify an initial delay here: so the first time it fails, it waits the initial delay of five seconds, then the max delay is a minute, and it will back off, exponentially increasing until the max delay, and it will try five times. And I just have a dummy Express server running locally on port 3000 that just prints the JSON payload it gets to the console. So this is the Node.js app; let me stop it for a second, so it's not running. I then have my VS Code terminal running one of the FireFly nodes, and the other FireFly nodes are running in Docker. Let me go ahead and just execute a function on a contract.
This is just a dummy contract that has an owner function that will return me the owner of the contract. So if I go and execute that and I go back to my logs, you should see here (let me zoom in a little more) a retry happening. So there are retries; you'll see there are two, because there's a confirmation event and a transaction event, two events being produced, and you can see a retry happening here. If I now go ahead and run that app, you'll straight away see the events, and because of the batch timeout and because of the retries, you might get only a single event per call, depending on how many requests you've done. And if I go ahead and query it multiple times (let's say loads of people are concurrently executing different contracts, firing loads of events), if I click on it a few times this will execute loads of events, and you should see a set of batches. I'm going to have to scroll up a little bit, because I've clicked it way too much, but hopefully you can see this. Let's take this array, for example: you'll see an event with sequence 313, which is quite large, another event here, 314 (let me zoom in a little bit), 314, 315, and so on until you reach 317, which makes five, and that closes the array, and then we keep going. And sometimes, depending on the timeout, you might see between one and five events if the batch hasn't filled. So yeah, that's the idea: you can now have more fine-grained control of your webhooks from an HTTP point of view, with retries and options, and also the ability to batch, so you're not firing thousands of requests, one for each event. That's about it. Any questions, any comments from anyone?
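The size-based half of that batching behaviour (grouping a backlog of events into arrays of at most read-ahead entries) can be sketched like this. The real connector also flushes a partial batch when the batch timeout expires, which is why the demo sometimes shows fewer than five events per call; that timer is omitted here for brevity.

```go
package main

import "fmt"

// Event stands in for a FireFly event; only the sequence number is modeled.
type Event struct{ Sequence int }

// chunkEvents splits a backlog of events into webhook batches of at most
// readAhead events each, in order.
func chunkEvents(events []Event, readAhead int) [][]Event {
	var batches [][]Event
	for len(events) > 0 {
		n := readAhead
		if len(events) < n {
			n = len(events)
		}
		batches = append(batches, events[:n])
		events = events[n:]
	}
	return batches
}

func main() {
	// Sequences 313..319 with readAhead=5, as in the demo: one full batch
	// of five (313..317) and then a partial batch of two (318, 319).
	var backlog []Event
	for seq := 313; seq <= 319; seq++ {
		backlog = append(backlog, Event{Sequence: seq})
	}
	for _, b := range chunkEvents(backlog, 5) {
		fmt.Printf("batch of %d: %v\n", len(b), b)
	}
}
```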
I think Peter really helped me on this feature, so if there's anything I missed, feel free to jump in. And any questions from anyone, happy to answer.

So the read ahead option is the maximum number of events that may end up in a batch?

Yes.

Great, that makes sense. Any other questions on the demo? All right, thank you Enrique, we appreciate it.

Thank you.

I believe the next item on the agenda is PostgreSQL for EVM Connect and performance enhancements. Peter, did you want to talk about that today?

Yes. So there's been a lot of work between the last call and this one on the internal core. It's all part of what I think is quite an exciting release, 1.3, when it comes out; there's a lot inside of there. Andrew's been talking on previous calls about changes around how to pin private data and broadcast data to any kind of smart contract invoke, and changes around the token connectors. There's then also another round, which I'm very involved in, on the internal guts of the engine, the most important bits that are running at scale in production for many projects today, taking a step forwards. The two things I want to talk about here today: one is related to higher data density, which has a performance side effect, and the other is really related to performance. The first is about the blockchain connector. So you've got FireFly.
It's got core, which is the scalable engine that has the API surface area, the event bus, and so on, and that one engine can talk to lots of different blockchains across lots of different namespaces. For each one of the blockchains you're talking to, you have a connector to that blockchain, and we have a connector-building toolkit, a GitHub repo called firefly-transaction-manager, which is the one I'm going to talk about most here. There are actually a few generations of connectors, but the latest generation of connectors are all built on this FireFly Transaction Manager toolkit. That toolkit, which, for example, the latest-generation Ethereum connector, EVMConnect, is based on, has to reliably guide transactions onto the blockchain and reliably detect events from the blockchain. For that reason, these connectors need to have state: they need to be stateful, and they need to track that data over time. Until just recently, FireFly core was storing all of its data in a pluggable database (Postgres is the most popular one used today). The pluggable database means that the runtime that's installed is stateless, and it gives a path towards active-active, which is another thing we're working on for 1.3, so you can have very efficient failover: if you're in a cloud environment, failover between availability zones, where you reconnect to your database and continue where you left off. The Ethereum connector and the FireFly Transaction Manager toolkit, however, were not using the database. They were using a very simple, efficient file-based store called LevelDB. It's very popular in the blockchain space, because a lot of the blockchain technologies themselves build on top of LevelDB, or on technologies like RocksDB that are very similar to LevelDB.
It's just a way to have a key-value store on the file system. It's got a couple of challenges, though. It is a file system, so if you want to make it highly available, you have to make the file system highly available, and that's very challenging for anyone doing a cloud-based deployment. The other problem is it doesn't support rich query. So if you want to do things like "please find the last 500 transactions from a week ago that are still waiting to be submitted to the chain, but are on their way through dealing with some complicated gas-management thing on some public chain", that's a very hard query to construct. You need a lot of code on top of a key-value store to implement each of those queries. So the piece of work that's happened is to take the transaction object, which is stored inside the connectors, inside the FireFly Transaction Manager base toolkit and all of the connectors that build on top of it (which includes the EVM connector), to take this data structure, and all of the other data structures like the list of event streams, the checkpoints for the event streams, and the listeners, and instead of only having the option of LevelDB on the local file system, to provide an alternative option of a database. At the moment the only pluggable option is Postgres, because that's the primary one we see people using. I'm going to focus in this session not on all of those other ancillary bits of data; I'm going to focus on the transaction object, because it's the most interesting one in the system. This is actually a LevelDB API that I'm calling against here. And, by the way, I work at Kaleido and we have an enterprise stack that sits on top of the open source, so if you see some things in the URL which aren't obvious from just the open source, it's because I'm getting to it through one of those enterprise stacks. But this
payload you're looking at in front of you is the open source payload from FireFly Transaction Manager, in LevelDB, today. And you'll see every transaction has an ID. Now, this is not the same as the transaction hash of the transaction on the blockchain, because the whole point of this connector is to take thousands of API calls coming in concurrently, accept them, and then drip-feed them onto a blockchain. Maybe the blockchain can only do a hundred transactions a second. Maybe it's a public blockchain and you're doing gas management and nonce management, and it's going to be drip-feeding them even slower than that onto a public chain. Maybe it's going to take minutes or hours for each one of these transactions to get submitted, and it's going to have to be resubmitted five or ten times with different gas prices to get it onto the blockchain. So every transaction gets assigned a unique ID. In FireFly parlance this is actually the namespace followed by the operation ID inside of FireFly; they get combined together at the blockchain layer, because one blockchain connector can be used for multiple FireFly namespaces, so they get munged together inside the ID of the transaction. And you've got the creation time and the update time, and then there's a lifecycle to this: over time it's going to progress towards done. It might end up as failed if you have a transaction that gets submitted onto the blockchain successfully but the blockchain rejects it for some reason, such as it being an invalid blockchain transaction; but the idea is that, at scale, everything is progressing towards succeeded. You've got the from details, the to details, the nonce that's been assigned; and these things actually change as the engine is updating this. It's assigning nonces based on all of the transactions that are coming in; they're being pushed through the system, and this data structure gets stored. So that's what this data structure is: it's got a
whole bunch of sections inside it. And in LevelDB, this whole big data structure, with everything inside it, was getting stored in one go as a document; it's just a key-value slot used as a document database. So one of the big changes that's happened with the move to having the option of Postgres, which is a relational database, is that all of the sub-structures inside here are now separately stored in separate database tables. For example, once it's made it to succeeded and it's done, we're going to have a receipt from the blockchain: it's going to have a transaction hash, and all the extra information that's part of the receipt. What that means is that it's much more efficient when you're working with a relational database, because you have small database tables with small numbers of columns, and you're only updating those smaller records each time. Splitting this out was quite a big piece of work; it ended up being something like a ten-thousand-line code change to the connector. The restructure happened inside the connector to separate these out, and it still works the same with LevelDB (it's still one big document in LevelDB), but internally the code has been restructured so that all of these can be separate, and for Postgres the receipt is separate. There are also then these history entries, and these are really, really useful. Imagine a transaction going on this journey: "I'd like this transaction to go in", I've made an API call, and that's happening at scale, off-chain, at Web2 speeds; "yes, I've got this transaction", and now I'm guiding it through to the blockchain. I might be making tens or hundreds of API calls to the blockchain to get this transaction on. So there are history records that get built up for the transaction. With LevelDB we had to sort of compact these and munge them
together to make sure the single document didn't grow too large. In Postgres, each one of these history records (and actually different policy engine implementations, the open source one and companies like Kaleido, do add extra features on top) is now a separate record: it can be searched, it can be filtered, inside the Postgres format. So that's a really big step forwards when you're diagnosing problems, because you're able to look back at this history. It's not limited now; there can be thousands of entries inside it. You can look at an individual transaction: exactly how many times have I submitted it, and what transaction hashes were allocated as I was increasing the gas price and resubmitting. So that's a really big benefit of the change. And then there's another data structure which isn't included in this, which is about confirmations. If you're working with public blockchains, then as you submit transactions, the first thing that happens is they get formed into a block, but that block may not be the final block where this transaction ends up, because blockchains can fork. So the way that you get finality in a chain like public mainnet Ethereum is different from when you're running a BFT chain with, you know, up to 15 validators doing a three-phase commit between those validators. In a BFT chain, once it's in a block it's final straight away, whereas in a public chain, finality is actually about risk over time.
How many blocks have happened after the block that the transaction went into, that confirm that it's really there? And that's configurable in the blockchain connectors built on the FFTM toolkit. With the LevelDB implementation, before the refactor, you only got to see the confirmations at the end. If you're setting this to a number like 50 or so, which is quite common for a public mainnet like Ethereum, it means you don't get very many updates as things are happening. With this change (and this did get back-ported to LevelDB as well), the confirmations, which were another sub-structure (what block is it in, what blocks came after it, the chain of blocks that happened after this), are also a separate sub-structure with a sub-API, so you can query them. So it's quite a big change that goes beyond just "it's Postgres now, so you get high-availability tech and can fail over". There are actually really big steps forwards. We've done a whole bunch of additional performance testing on the blockchain connector itself; it's able to quite easily outpace the blockchains themselves, so it's about efficiency and latency, and we've got a lot of improvements there. But, you know, TPS is really still limited by the blockchain; this is about efficiency. So a lot of that sort of re-engineering happened inside, and then there are some really nice quality-of-life features if you do make the move to Postgres: rich query on these sub-structures, and the ability to search for those previous transactions over time. Before I move on to the other performance enhancements inside FireFly (I know a lot of what I've given has really been just an overview of the way the connectors work), I wanted to point out where to go to learn a little bit more about this. The main PR for this, and there were some follow-ups, including a migration tool (can't forget the migration tool, very helpful): if you have a LevelDB store with a set of transactions inside
of it, and some of them are pending, or you want to keep that transaction history, there is a migration tool you can use to migrate all of that data to Postgres. So that's a useful one. The main PR to look at is this one, the persistence enhancements including Postgres. You can see this was actually quite a long-running branch; it had quite a lot of input along the journey, and it did stay out for a few weeks before it went into the main line. It's quite a chunky one. There are a few changes to the interface for plugging in policy engines, so this is going to be a 1.3 release when it finally turns into a release at the blockchain connector level. There's already a release at the FireFly Transaction Manager level, which is the toolkit on which you build connectors, so there's going to be a new release of EVMConnect very soon pulling this in, which is going to be a 1.3 release. That's ahead of where core is; core is still in main line, ahead of the first release candidate for 1.3. But because the connector toolkit has some breaking changes, we're going to go to 1.3 there. So have a look at this if you're interested in what those breaking changes are. And then just the last thing I'm going to mention, because it's a segue into the next item on the performance of core (and then I promise I'll pause for questions): in the codebase of FireFly Transaction Manager now, if you have a look inside the internal persistence layer, you'll see there's the persistence interface, and on that interface there's a split between the core implementation, which every persistence implementation, including LevelDB, must support, and the rich query interface, which only rich-query-capable implementations support. There's a switch as to whether you support that on your plugin at the persistence layer. So if you start up your connector with
persistence that supports rich query, we set up a different set of APIs: you'll have all the nice rich query capabilities that you're used to on core. You can, you know, do "this contains this string", or search by these five different fields in combination, sort ascending or descending; that full query syntax is only supported when you've got rich query. So that's the first thing to mention. And then the second thing: if we go into the Postgres branch, you'll see one of the biggest heavy-lifting pieces here is that in this codebase there's a queue and a pool of workers that do the writing of transactions. Now, this is one of the things that we learned on the performance journey with FireFly in the 1.0 timeframe: if you're working with a relational database, a really well-understood pattern is that if you do huge numbers of concurrent commits against the database, you thrash the database. It's much better to have a pool of workers that do batch-optimized inserts. So that practice is applied here. And the reason why that's a good segue is because that's also a lot of what the practice has been on the next couple of revs of performance on core, so we'll come back to that in a second. The last thing from me, before we get to questions on this item, is to say: at the level of the individual objects which are stored inside Postgres, for those of you who are active as maintainers in the FireFly codebase, I think you might be really interested in a piece of work that happened a couple of months ago now in firefly-common. FireFly common is the toolkit for building microservices in Go in the ecosystem of FireFly; we use this microservice framework a lot, and there are teams, including ones that I work with, that are using it for other microservices as well. It used to take a lot of boilerplate code to work with databases; that's changed with the Go 1.18-and-later generics support.
There's something called a CRUD base class, and only a small amount of code is needed to implement a complete create/retrieve/update/delete with filtering; that's the sum total of the code needed for a collection when you're building a new interface. So if you're building anything in the ecosystem and you're looking to build REST APIs on top of a database, look at this stuff, because it massively reduces the amount of boilerplate. Okay, lots of information from me. Before I move on to what should be a shorter topic, the performance enhancements on core: any questions on where we are with the blockchain connector, the 1.3 rebase, and Postgres?

Maybe, uh, it's not that related, but just curious whether distributed tracing was included or not?

That's a great question. So, yes, in that the core FireFly open source package contains the tools for distributed tracing. Nicko implemented those; I believe they're in 1.2.1, or are they only in main line?

They've been in several releases now.

Super. So if you're using the 1.2 release, what you will find is that the first microservice that you come into in the FireFly ecosystem will allocate, to that REST API call, a unique request ID. That request ID will be passed, using a particular header, to all of the other microservices, including the blockchain connector, and included in all of the logs of that blockchain connector. And even more than that, if you want, the external REST API of FireFly will allow you to pass that header in. So if you've got an application stack sitting on top of FireFly, you can generate an API request ID in your application, pass that into FireFly, and that ID will go all the way through, whatever the REST API chain is, until that one REST API request is completed. So that's the foundation of distributed tracing.
How you integrate the FireFly deployment you're running with your logging infrastructure to maintain that (I obviously know how we do it in Kaleido for our customers, and how we do it for our SaaS and our enterprise offerings, including the software offering), the open source community isn't opinionated about. We just provide that request ID all the way through. There's also another field. So, for instance, if there's another piece of software that's setting an additional header that you want to pass through to all of the other microservices, there is a configuration option to enable that functionality as well, and to list a set of custom headers that you want passed through to all the other microservices with a request. And do you want to just mention the idempotency feature as well? Sorry, I'm mentioning a few things at once. So we've got request IDs for an individual REST API call, but sometimes your business transaction actually runs all the way to the end of getting it onto the blockchain, which is an asynchronous activity that has to be traced through. And you might have reliable retries that you need to do, because it's a REST API, and REST APIs can break: when I'm submitting I get a timeout or whatever, and I don't know how far it got. So I want to remind everybody about a feature of FireFly, which is idempotency keys, and just point out that there's a new doc section at head which really talks through, end to end, how idempotency keys work. They're not the same as the request ID I mentioned earlier, which applies to one REST API call; they span the whole of the business transaction, getting that all the way onto the blockchain.

Yeah, got it. Thank you.
That was helpful. Still, there is one additional question on that topic. Basically, when you're batching, for example, a set of transactions into a single invocation to reduce the number of calls between services: if distributed tracing is only based on that REST request ID, is there any way to achieve tracing of multiple requests if they're batched?

Yes. So the feature that you're talking about, the batching, is specific to using another feature of FireFly, which is the ability to do off-chain data transfer attached to blockchain transactions. It's just one feature of FireFly, one point in that very long list. That batching capability is specific to doing off-chain data transfer across the pipes of FireFly, and in that case you can have a pinning transaction that's actually pinning a hundred off-chain transfers, very efficiently. In that case the unique ID of each of your payloads is the message UUID, and the message UUID is the one that should be tracked all the way through. It's probably a detailed conversation to go into exactly the detail there, but when I'm diagnosing an issue of the form "where's my message?", the message ID is usually the one that I use to trace, because the great thing is that it will go all the way through to the other side as well. So if you've got two parties involved, with two copies of FireFly, one on the sending side and one on the receiving side, or maybe there are five receivers in the privacy group, you'll see the same UUID for that message appearing in the logs on all of those parties.

Got it. Okay, thank you very much.

It's a great question. Any more? We did start with blockchain connectors, so back to them: any more on the blockchain connectors, or the Postgres
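The batching described above, where one on-chain pinning transaction covers many off-chain messages and each message's own UUID remains the trace key, can be sketched like this. This is a simplified illustration of the idea only, not FireFly's actual pinning scheme or hash construction.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// pinBatch sketches the concept of batch pinning: many off-chain messages,
// each identified by its own message UUID, are committed with a single
// on-chain transaction carrying one hash over the whole ordered batch. The
// message UUID stays the stable trace key in every party's logs; the pin
// just proves the batch on chain.
func pinBatch(messageIDs []string) string {
	h := sha256.New()
	for _, id := range messageIDs {
		h.Write([]byte(id))
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	ids := []string{"uuid-1", "uuid-2", "uuid-3"}
	fmt.Println("one on-chain pin for", len(ids), "off-chain messages:", pinBatch(ids))
}
```

Because the pin is computed over the ordered batch, all parties in the privacy group can verify they received the same messages in the same order, while diagnosis still happens per message UUID.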
migration option that's coming in 1.3?

Yeah, I have a question. Hi Peter. After rolling out this feature, will we have this as an option? I mean, can we configure in the connector files whether we want to use LevelDB or Postgres?

Yes, that is available in mainline today. If you're interested in experimenting with the feature, it's in mainline. There isn't yet a developer flag on the developer CLI to say "for my development environment on my laptop, I'd like to stand it up using the FireFly CLI with Postgres for the connector". That's an option that we would like to get in within the 1.3 releases, to be able to create a development environment with Postgres. It's also not yet covered in the Helm charts, which are the basis on which, if you're not using a company like Kaleido with an enterprise software offering around this, you're building a deployment yourself on the open source Kubernetes infrastructure: there's no example in there of how you set the configuration. How you configure it is there in the documentation, which is also generated from the config. So you can do it today, but the accelerators to make it really easy are not there yet. We do expect those to be closed out as part of the 1.3 release.

Mm-hmm. Thank you. And you also mentioned the breaking changes. After rolling out this feature, let's imagine that, for example, I implement another connector that uses the existing transaction manager.
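As a rough illustration of the kind of connector configuration being discussed, a persistence section along these lines would select Postgres instead of LevelDB. The key names and URL below are assumptions for illustration, not a verified reference; check the generated configuration docs for the connector version you're running.

```yaml
# Illustrative only: exact key names may differ between connector versions.
persistence:
  type: postgres        # previously: leveldb
  postgres:
    url: postgres://user:pass@postgres:5432/evmconnect?sslmode=disable
  # The prior default, kept here for comparison:
  # leveldb:
  #   path: /data/leveldb
```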
Yeah. Should I implement the new APIs that you showed us, or will it be backward compatible?

So, if you've built what's called a policy engine plugin for the FireFly Transaction Manager toolkit: the policy engine plugin has an interface to the rest of the toolkit that includes the transaction history updating. That API hasn't changed really significantly, but there were a couple of changes that we just couldn't avoid, so there will be a little bit of code change needed in your policy engine if you've implemented one. If you haven't implemented a policy engine, and you've just implemented the FireFly Connector API, connecting to the backend blockchain and using the simple policy engine, then the simple policy engine that comes in the open source has been updated to the new APIs. You'll get the new simple policy engine, which has automatically been updated as part of the work here. It's not a big change; it's just a few subtle changes, really quite small ones, to what's called the toolkit API that gets passed into the policy engine. I don't think it will be a really big piece of work, but it is required in order to recompile with the 1.3 code base. You won't be able to compile your existing policy engine against the new toolkit without making those few code changes.

Sounds good. Thank you.

A quick time check... no, great questions though, and I really appreciate the conversational aspect of it. So let me spend maybe just one or two minutes talking about a couple of the big changes that have gone into the guts of FireFly Core. They're of a very similar ilk to the things we did in our own 1.1 performance optimization runs.
So there's nothing revolutionary here; these were well understood bits of work that hadn't happened yet. We wanted to get them ticked off, and to get them ticked off ahead of doing the active-active heavy lifting work, which has sort of started in the 1.3 timeline but is probably the biggest piece of the 1.3 release that's not closed out yet.

The first is for when you're not using the messaging capabilities at all, or you're using messaging with custom smart contracts, and you're just trying to submit transactions at volume. You know, you're trying to get your Hyperledger Fabric chain to take 500 transactions a second, or you're trying to reliably get 100 TPS out of your Ethereum blockchain, and you care about invoking each individual smart contract in bulk. You're not using any of the other features of FireFly; you're just generating REST API calls, signing and submitting at scale. We found that we were starting to hit a couple of bottlenecks. One is on the way in: the way that we were inserting each transaction into the database as it comes in, with the idempotency check. That is what PR 1354 was, a reasonably chunky piece of internal work that doesn't change any function, just an optimization exercise in how we're using the database: a new pool of writers for writing transactions and operations as part of those invokes, and we found that made a really big step change. And if you are dealing with something which is more of a DLT layer than one of these blockchains, something more like integrating to Corda or whatever, and you're looking to pump really high volumes in, this is a really good one for you. And then the other one was doing exactly the same on the way back, when we're taking
events in, and this actually does affect Ethereum, because you can listen to lots of events from Ethereum. FireFly has to deliver those events to you in sequence, which means that, unlike a lot of things you can solve with parallel scale, for events you have to be very careful about ordering. So we actually did a lot of code optimization to insert into the database in batches. The connectors are very efficient at receiving batches from the blockchain, receiving batches of events and doing checkpoint recovery of those events from the blockchain, and then passing those batches to core as a batch, maybe through an intermediary connector in the middle, maybe not. And then we made sure that core was processing them as a batch; that batch processing had been less efficient than it should have been. So this actually made quite a big difference. Even with Ethereum, if you're dealing with multiple event streams, you can quite quickly get to a really high volume of events that you want to index from a blockchain. So I think this is going to be really helpful for everyone when it's there in the mainstream for everybody with 1.3. And that really is it from me; sorry for talking for so long today.

Thanks Peter, I appreciate it. I know there was a lot to cover; like I said, a little over 10,000 lines of code to talk about there.
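The ordered, checkpointed batch delivery described here can be sketched as follows: each batch of events is written in one database operation, strictly in sequence, and the checkpoint advances only after the write succeeds, so a restart resumes from the last durable position without skipping or reordering events. Names are illustrative, not FireFly's actual internals.

```go
package main

import "fmt"

// event models a blockchain event with its strictly ordered position.
type event struct {
	Sequence int
	Data     string
}

// deliverBatches writes events in sequence-ordered batches, one database
// operation per batch, and only advances the checkpoint once the batch is
// durably written. If a write fails, the checkpoint is untouched and the
// batch is simply redelivered after restart.
func deliverBatches(events []event, batchSize int, write func([]event) error, checkpoint func(seq int)) error {
	for start := 0; start < len(events); start += batchSize {
		end := start + batchSize
		if end > len(events) {
			end = len(events)
		}
		batch := events[start:end]
		if err := write(batch); err != nil {
			return err
		}
		checkpoint(batch[len(batch)-1].Sequence)
	}
	return nil
}

func main() {
	evs := make([]event, 10)
	for i := range evs {
		evs[i] = event{Sequence: i, Data: fmt.Sprintf("ev-%d", i)}
	}
	deliverBatches(evs, 4, func(b []event) error {
		fmt.Printf("one insert for %d events\n", len(b))
		return nil
	}, func(seq int) {
		fmt.Printf("checkpoint=%d\n", seq)
	})
}
```

Because ordering must be preserved, the win comes from widening each write rather than parallelising across writers, which is exactly why batching matters so much on this path.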
So we've got just a couple minutes left. I'll open it up at this point for any other questions or topics that people may want to chat about related to FireFly. The floor is open.

Actually, I have one problem which I am currently resolving, so maybe someone can help with that. Basically, we stood up a staging environment with chaincode-as-a-service, so that's a Fabric chain there, and we are using FabConnect. And basically we are using the FireFly message bus as the event bus for our microservices. The problem is that events are transmitted really slowly through that message bus. It takes maybe half a minute, or a minute even, for an event to be consumed by a listener. So maybe you can give some direction on how best to troubleshoot that problem?

I think I saw a message on Discord that sounds very similar to this.

That's actually the message that I also sent there.

Okay, cool. Yeah, I'd be happy to look at this on Discord. From a quick glance at the logs that you posted, it looks like the Fabric network itself is just very slow, but I'll be happy to dig into it.
Maybe offline then. Let me just mention, for Fabric: a few of us have actually been pushing hard on Fabric networks recently. I can't name all the members of the community involved, but there was a really interesting collaborative exercise going on that happened to be related to Fabric, and we did find an important bug in core, to do with cases where there were hundreds or thousands of transactions inside of one block in Fabric. There was a clash of keys that was happening. So if you're doing testing with Fabric, I think it's very important that you have that fix. I don't believe that's in a 1.2.x release stream, even the 1.2.1 that we just released. I don't think it is, so it might be quite important. Maybe either Nicko or Jim can help join the dots on that issue.

Yeah, maybe you're mentioning a unique constraint failing, on this protocol ID, listener ID, and something else. Actually, that's one I've met previously, and I just worked around it for the time being by removing that unique constraint. Not sure if that's the same thing.

Hey, this is Jim. Sorry for missing your earlier ping on Discord. I'll take a look and see if there are other things at play here. Our experience is that when you push the orderer really hard, events can be delayed, but it's usually by a couple of seconds when the orderer is very busy cutting blocks. Seeing them delayed for half a minute, that's quite strange. We'll help you take a look.

Okay, thank you very much. Actually, we are not putting much load on it. We just initially stood it up, and I am testing with something like 10 events being pushed at most.

Okay, thanks. We'll definitely take a look at that and get back to you on Discord.

Yeah, thank you. I appreciate it. Yep.

Alrighty, we are at the top of the hour. Thank you all for coming out today.
Thank you for the great conversation, great questions, and for the content that was shown here today. If anybody has further questions or other things that you want to chat about, please feel free to jump into the Hyperledger Discord in the FireFly channel, and we'd be happy to catch up with you there. Until next time: thanks everybody, I hope you have a great rest of your day. Thank you. Bye all.