Right. Thank you. Hello, everyone. This is the weekly TSC call. It's a public call; everybody is welcome to join and participate. However, there are two pieces of information you need to be aware of and take into account. The first one is currently displayed if you're online: the antitrust policy notice. Everybody should be aware of what it says and comply with it. The other piece is the code of conduct, which essentially says you must behave like a decent human being.

We have a fairly short agenda today, so let's get started. There were officially two quarterly reports due, but one was already submitted last week. The last one came in and is the Hyperledger Grid report. I want to thank Andrea for posting it; she sent an email to the list as well to let everybody know. I didn't see any questions or comments come in on the wiki page, but I don't know if there are any. This is your chance to ask if you have anything you want to bring up. I guess.

Can I say something? I'm Duncan Johnston, an observer, not a TSC member.

Sure.

If you just go back to the report, it mentions something called Splinter. Splinter is another open source project; if you go up, there's an actual link to it. Just to make you aware, if you're not, that this is something that sits outwith, as we say in Scotland, the Hyperledger Foundation. It's a separate Apache 2.0 open source project, and we're not clear at this point where it's going to land. So it's potentially creating a dependency on something that's not governed by anybody or anything other than Cargill.

Yeah, but the license at least is Apache, so it's compatible. It shouldn't create any problem from that point of view.

Yeah, but we're all aware of the point of having the Hyperledger Foundation in the first place. It's to have things properly governed, meaning safe and out of harm's way, within the Hyperledger Foundation or something like it. That's all. Okay.
I have to admit I'm not very familiar with Splinter. It did get my attention when I was going through the report, and I suspect it got others' attention too. I don't know how much we want to get into the details now, but it sounded to me like this was a good exercise in basing Grid on something other than just Sawtooth. Maybe I'm wrong, I don't know.

Yeah, I mean, I think this is fine, right? A lot of people work to build their stuff to run on Corda as well as Fabric or something like that.

Yeah, but there's a difference between building it to work on Corda versus including components from outside Hyperledger, right?

This is James Barry, and I'm not a TSC member, but there are multiple projects now depending on Splinter. Sawtooth is one, Grid is one, and I believe Transact is one, with the goal of the group at Bitwise that's doing this being that the networking pieces of all three end up in Splinter. So I don't know what that does from a security standpoint, depending on another project to pull in.

I don't read this as a dependency on Splinter. This looks like Grid runs on top of Splinter, in the way that Hart alluded to people making software that runs on top of Corda or another platform.

I think you're right in this particular example, Dan, but obviously there are some conversations happening elsewhere around Splinter being incorporated, or Sawtooth 2.0, whatever that is, being a Splinter service, which is a whole different ballgame. So it's worth flagging now. You're the TSC; you can figure out what to do with it as and when, or if, it emerges as a possible issue.

Alright, well, thanks, Duncan, for bringing that to our attention. I didn't realize there was a strong dependency. It was more along the lines of what Dan just described, but if Sawtooth does something different, maybe that's a concern. I don't know. We'll have to follow up.
Yeah, it's not definite at this point, but it's been a subject of discussion on the Rocket.Chat for a little while now.

Alright, well, thank you again for bringing that up. This is the kind of input we really appreciate, and whether you're a TSC member or not doesn't diminish your input when it's something like this, so please feel welcome to contribute. Thank you.

Yeah, maybe we tie this into the long-term discussions, because I'm sure there'll be other cases down the road.

Sorry, what did you say, Mark?

We tie this into the long-term strategy discussions coming up, because we'll have issues like this down the road, I'm sure. You know, what guidelines would we have for bringing in things from outside?

Yeah, I don't think we understand the ramifications of this one in particular yet, but I take it from you that this is probably something we'll have to look into more closely, and it could indeed come up in the broader discussion on framing. Alright, with that being said, are there any other comments?

See, it is useful to have these reports and to go through them. I just learned something quite important.

Alright, if not, I think we can move on. Actually, let me insert something first. For those of you who joined early and overheard some discussion between Dave and Chris: maybe we can resume that very quickly, Dave. You were talking about the security reporting policy. If you could summarize where we are and what's going on, that would be welcome.

Yeah, yeah. So quickly: we have a security bug reporting policy called responsible disclosure; the way we handle reports is responsible disclosure, which is an internet standard for the right way to do things. As a result, we have to have facilities for taking in security bug reports confidentially and keeping them confidential as we work with the teams to resolve them.
We also tend to include the person who reported the issue in the discussion, as well as any engineers who would be directly involved in fixing it. And then at the end, when we judge its severity, we decide whether we do a CVE, which is a formal disclosure of the security bug, or whether we just do notices in the release notes.

Chris brought up the idea that GitHub has matured greatly over the last year in how it handles issues, including a new feature around security policies, which is actually really neat in that it allows us to create private processes, it allows us to pull in individuals who aren't normally part of the security team but are essential to fixing a security bug, and it allows us to collaborate more easily on our response to any particular security issue.

My initial response to Chris was that I was worried security bugs would come in through the public GitHub issues, and that the maintainers and I would have to sit on those and watch them constantly to make sure no security issue lands there, and if one does, move it over immediately. But then I thought that was silly. We already have security policy files, SECURITY.md, in all our repos, and we can adjust those to make it very clear that if you have a bug you think is security related, you just report it through the security mailing list and we'll take it from there. We already accept that as a good enough solution for security bug reporting.

So I'm just announcing that I have no reservations about moving over to GitHub to handle this. In many ways, we've had to make hacks to Jira to support the way we want to handle security bugs, but GitHub does this natively, the way we want it to work. So all my reservations are gone, and I'll leave it up to the TSC to discuss. If you have any questions, or would like to see us move to GitHub, I'm all ears.
I don't have any specific policy roadblocks anymore.

All right. Thank you, Dave. I don't really want us to dive into the discussion now, but I appreciate the update. I think it would be best if we could update the related issue we have in the decision log, which is listed down there in the agenda backlog, with an actual proposal. If you and Chris can come up with a proposal, we can then put it before the TSC for discussion if necessary, and then a decision.

Okay, I will do that. Thanks for the direction. Thank you.

Yeah, I just posted in the chat, Arnaud, the link to the new capabilities GitHub has to manage a security advisory: creating branches, all the things Dave was talking about are there. And again, I'm happy to work with Dave on refining the issue and putting the formal proposal together. But basically, we're saying keep things the same in terms of the front end, how you report a security bug, but change how the projects deal with it, such that they can actually issue a formal security advisory, potentially as a CVE. I think we have a better process if we follow what GitHub has given us. That's the essence of what we suggest, or rather what I was suggesting with Dave.

Yeah, and I'm a plus one on that.

And more broadly, I would ask: does this mean we transition from Jira to GitHub Issues? Is this part of that? But that's for later discussion.

Yeah, it's a good question, and we should probably have that as a separate discussion. This is really more about the fact that even if we create a private security issue in Jira, we still don't have a way of creating a private branch, aside from doing it secretly in somebody's personal GitHub, so that we can collaborate on a fix, do some testing, and so forth with nobody seeing it. This new capability they have enables you to do that.
And you don't have to pay the extra fee: normally, if you have a private repo, you have to pay GitHub, but this still allows you to keep a non-private repo. So you keep a free organization and yet have the ability to create a private branch to deal with security issues. That, I think, is probably the most important thing. And then actually being able to issue a formal security advisory that can flow into the whole CVE process would, I think, be a positive benefit for everything we do.

Agreed. This is something the Hyperledger org was opted into the beta for; when it started, we tried it out. I was impressed with the entire way it worked. The only difficulty was the front-end piece we just discussed. So it's already enabled across all of Hyperledger; the projects just need to go in and turn it on. It's already there. We just need to use it.

Yep. And all that's left, I think, is a training piece, you know, an intro video on how to spot a security bug and how to properly report it.

All right. Sounds like a good direction to follow. So again, let's try to formalize the proposal so we can put it before the TSC, and hopefully we can have a quick decision on this.

Let's move on. Now we're resuming the agenda as it was first put forward. We're back to the long-term agenda framing issue that we started discussing in full last week. It was suggested then that we hear from James Barry and William Katzak, who gave a presentation at the Global Forum two weeks ago that touched on some of this general topic. The thought was that it would be useful to go over it with the TSC as background information for us to chew on. I saw both of you, James and William, you're on?

We are.
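To make the front end of the reporting flow concrete: a SECURITY.md of the kind Dave described, pointing reporters away from public issues and at the confidential channel, could be as small as the sketch below. This is an illustrative draft only; the mailing-list address is a placeholder, not Hyperledger's actual address.

```markdown
# Security Policy

Please do **not** report suspected security bugs through public GitHub issues.

Instead, email the security mailing list (security@example.org). Reports are
kept confidential while the security team, the reporter, and the relevant
maintainers collaborate on a fix. Once resolved, the team decides whether to
publish a formal advisory (CVE) or a note in the release notes.
```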
So, I've been talking about this particular topic for a while, and I want to make this not blockchain-specific or single-chain-specific, but I've been talking about the need to really have a lot of interoperable components. That's what we talked about, and Dan thought it'd be good for us to show it. So are you going to click through this, or should I?

I can give you remote control if you wish, or I can click.

Why don't I take remote control and I'll just run it online. I sent you a remote control request. Okay, there you go.

So, we've been talking about this for a long time. Just for a little background on Taekion, so you know who we are: we're a four-person shop. We have grants from the National Science Foundation, the Department of Energy, and the US Air Force. We've been building blockchain applications built around security. We haven't really come out with exactly what we're doing yet; we've been using Sawtooth as a foundation for it, and we don't have commercial products in the market. We've made several contributions to Sawtooth. We started off on several other blockchains, including Fabric, and ended up on Sawtooth roughly a year ago, and over the course of the year we've been looking at how we want to take it forward as pieces.

Basically, we're going to talk very quickly today about whether blockchains are monolithic programs or pieces that you can adapt and put together to fit the workload; what a Hyperledger stack would look like today and tomorrow (we did this specifically for Hyperledger); how Taekion in particular is using different projects to build our application; and what's missing in Hyperledger from our perspective.
A couple of things have come out of this. The term blockchain is sometimes meaningless because of the way a lot of the distributed ledgers are going; is it really just a design pattern? The blockchains that originally came out were very monolithic and needed to be blown up, and in our opinion really need to be broken into parts that fit the workload you're trying to achieve. When I look at it, standard software is an interface, logic, a data interface, and, here, a blockchain database. These need to be decomposed into microservices, or very much microservices-like components. It's our opinion that blockchains will be assembled from best-of-breed parts.

I go way back; I've been doing this for almost 40 years. I look back on where Netscape, AOL, and Prodigy were: you had to be on one of those services, you had to use their email, and they were very monolithic. The internet came along with HTTP, pieces started breaking apart, and you ended up with Apache. Just for disclosure, I was at IBM at the time, and they started a project called WebSphere. It was the first project I ran there, and I also ran the open source pieces, where I met Brian while working on the Apache 2 license, among other things, with an IBM lawyer. If you look at Apache from 1997 versus Apache now, it's a ton of different pieces that you put together to build an application.

It's our feeling that you may be able to configure by workload. On the far right, you're running a public network with cryptocurrency coins: high TPS, irrevocable transactions. On the far left corner, a private company network: stringent consensus but very low TPS, and a variety in between.
So you can switch consensus; obviously Fabric and Sawtooth both allow you to switch consensus depending on your needs. But they use different constructs to do it, and you can't reuse those mechanisms across chains. One of the things we're going to talk through is setting this by workload.

I did a "what would a Hyperledger stack look like" view. You've got core blockchains; you've got pieces starting to break out, with Transact as a smart contract interface; you've got pluggable consensus breaking out, pluggable encryption under Ursa, the off-chain TEE compute, and so on. But are they really something you can take from, say, Fabric to Sawtooth or to Besu for, say, pluggable consensus? Even if I go outside Hyperledger to Ethereum, or Enterprise Ethereum, or whatever it's called these days: it has pluggable consensus, Fabric has pluggable consensus, Sawtooth has pluggable consensus, but they all use different constructs and ways to get there. We think that long term, and concentrating just on consensus for now, you need some sort of construct that allows you to take an independent consensus mechanism and plug it into any chain. We see this as something that needs to be built out further as components like Transact and Ursa emerge. I don't have Grid on there; Grid can be built with components, as opposed to built on top of a particular chain from the beginning.

So we started looking at what components are needed, taking the Hyperledger view of it and asking what else could be built out, starting with block storage. Is there a reason everybody uses a different way to do block storage within a blockchain network? And a connection management layer; as mentioned, that seems to be on the roadmap for Sawtooth vis-a-vis Splinter, which appears to be where it's being developed today.
Is there an operational dashboard that works across applications? If you think of the way you do UI today, you don't have a different UI for each application; you have common constructs that let you overlay a UI. Data exporting and importing are really something that should be standardized; you're starting that with, say, Quilt with the Interledger translation, but is it really pulling data exporting out as a component? So these are some ideas we had that you could take to the next level: make it so that Fabric, Sawtooth, and all the other core blockchains could call these as services within the Hyperledger family, and people outside Hyperledger could use them too. And with that I'm going to turn it over to Bill and let him talk through how we're using these available libraries and why we think it's important. Bill?

Thanks, James, and thanks to everyone on the TSC for having us here. I met some of you at the Global Forum; I appreciate everyone who came to our talk and spoke to us about this. I want to start by explaining a little of how we use Sawtooth and the projects that are already around. Sawtooth forms the base of what we're calling our Taekion core platform. We're specifically not saying very much about it yet because of some of our sponsorship, but at the core of it we're using Sawtooth for all of our core apps. As you'd expect, each core app has its own transaction processor and transaction family.

A nice Sawtooth feature we're able to leverage quite a bit is transaction batches. These allow state to be manipulated across families atomically. So even if one transaction family has to interact with another because of some interplay in business logic, we can guarantee that these things are atomic. This Sawtooth feature is really nice.
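The all-or-nothing batch semantics described here can be illustrated with a toy model: a batch of transactions is staged against a scratch copy of state and committed only if every transaction is valid. This is a sketch of the semantics only, not Sawtooth's actual API; the `Txn` type and the validity rule are invented for illustration.

```rust
use std::collections::HashMap;

/// Toy transaction: set `key` to `value`. A transaction with an empty
/// value is treated as invalid. (Illustrative only; real Sawtooth
/// transactions carry signed payloads routed to a transaction processor.)
#[derive(Clone)]
pub struct Txn {
    pub key: String,
    pub value: String,
}

/// Apply a batch atomically: either every transaction in the batch is
/// applied to `state`, or none of them are. Returns true on commit.
pub fn apply_batch(state: &mut HashMap<String, String>, batch: &[Txn]) -> bool {
    // Stage all changes against a scratch copy first.
    let mut scratch = state.clone();
    for txn in batch {
        if txn.value.is_empty() {
            return false; // one invalid txn rejects the whole batch
        }
        scratch.insert(txn.key.clone(), txn.value.clone());
    }
    *state = scratch; // commit everything at once
    true
}
```

Because the scratch copy is discarded on failure, transactions from different "families" can safely depend on each other within one batch: partial effects never land.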
We find the signing and permissioning facility very powerful. It gives us per-entity access control with cryptographic guarantees, and it just works well as it is. The pluggable consensus is also very nice. We found that being able to use different consensus for different use cases is a really powerful idea, and in fact we're doing some work on a different type of consensus right now under a government grant, but that's a subject for another time. Go ahead, next slide, James.

So, not knocking anything: Sawtooth is excellent, and we want to talk about how we could possibly make it better.

The next part that we use is Transact, the Rust transaction API. As part of using it, we've been working on Go ports, especially of the data structures and the interface. Although it's not fully integrated into Sawtooth, it's close enough that quite a bit of the submission and management code can be shared, and we're using it across our apps whenever we can, especially with our new Go ports. The fact that Transact has been factored out is letting us keep clean, standardized code for transaction handling.

Just as an aside, and as a shameless plug for something we've been doing: we're working on how to integrate our application client SDK into Transact. We have a prototype SDK for building client libraries. The idea is that there's a lot of boilerplate code you have to write for a Sawtooth or Transact client: the transaction signing, the submission, the management of the payload design; it ends up being very, very boilerplate.
So we have an SDK that gives you an API builder interface: you use the SDK to build your client, and then your application can interact with the blockchain using native function calls without ever dealing with signing, transaction submission, or following up to make sure transactions commit. We have a working prototype of that for Sawtooth on our GitHub, and it's the core of what we're using; we're eating our own dog food in that we use it in all of our clients for all of our different components. Okay, go to the next one.

We're using Ursa. We really like that Ursa is not doing anything new or revolutionary; it's just giving a really nice, consistent, reliable interface. One of the reasons we chose Ursa was that some of our clients, like the Department of Defense, have very specific and sometimes obscure cryptography requirements. So we found a real need for very carefully abstracting all crypto at a clearly delineated point, so that we can separate implementations and swap them out if necessary. What we've been doing is using Ursa and the Ursa interface as the line at which to do this abstraction, rather than inventing our own. Go to the next slide.

So that's a little bit of a discussion of what we've been using in the Sawtooth stack, and we really like it; otherwise we wouldn't be here. But that's not to say we don't think there's work to be done. We're going to talk now, in a little more detail, about some of the things James alluded to: things we think would be really great in the future to realize this vision of a fully pluggable, reorganizable stack. The first thing is really pluggable consensus.
Sawtooth and Fabric both have pluggable consensus, but they're not compatible with each other. In some cases the same algorithm has been implemented for both of them, and we still can't use one implementation with the other. I understand there are technical hurdles to be overcome and a common interface needs to be agreed upon, but we believe Hyperledger needs a well-defined standard and interface for pluggable consensus, to as great an extent as possible. The reason we're raising this is that I think everybody here has at least an idea that implementing a robust and correct version of a consensus algorithm is extremely hard. There's a lot of testing, a lot of corner cases, a lot of validation to be done. Today this has to be redone for every platform you want to port the algorithm to, and then you have to maintain different versions for every platform when you find security issues or functionality bugs. Imagine instead that we could use the same well-tested and debugged code across all of the platforms; then we would only have to validate the interface. I think this would be a huge leap forward if it could be realized. James, go to the next slide.

The second thing, and I think this is actually starting to be done, so maybe we did make the right call in proposing it, is that it would be nice to have a common networking and connection management library or component. What I mean by that is, for example, Sawtooth uses ZeroMQ sockets: an excellent library, very nicely done, but still too low level. What we think should exist is a component that provides the common communication schemes or patterns: peering and discovery, and a tested and validated gossip protocol (reliable and gossip are two different words I shouldn't mix).
And a reliable set of connection management routines. A lot of these systems really do the same thing: they pick a communication pattern, they pick a technique for distributing data, and then they implement that pattern. It would be great if we had a library that implemented all of these common patterns. It would be extensible, so that if you need to extend or modify something for your specific application you can, but the core would be tested, integrated, and pretty much ready to go for whichever type of system you want to build. Go ahead, James.

The next thing is something we've just started working on internally; we're not sure if our internal version will be of interest to the community. Currently, part of the assumption with blockchain is that there's a replica of every block on every node. This has been one of the key or central assumptions of a distributed ledger up to this point. What we're finding as we talk to clients and to people who want to use blockchains is that not all use cases really need this. So we believe that block storage should be an abstraction. That is, you can have validation and consensus on a larger scale, you can have things distributed over regions or around the world, you can have a lot of actors in terms of validation and consensus handling, but decouple the block storage from that, so that not every actor at the validation and consensus level needs a copy of every block. So suppose this were abstracted correctly, and when I say correctly, I really mean elegantly abstracted.
This would allow custom storage and replication strategies. For example, if you have a system where for business reasons you want 12 participants in consensus, but you really only need three copies of every block, you could build something with a custom storage and replication strategy. It would also open the door to slightly wilder things, like using S3 to hold your blocks, or maybe something fancy like an erasure-coded system. The point is that it would be nice to have this as a layer separated, as much as possible, from validation and consensus. It would both make things more understandable across different blockchains and systems, and allow this flexibility in terms of client or use case needs.

This last one, it seems, has been getting done anyway since we presented this; the world has been moving. Key storage. There are lots of solutions to this, but there has been no Hyperledger standard. At the Global Forum I became aware that one is in the works; I forget which project it's supposed to be going into. This was really important to solve for our Department of Defense clients, and we had to figure something out, so we used something off the shelf and integrated it. We think this could be standalone or integrated with Ursa, but again, I forget the name of the project proposing a key storage standard right now, so this may already be a moot point.

Data exporting. There are a few apps, in particular the software supply chain example and a few others, that find it useful to keep an image, a synchronization of the current blockchain state, in a DBMS or something else. This facilitates complex queries and feeding that data to other systems.
To provide this, Sawtooth has the state delta API, which is very powerful and works well over either ZeroMQ or REST. But something like this, we think, ought to be standardized. Obviously there are different data models in different chains and systems, but the API itself, being core infrastructure, should be standardized, so that apps which need to receive state from a blockchain don't have to reinvent the wheel and implement a different API every time they want to connect to a different type of system. James, go ahead.

Yeah, so why not the crypto community approach, where you have layer two, layer three, sidechains, state channels, and so on? The problem that emerges is that these are different companies, and you have different people controlling the data from point to point. I see this as very similar to what happened with web services, when you were trying to get different companies to own the different web services and connect them in, with a standard that allowed you to search, find, and connect. There are different standards for security, and David would probably be on top of this, and different ways people handle data as it's transferred from company to company. There are issues when you talk about a large entity like the Department of Defense, or an energy company like Con Ed, trying to use a lot of this: they want it all to come from a single organization.

If you really look at it, what people are doing is going to Apache and building out projects from there. And I put this slide in a little differently than I had it at the Global Forum: I see open source really dying within the blockchain community. On the upper left, all those red circles are projects that no longer contribute to open source, out of the top 100 blockchains by market cap.
Other than Ethereum and Hyperledger, I don't see any other chain really doing anything other than a monolithic approach: use my stuff, and only my stuff. Even among those with a lot of contributions, I dare people to try to build about half of them. I've been on multiple panels about this; fewer and fewer people are doing open source, or they're only open sourcing a piece. I see Hyperledger, well, Hyperledger and Ethereum, as the only two places where you can really develop, and there's a lot of traction going on, which is why I put the Hyperledger stuff on the right. Hyperledger is really the only open source community that is not dependent on a single chain; Ethereum is active but tied to a single chain.

So when I look at these additional components: today Hyperledger has the house with a few components, but I see it becoming Apache, with close to 100 components on the other side, able to build an application, and not worrying about different projects competing to do the same thing, because it eventually coalesces.

The other thing that's missing right now in the blockchain community is standards with reference implementations. A lot of times, early on, if you look at, say, HTTP and Java, the standard implementation was the open source project that was out there. What happened with EJB is what happened in the HTTP project: if it wasn't in Apache originally, you basically couldn't get it into HTTP 1.1 or whichever standard was floating around at the time. Properly positioned, I think some of these components could end up being the standards, and whichever standards body emerges around this, whether it's the IETF or IEEE or whoever, could use them as reference implementations. Take, say, block storage, which is a common component that everybody needs. So that's the way we envision it long term.
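As a sketch of what the chain-agnostic pluggable consensus interface argued for above might look like, here is a minimal Rust trait with a trivial devmode-style engine behind it. All names and shapes are hypothetical; this is not the existing Sawtooth, Fabric, or Besu consensus API.

```rust
/// Hypothetical chain-agnostic consensus interface: a host chain programs
/// against this trait, so a well-tested engine can be reused across chains.
pub trait ConsensusEngine {
    /// Called when a candidate block arrives from a peer or local publisher.
    /// Returns true if the engine decides the block may be committed.
    fn evaluate(&mut self, block_id: &[u8]) -> bool;

    /// Human-readable engine name, e.g. for logging and configuration.
    fn name(&self) -> &str;
}

/// Trivial "devmode"-style engine that approves everything it sees.
pub struct AlwaysCommit;

impl ConsensusEngine for AlwaysCommit {
    fn evaluate(&mut self, _block_id: &[u8]) -> bool {
        true
    }
    fn name(&self) -> &str {
        "always-commit"
    }
}

/// Validation code depends only on the trait object, not a concrete
/// engine, so engines can be swapped without touching chain code.
pub fn commit_if_approved(engine: &mut dyn ConsensusEngine, block_id: &[u8]) -> bool {
    engine.evaluate(block_id)
}
```

The point of the talk is that if the interface (the trait) were the agreed standard, only the interface would need revalidation per chain, while the engine implementation, with all its corner-case testing, would be shared.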
And this is just kind of our mantra, just from where we saw it going in 2020. Like I said, standardization should be using Hyperledger open source as reference implementations, with more specialization in the components. As we talked about, general purpose blockchains kind of fade away because their parts become interoperable, private chains abound, but the blockchain computational capabilities kind of fade into the overall fabric of the enterprise workflow. And public chains become validation, not full storage or computation, and storage of data goes off chain, because you've got to lower that cost for enterprises. I think if you've been dealing in the enterprise and you try to have multiple copies per node, that becomes so expensive that it slows adoption. Anyway, our thought was: take these components, create new apps to change how the work is done by workload. And that's really where the long term impact is going to be made. And just to close out, because I've been at IBM multiple times: IBM itself, if you go to the consulting group, they look at what blockchain they want to use depending on the workload and what they're trying to accomplish. So the consulting group within IBM has actually built out at least seven different chains, from the people I know there. And they haven't, in fact, focused on a single chain; they've focused on multiple chains. And they're on multiple boards of these blockchain companies, and that's a result of needing to have something different for different workloads. Wouldn't it be better if it was assembled from components that then come together to build out the final application? So we want to thank you for your time on this. Hopefully this was something that can provoke a little conversation.

Yeah, absolutely. Thank you, James and Bill. I mean, this is very interesting. Let's go through a couple of rounds.
First, I would like to ask if anybody has any, like, clarification-question kind of things for James and Bill. We could do that first, and then we can do another round where, you know, if people have reactions, we can start a discussion. So first, any questions?

Sure, I had a question on the block storage. Would you envision that each participant, if you had the abstraction layer, each participant hosting block storage could use a different type of block storage underneath?

Correct. Yeah, I'm not sure; that would be a policy decision, right? Theoretically each participant could use a different type of block storage, and as long as you have hashes and things like that in the blocks, it shouldn't matter. I wasn't thinking about it from this perspective; I was thinking that, you know, across your system you'd probably use the same thing. What you're saying is absolutely also possible. Like, for example, if you had part of your chain on AWS and part of it, you know, in a physical data center, you know, maybe in some crazy implementation you might put blocks in S3. I mean, that started out as a joke, but we were talking about it at the Global Forum, and it seems like some people thought that was actually a pretty good idea. But yeah, I could see you using a different thing. But the original thought was, you know, maybe make it more like, you can envision sort of a distributed hash table type of implementation on top of your blockchain, where the node that ends up being authoritative for storing a certain block, or that set of nodes, might be chosen by a hash, a lot like Cassandra, if anybody is familiar with that. That was sort of the original inspiration for this idea, but I think making it flexible would be a very interesting thing to do.

Thank you. Any other questions?
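The Cassandra-style placement idea described above, picking which nodes are authoritative for a given block by hashing, could be sketched like this. All names here are illustrative, and this simple mod-the-ring scheme is only a stand-in for a full consistent-hashing design.

```python
import hashlib
from typing import List

def storage_nodes_for_block(block_id: str, nodes: List[str],
                            replicas: int = 2) -> List[str]:
    """Pick the nodes authoritative for storing a given block by hashing its id."""
    digest = int(hashlib.sha256(block_id.encode()).hexdigest(), 16)
    start = digest % len(nodes)
    # Take `replicas` consecutive nodes on the ring, so each block gets a
    # deterministic, roughly evenly spread set of owners.
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]
```

Because placement is a pure function of the block id and the node list, any participant can compute where a block lives without consulting a central index, which is the property that makes the storage backend pluggable per participant.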
Okay, if there are no questions, then I'm inviting TSC members to react: any comments or reactions to either a specific point that was presented, or the general gist of it?

Well, I think, you know, he touches on the point that we sort of really need to decide, which is: do we go more component-based? You know, what's our long term strategy? I think it's a good view of what life would be like if we went to, you know, more componentized, if that's the right word. You know, going forward, do we do that? Do we make sure when we break components out they're easy for everyone to integrate? So I think we need to go through and probably, you know, define some standard interfaces, things like that, for different levels. So, you know, a lot of work there, but it could conceivably be well worth it for the long term.

We're envisioning sort of a process like that, where the interfaces are standardized, and of course you have to make sure that things are going to work across programming languages and things like that. There are different languages and different sorts of platforms in place across the ecosystem here, but yes, it would be difficult, but maybe well worth it in the long term.

Yeah, I mean, one point I do have to say is that, you know, you presented the Apache Software Foundation as this ideal place of components you can just put together. The reality is it can be quite challenging to do that, just because the sheer number of projects makes it hard to know what's there, what it is for, and what are the pieces that you can actually use together, and in the right way. So that's, we have heard, I have heard, you know, people complaining about the fact that in Hyperledger we already have too many pieces, and it's hard for people to come in and figure out what to use what for. And so, you know, there's a downside to this.
I just want to point out it's not purely ideal either; it's not just, you know, better in all aspects. Some people would love to come to Hyperledger and find one path forward, to just say: use this for this and do that. But, you know, this is not to take anything from the proposal, which I do think has some value and is definitely thought provoking. The other part that I wanted to comment on was the standards aspect. So, I mean, standard is a loaded word, right? We can have some kind of common APIs between, and I'm literally avoiding the word standard, you know, within Hyperledger, different projects. Like you were mentioning for consensus, for instance, we could say, okay, is there an API that we can all use, that would be shared by all the projects, that would make it easier for people to use different components in different ways? Talking about standards themselves, you know, we have actually said until now that Hyperledger was not going to get into standards. And so that would be a big change, if we wanted to call it that, so, you know, just that.

I would not recommend getting into standards whatsoever, but a lot of the standards bodies sometimes will look for a reference implementation. And if you've got something that's pretty close or is working, that usually takes precedence over something that is theoretical, at least in my experience over the last couple of decades. And that's generally on broader standards, but I'm looking for more of a reference implementation, something that takes into account the emerging standards and implements them properly, as opposed to proprietarily.

So, Arnaud, I wanted to riff on your comment about Apache and Hyperledger being difficult to understand, which projects should be used and those sorts of things. When I was out talking with people, there was also this other perception that Hyperledger is a base platform that all of the other projects build on top of.
Which obviously, you know, really, I think, ties into kind of some of the conversation here, right, is having those common components which every single other ledger is built using, if you will. So there's the two sides of the confusion that exists within the Hyperledger space: one being, I don't know which project to use, and the other being, they're all built off of some common components.

Yes, indeed. Thank you, Tracy.

Can I say something? This is Angelo. So first of all, I want to say that I really appreciated the presentation, because feedback from industries that want to use these components is very well appreciated. And I also am a believer that we have to go in this direction of having this Lego approach, even though I'm a security guy, and I must say that when you design distributed algorithms, you have all these wheels running in multiple places and they have to interact closely, so it is very difficult. It's very difficult to get this Lego approach. But something that I got during this presentation, when you stressed the pluggability of the consensus algorithm: I was thinking it makes sense to have a definition of ordering as a service. So you might want to externalize the ordering service completely, and then maybe even potentially buy it from a third party. And there, if you go to such a situation, the API might be very, very simple, because you need just an API to broadcast messages and an API to fetch blocks or transactions, whatever it is, from the ordering service. I think we might go in this direction for other things too, where very simple APIs are needed and you don't have to think too much about the underlying technology that you're using. Thanks much. Very interesting presentation.

All right. Thank you, Angelo. Anyone else? Daniel has his hand up. Go ahead, Dan.

So one thing to keep in mind is not all blockchains have the exact same execution model.
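Angelo's two-call ordering service, just an API to broadcast messages and an API to fetch the ordered results, could be sketched as a minimal in-memory stand-in like this. The class and method names are hypothetical; a real externalized service would sit behind a network interface.

```python
from typing import List

class OrderingService:
    """In-memory stand-in for an externalized ordering service."""

    def __init__(self) -> None:
        self._log: List[bytes] = []  # the totally ordered transaction log

    def broadcast(self, tx: bytes) -> None:
        """Submit a transaction to be ordered."""
        self._log.append(tx)

    def fetch(self, since: int = 0) -> List[bytes]:
        """Fetch ordered transactions starting at offset `since`."""
        return self._log[since:]
```

The appeal of such a narrow surface is exactly what Angelo describes: a consumer never needs to know whether the ordering behind `broadcast` and `fetch` is Raft, PBFT, or something bought from a third party.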
One example being the ordering in Ethereum: it is inextricably tied to the execution; you're not ordered until you're executed in a block. So I think some of these services are great, but they need to have some flexibility, because different blockchains have different assumptions about the order of operations.

All right. Thank you.

Well, may I add, sorry, may I add on this, because that's very interesting? Go ahead. On this point, because I think it was also along this line, this point about the public chains that are mostly used for verification. It's very expensive to have the main chain computing something. For enterprise applications this is even worse, because if you ask the entire network to compute something, you replicate the computation on each node, and this is just a waste of resources that you don't want in an enterprise application. So it seems to me that the approach of having ordering and execution together, for enterprise applications, is not really flying. But that might be just my personal taste.

All right. Then, you know, the other thing I wanted to touch on is, you know, practically speaking, what would it take to, you know, adopt an approach like this? Obviously, you know, coming up with those common slash standard APIs requires, you know, people from the different projects to buy into it, right? And get together, sit down on a case by case basis. For each API, you would have to have some kind of task force where people get together and say, okay, how do we do, like, a consensus API? What would it take? And have the projects commit to say, yeah, we're going to work on this together and we will then implement it, right? So what I'm touching on is, you know, I feel like we can all discuss this, but there will still be the challenge that, at the end of the day, the TSC cannot force projects to do this. We could, if we agree this is a good direction.
We could try to identify different areas where integrations or those common APIs could come in and, you know, advertise them to the projects, say, hey, what do you guys think? But we don't have the power of forcing the projects to do it.

No, we don't want to try to force anything. I know that when we've looked at coming up with common consensus in the past, we found that the tendrils of the APIs end up going deeper into the stacks than some people might assume. So it can be pretty difficult to actually have sort of a firewall at that API, such that you could just directly consume a PBFT or a PoET or something else in one app, one blockchain versus another. But, you know, it might be interesting in thinking about, you know, what is it that we can do. If we can show a place on the greenhouse where there's an empty box for some sort of common consensus approach, that might generate some more interest from the community. If we had maybe some sort of lab contest to say, here's a fork of Fabric, here's a fork of Besu, and we've modified how consensus is consumed within them, and here's sort of a proof of concept about how you might go about doing that. Something like that could be interesting too.

All right, thank you. And thanks, Dan, for bringing James and Bill. I think this was definitely worth our time. We're almost out of time; does anybody have any other comments they want to make right now? Anyone?

I'll just say, otherwise, because I think one of my main skills is finding people smarter than me to do the talking.

All right, sounds good. So, you know, I have a hard stop, so I'm interested in closing soon. And I think, you know, this is worth thinking about. So I'm happy to leave it at this for now, and then we can follow up next week and see, you know, if there's any further reactions or thoughts that people have, and, practically speaking, what we could do next that would be a concrete step we can take as the TSC.
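To make the common consensus API discussion above concrete, here is one hedged sketch of what such a firewalled interface might look like. As noted in the discussion, real engines like PBFT or PoET reach deeper into the stack than an interface this narrow suggests; every name here is hypothetical and the trivial engine is only a proof-of-concept shape, not a real algorithm.

```python
from abc import ABC, abstractmethod
from typing import Callable, Optional

class ConsensusEngine(ABC):
    """Hypothetical narrow interface a blockchain could code against."""

    @abstractmethod
    def on_commit(self, handler: Callable[[bytes], None]) -> None:
        """Register a callback invoked when a block is finalized."""

    @abstractmethod
    def propose(self, block: bytes) -> None:
        """Offer a candidate block to the engine."""

class SoloConsensus(ConsensusEngine):
    """Trivial single-node engine: finalizes every proposal immediately."""

    def __init__(self) -> None:
        self._handler: Optional[Callable[[bytes], None]] = None

    def on_commit(self, handler: Callable[[bytes], None]) -> None:
        self._handler = handler

    def propose(self, block: bytes) -> None:
        # A real engine would run its protocol here before committing.
        if self._handler is not None:
            self._handler(block)
```

A lab experiment along the lines suggested, forks of two platforms consuming engines only through such an interface, would quickly reveal where the tendrils of the real APIs break through this boundary.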
So with that said, I'm going to close the call.