Okay. All right, so the recording has started. Welcome to a very regular edition of the Hyperledger Technical Steering Committee. Everybody is welcome at this meeting and our other working group meetings, and likewise welcome to contribute code. And if anybody's uncertain how to act in any of these settings, you can reference our code of conduct, which is available on the Hyperledger website. Today we've got, I think, a relatively light schedule. We've got an update from Indy first off. And then after that, we've got a continuation of the discussion on the supply chain project proposal. And then a couple updates that I think Tracy has here on some scheduling things and event reminders. Tracy, if you wanna hit those?

Okay, sure. So we have some upcoming meetings that we're looking to cancel. Just wanted to bring that up and make sure that everybody was okay with canceling the November 22nd meeting, which is the Thanksgiving holiday in the US, as well as the December 20th and 27th meetings for the end-of-year holidays. So just wanted to check in and see if there were any objections to that before we actually cancel them from the calendar.

I'm fine with those. Do we wanna cancel the one in December when the Global Forum is on as well, or do we wanna hold the meeting? Maybe we can all get in a room and hold that one. Good point. We probably don't have any space on the schedule for the meeting. Okay, I can add that to the list as well. It's at 4 p.m. local time. That would be the 13th. If I have my math correct, then December 13th we would also cancel. Well, I guess it depends what it's up against. I mean, 'cause it is later in the day there. My distinguished colleague from France said it's 4 p.m. That's right. All right, well, why don't we take it offline? He's Belgian. Oh, I'm French. I'm in Belgium, but I'm French.
Why don't we take it offline, and then see if there's any wiggle room in the agenda. If TSC members can give maybe some indication over chat of whether they're planning on being in Basel, and if we've got a quorum there and we've got some time in the schedule, it'd be good to meet face to face. And if not, we'll just pop that one off the calendar as well. Sounds good.

And then just the event reminders that we've had on the agenda: the APAC hackfest coming up the week of March 4th, we're still working on finalizing those details, and then obviously the Hyperledger Global Forum December 12th through the 15th in Basel.

All right, well, I think with that, we can jump into the Indy update. I will drop the link into Rocket.Chat. And for those unfamiliar, we do usually have a little background chat going in the TSC channel of chat.hyperledger.org. Okay, and could somebody say again the presenter's name for Indy? It's Sam Curren with the Sovrin Foundation.

Hey, Sam. You are welcome to just have people follow along with the link themselves, or if you want to present, Tracy can release control and let you present. It's up to you. Yeah, I can share. I'm not going to do a dramatic reading of the entire thing, but I want to pull out some highlights. Feel free to use different voices, though. I am no Robin Williams, that's for sure.

So the full update is here and the link's been posted, but I wanted to call out a couple things. We're very happy with our progress, not with everything in particular, but with our overall progress during this period. We have the latest Indy Node release rolling out this week, and that's going out now. The BC government is expected to go immediately live with their project, so we're pretty excited about that. LibVCX has been contributed by Evernym. This is a verifiable credential library to make it a lot easier to build that on top of the Indy architecture.
And then significant work is going into standardizing the agent communication protocol, which will help with agent compatibility and all the goals that we have there. So the other thing that I really want to call out is the, well, I'll get there in a second. So yes, anyway, we've had really good involvement recently. Our code base is up to 15,000 commits with 131 unique contributors. We were well represented at TPAC and the Hyperledger HackFest and IIW and other events that we've been to recently. Lots of interest, and good things being received there.

A couple of really good things. The BC government, in their VON project, the Verifiable Organizations Network, has contributed code for a new sub-project under Hyperledger Indy that we call Hyperledger Indy Catalyst. And this is a community holder for credentials, which helps solve the problem in the supply and demand equation of building supply with otherwise public records anyway. So that's a really interesting project and we're excited about the future of that. The code base being contributed by BCgov is operational and will continue to be tuned and generalized to make it more broadly applicable. This is really useful particularly in government scenarios, but in any situation where you have lots of credentials that you wanna make available to a wide group, without having to onboard each of those groups and each of the suppliers of those individually. So a very cool project there, and it's been really good to work with them.

The other thing that we've been working on is contributing and organizing the work around the Indy crypto library. It's been named Ursa, and we're excited about that and the adoption interest. One of our tasks coming up, as the work on that library progresses, is to convert all of our projects to use that new shared library. And that's one of the things going on.
So, issues that we've been addressing and the progress there: there's been a lot of work around our HIPEs, the Hyperledger Indy Project Enhancement documents, and the process there. We're still refining that process to try and figure out ways to make it flow a little easier, but it's been sort of a central point for lots of discussions in the community, which has been great. And then the coordination calls that we've had between members of the different organizations that are contributing have been fantastic. The Indy agents call is particularly active, as well as the Indy overlays call dealing with credential decoration. So really good stuff there. Working on a shared roadmap and bringing members of various organizations into a unified sprint team will continue to help that.

We've made a ton of progress in testing Indy Node for scalability and performance. There are new test tools there that have really let us test this at global scale, and we feel confident in the scalability that we've been able to demonstrate, which is really good. There's still more work to do there, including some things that we've identified that we can improve, and so that has spawned a "ledger 2.0" discussion to sort of round out the scope of the next iteration of that work.

Next is documentation. We haven't made as much progress here as we'd hoped, and there's a couple of complicated reasons why, but we're not giving up on that one. In particular, we're focusing on creating both getting started guides and documentation that target various audiences. We have people that are coming into the projects to see what it does and would like a demo. We have people that want to get deep into the development of different components of Indy. Producing the right getting started guides and documentation for those various audiences is the current focus of what we're doing there.
We have a proof of concept based on Read the Docs to help organize and make it a little bit easier to publish that documentation. On the learning curve, we've made a lot of progress here, but it's not really ready for full publishing yet; we should see significant improvement in lowering the onboarding barrier very soon.

Lots of work in the agent community. The thing to draw out here: there are lots of conversations and things going on, which is very good. We have the beginnings of an agent test suite, which will allow someone to certify their agent against this Indy agent test suite to verify that the required common levels of functionality are present and working properly. And we're gonna leverage that going forward as we reach a resolution on the protocol discussions and things that we're doing.

We haven't made a ton of progress on measuring the size of the community, except for the aforementioned individual contributor counts. We need to begin gathering better analytics and understanding the involvement of the community in different aspects there. So there's further work that needs to be done there.

As far as our build issues, we've had some challenges synchronizing versions in the various builds, mostly because each of the components was building its own copies of the dependent libraries. We're migrating to a plan where each of the builds will use the most recent published library version from the other various projects, which will help synchronize that. It won't be so dependent on the precise build time, but will be locked to those released library versions. And we've had a little bit of struggle there. There's still some of that work that we'd prefer not be done directly by Evernym, because of some of the difficulties there, and we're looking to improve that and move to a different build and continuous integration infrastructure so that we can be more consistent in what we're doing.
So we've got some details on releases here, as well as sort of overall activity, and then current plans. There's lots of work around agents. That's a lot less mature than the Indy Node work is, so there's a lot of stuff that you'll hear, but mostly it's because we're trying to figure some of that initial stuff out, and Indy Node has reached a higher level of stability, of course, because of that. The overall activity in our report doesn't really represent the amount of effort that's gone into each one; Indy Node has received an incredible amount of effort, but there's been lots of progress in the Indy agent world, which has been really encouraging, actually. I wanna call out that we have a Linux Foundation intern, Kuzma, who has made significant contributions to our ongoing work with reference agents. And yeah, we have some more details in here, including links to some of the work that BCGov has been doing, and Brigham Young University as well, and their ongoing work in the community. But that's my summary. If anyone has any questions, I will do my best to provide a fair and intelligent answer.

Hey, thanks for that update, Sam. I think you guys might win the award for most detailed quarterly reports. I appreciate having all that written up in there. I should mention that the creation of this report was not all my work. I was just voted to present. So there are others inside the Sovrin Foundation that put significant work towards the report.

Great. I wanted to comment as well that I thought it was very thorough, and you've set the bar for other projects now, which I guess could be a good thing or a bad thing. I did have a question. I chair the Performance and Scale Working Group, so I know personally I would be interested in learning more about the performance side of what you were doing and the scalability tests. I don't know if some of that ties back into the testnet discussions we've had in the past as well.
And so rather than dive into it here, I didn't know if you want to come to, or have someone from the team come to, a Performance and Scale Working Group meeting. Those are Tuesdays at 9 a.m. Eastern time. I don't know if that works for anybody, or, I mean, if people want to get into it here we can, but I'll leave that up to Dan to decide.

This is, I'm Richard Esplin. I'm a product manager at Evernym, and it was my team that was doing the performance and scalability work. And we can have somebody join that. We wrote a number of load testing scripts, and it took a while to get scripts that could drive the system as hard as we wanted, and then we've tuned that. We post our sprint demos on YouTube every two weeks, and the last four sprints have had some section where we've reviewed the results and how we got there. So if you're interested, that's a good place, and I can share that information with you if you want, Mark. Thanks, that would be great. Thanks, Richard. Tuesday at 9 a.m. Eastern. I can have a representative attend the next one if you'd like. Yeah, I think as we're trying to define workloads and things, this would be great, to learn from your experiences.

And that leads me to a question. You mentioned sprints just now, and a little earlier you talked about having separate teams and trying to get them into the same sprint. I wonder if you could say a little bit more on that. Just for a little more context here, I think this cuts across different projects: I've seen projects try to maintain a single sprint across all contributors, and also just have completely independent sprints from separate teams all contributing sort of asynchronously.

That's a challenge for all the obvious reasons, but it's been difficult in the community to sort of line up work that is dependent on other things.
So when we're developing a piece and someone's agreed to take on a chunk of work, misalignment of schedules can sometimes delay the work that depends on that by more than just a couple of days; sometimes it's weeks before one sprint finishes and the next one begins. And it draws out the development process. And so that's made it hard. And I doubt we're gonna be able to get to a perfect level of sprint mastery here. But the goal, with some better community planning and goals, is to be able to make the dependencies a little bit more obvious, and that way teams might be able to make minor adjustments, if they can, in order to fit sort of an overall sprint schedule. We know that we're not gonna be able to perfectly align with all the contributors, but we're hoping that better visibility into the overall plan, so that people understand what types of work depend on the pieces they're working on, will help coordinate that a little bit better. Mostly it's a speed issue, and helping to very clearly know where we are without having a whole lot of unknowns up in the air as we're trying to make plans.

Many of the maintainers for the SDK and Indy Node are on the Evernym team at the moment, and we're trying to broaden that group so that there's more variability in the cadence, but the challenge is that then we've gotta own that code and fix all the bugs to support it. So we're trying to sort out the right way to do that. We've tried various models. One is to invite people who we know are implementing something specific into a sprint with us. That happens fairly rarely, because other organizations aren't dedicating resources over the long term quite the same way. We're hoping to change that.
But the other approach we've had is accepting pull requests, where if it's easy we just work it into the current sprint, but if it's a large pull request that requires a lot of review, we need to get back to them and say we're gonna have to schedule this into the next sprint, and that's been something we've struggled with. The Indy agent repository, where most of that new development is happening, a lot of the new emerging standards, actually has maintainers in both British Columbia and at Brigham Young University. And it's been fun to see their collaboration as they've tried to coordinate across organizations on getting the work done. And they've done some great work, but they're both sprinting separately. One's got a bigger team than the other, and so they're running at different cadences. Evernym has discussed moving to a flow model similar to Kanban, in order to not be so rigid on sprinting and be more responsive to the community. That's something we're experimenting with on some of our smaller teams, and if it works out well, we'll probably move that to our Indy team in order to make that easier. Like Sam said, to reduce the time it takes to be responsive, so it's not on strict sprint boundaries.

Okay, thanks. That's interesting for all of us operating on different projects with broad and separate teams. Yeah, and if you see things that are working, we'd love to hear the best practices. Every open source project seems to struggle with this, based on where they are in maturity. I know the Sovrin Foundation has looked at encouraging organizations to sponsor developers, and if we can get enough organizations to do that, then maybe we can pull a team in from across organizations to have people working on a common goal.
But with the scratch-your-own-itch approach that most open source communities follow, it can be hard, because everybody's got a different itch at a different time and their own priorities, and in trying to get those to interlock there's a lot of inefficiency.

Okay, any other questions for the Indy team? Just a request from the identity working group. Again, similar to what Mark was talking about, we need closer involvement. I know that you guys are heads down building, but we would like more input into the document, especially since Indy is the premier identity management and identity handling system in Hyperledger; that would be a great thing to have. Calls are every other Wednesday at 12 Eastern, starting next week. Perfect, we will work to get a representative there. In the identity working group, we have some liaison organizations, like DIF, the Decentralized Identity Foundation, that can also participate in some of these things as well. So one of the things that we'd like to do in the future is draw them more into membership in the Hyperledger project, so that we can coordinate efforts and be able to share stuff there as well.

Okay, great. Well, thanks for the update, Sam, and others on the Indy team, Richard. Our next item of business here is taking another look at the updated supply chain proposal, and if I can flip through all of the too many windows that I have open, I can drop that link into chat as well. It's also in the agenda. So I guess I'll kind of get the discussion rolling on that. We had feedback two meetings ago on the initial draft of the proposal that largely surrounded scope, looking for more detail, and then discussion too on where things like application boundaries come in versus platform or framework or SDK or all of the other many words that could be used to describe some of the software.
So there's a significant amount of new content in the doc that was posted at the beginning of this week, and we had some more follow-up questions and comments from Mic, Hart, Binh, and Baohua, and I think those were all at least attempted to be addressed before the meeting. But with that as a little setup, let's just jump into additional verbal feedback.

I guess the main question I had would be, last time we discussed this, I think you've made it clear in the document that this is not an application, but there was a question of whether we go to the board with this and see if this is something they wanted, or am I confused? No, you are not confused. So there is more detail in the document that might not have been apparent from the first reading, and I think it helps clarify some of that. There's a couple of things that will be going on with the board. One is I expect some communication from the leadership before the next board meeting, clarifying things that might be germane to this proposal. And then at the next board meeting, this is an agenda topic, not this project specifically, but the charter, so that we have maybe a clearer way of understanding that for future proposals. And that's about a couple weeks out. Sounds great, thanks.

So I'll go back and just ask my question from last time as well, and this has sort of been the focus of most of my comments on the proposal. Can you help me understand where this project ends and where the application begins? When you think about the components, there's lots of nice words like reusable data types and other things like that. How do you see those being constructed in a way that doesn't bleed them into what we consider application space?
Yeah, and the other proposal contributors are free to chime in here, but the way that I think about it is that there are going to be example use cases that inform the design, as we do sort of the traditional analysis of what data components are common across these use cases, and we look at the applicable standards and so forth that are gonna define what those data structures look like, and formats and types and all that. When we think about what an application means in blockchain, well, that gets to be a little bit of a philosophical question, but one way you could look at it is: what's the user surface? So when you've got somebody who's actually trying to transact with the system, they might have a client CLI, they might have a web app that they're interfacing with the system through, they might have other backend systems within their company that want to integrate with this. And so those things aren't the focus of this project. This project is mostly to help facilitate making those things easier. So if somebody wants to be able to compose a few operations in a supply chain, that might, say, divide up some resource and then re-aggregate that and other components into some new product, those kinds of operations should be clear and straightforward using this set of libraries, this set of data types.

So, I mean, that still sounds very generic, right? Maybe a better way to approach this is: Sawtooth has kind of a sample app that's the track and trace, with a couple of things related to that. Which of the modules out of that do you expect will be part of the platform, and which of the modules would you consider sort of sample or demonstration capabilities for this project? I think one concrete example would be the interface for track and trace, the web app of it.
We did versions of that that were focused on fisheries, and then a more generic one where you could do, like, airplane part tracking, and one of the contributors made the observation that there really wasn't a need to recode those interfaces each time if you had another abstraction. So one of the links that you'll see inside this proposal is to the client, let's see if I can find the right words for it, it's the universal client RFC. The RFC is where that universal client is defined, but that would be an example of something that's inside this project: this idea of having something with which you could easily generate more specific user interfaces, but those specific user interfaces wouldn't be the focus of this project. We might include some instantiations as examples, but those wouldn't be the core goal. More so the goal would be making it easy to generate those kinds of things.

And those changes were put in just late yesterday, right? 'Cause I haven't, I mean, I looked and saw some of the new links, but I haven't had a chance to really review it. The links went in on Monday, yeah. Okay. Again, that seemed to be focused on what's not in scope. Are you expecting to describe and prescribe data types for things, widgets, for components, for composability, for location, for ownership? I mean, again, I'm just trying to figure out, if AcmeCorp comes in and uses this, what is it that you're providing and what are they building on top of it? Clearly the sort of, you know, logoed web front end is out of scope. Are they giving you a specification for an asset, for representation of an asset, or are you giving it to them? I think you could say that we would give one possible implementation that may conform to existing specifications that they already use in their legacy or current-state environments.
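The universal-client idea described above, deriving use-case-specific interfaces from a shared abstraction rather than recoding a track-and-trace UI for fisheries and again for airplane parts, might be sketched in miniature like this. Everything below (the model shapes, field names, and the text renderer) is invented for illustration; the actual RFC defines its own formats.

```python
# Hypothetical sketch of a "universal client": a generic renderer driven
# by a data-model description, so each use case supplies only a model,
# not a hand-coded interface. Model shapes and names are invented.

FISH_MODEL = {"name": "fish", "fields": [("species", "text"), ("weight_kg", "number")]}
PART_MODEL = {"name": "airplane-part", "fields": [("serial", "text"), ("installed", "checkbox")]}

def render_form(model: dict) -> str:
    """Produce a plain-text form layout from a model description."""
    lines = [f"== {model['name']} =="]
    for field_name, widget in model["fields"]:
        lines.append(f"{field_name}: [{widget}]")
    return "\n".join(lines)

# The same renderer serves both use cases; only the model differs.
print(render_form(FISH_MODEL))
print(render_form(PART_MODEL))
```

A real universal client would generate web components rather than text, but the design point is the same: the per-use-case artifact shrinks to a declarative model.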
Asset definition is something that their PFS systems, if it's like a fixed asset or physical asset, or other types of asset or manufacturing or logistics systems, would be managing today. And there are industry standards for asset frameworks, just as an example. Implementing those as protobufs, in line with primitive definitions that have also been established as part of this, so, like, this is how we all agree that we will handle Booleans, for example, would be a potential starting place, or a wholly reusable component, in whatever application they are ultimately building. So defining a data model for an asset, so that everyone who wants to build something that tracks assets in a supply chain doesn't have to rebuild that from scratch, especially if it's grounded in an existing consortium or industry specification, is the intent here. There are things that I don't know we're looking at immediately, but that we're certainly aware of: there's a new Open Data Initiative between Microsoft, SAP, and I think Adobe, and they're gonna define some core reusable shared data models that are just gonna be open. And so implementing those open data models for things like a customer, just as an example, might be something that could be contributed to a supply chain framework, such that people who wanna deal with customer entities, customer things, customer nouns on a supply chain could just use it, rather than everyone having to re-implement that. That doesn't make it an application. What that makes it is an includable component in whatever app you're building. There are plenty of other industry specs, and the right way to manage the library of conformant nouns will be something that the project will have to consider.
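As a rough illustration of the reusable-noun idea: in the proposal these would likely be protobuf messages grounded in an industry specification, but the shape can be sketched as a plain data type. All field names and the transfer operation below are invented, not taken from the proposal.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical sketch of a shared "asset" noun that supply-chain apps
# could include instead of redefining from scratch. In practice this
# would be a protobuf message; the fields here are illustrative only.

@dataclass
class Asset:
    asset_id: str                 # unique identifier for the asset
    asset_type: str               # e.g. "fish", "airplane-part"
    owner: str                    # identifier/key of the current owner
    properties: Dict[str, str] = field(default_factory=dict)  # spec-defined attributes

    def transfer(self, new_owner: str) -> "Asset":
        """Return a copy of the asset with ownership reassigned."""
        return Asset(self.asset_id, self.asset_type, new_owner, dict(self.properties))

# An application reuses the shared model rather than inventing its own:
part = Asset("A-100", "airplane-part", "factory-key", {"serial": "SN-42"})
shipped = part.transfer("carrier-key")
```

The point is the layering: the noun and its operations live in the shared library, while the application decides when and why a transfer happens.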
So I think another general area, there were a lot of good specifics in that, but if we think about what other projects in Hyperledger might wanna be considering: some of the feedback that I heard ahead of this proposal was that when developers come to Hyperledger, there's not a whole lot of individual libraries and reusable components; there's a lot of scaffolding that each of them has to rebuild. I don't think it's too much of an overstatement to say that a lot of the experience of coming to one of the platforms is: all right, there's a very primitive key-value pair. That might be instantiated in different styles across the different platforms, but there's not a whole lot of higher-level data types and models that are already prepared for them. And so this is one set of areas where value would be provided by this kind of project: here's some scaffolding that you don't have to create when you're going after a supply chain project.

It sounds like some of this is sufficiently generic to be useful outside of supply chain, when we talk about standards around key-value pairs and generic assets, which there definitely is, I think there is a carve-out for that within the sort of abstractions that we tend to use when we're working on a blockchain. Yeah, just- That's where that remark leads me, but. Yeah, I don't know if you were reacting to what I was just saying, but what I was saying was that that very most generic key-value-pair kind of primitive is mostly what's available to developers that come to Hyperledger right now. And so this would be an effort to get more scaffolding above that. That makes sense.

So as part of the effort of integrating some of the functionality that was part of Composer into Fabric, we're actually going beyond that, right? I mean, we have this notion of transactions and assets that are being developed, but it's all based on, I mean, it's all very generic still, right?
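The contrast being drawn here, a bare key-value primitive versus higher-level scaffolding on top of it, can be shown in miniature. The dictionary below stands in for on-ledger state, and the record layout and namespacing scheme are invented for illustration; real platforms expose state through their own APIs.

```python
import json

# The bare primitive most platforms expose today: opaque bytes under a key.
state = {}  # stand-in for on-ledger state

def set_state(key: str, value: bytes) -> None:
    state[key] = value

def get_state(key: str) -> bytes:
    return state[key]

# A thin typed "scaffolding" layer a supply-chain library could provide,
# so applications deal in domain records instead of raw bytes.
# The namespace/record conventions here are hypothetical.

def put_record(namespace: str, record_id: str, record: dict) -> None:
    set_state(f"{namespace}/{record_id}", json.dumps(record, sort_keys=True).encode())

def get_record(namespace: str, record_id: str) -> dict:
    return json.loads(get_state(f"{namespace}/{record_id}"))

put_record("assets", "A-100", {"type": "fish", "owner": "boat-7"})
print(get_record("assets", "A-100")["owner"])  # -> boat-7
```

Each platform instantiates the bottom two functions differently; the value of shared scaffolding is that everything above them would not have to be rebuilt per project.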
It's at the define-a-contract, define-a-transaction level, which is still much lower level than I think what you're talking about. And so, practically speaking, can you tell me what you are going to actually produce? I mean, is it just data-format kind of things? Like, you know, what is an asset? What kind of transactions can you have on these assets? And, practically speaking, if I wanted to use this with Fabric, what do I get, really?

Yeah, so I think if you wanted to use this with Fabric, there are at least two, actually probably three, pieces that come into play. One is the contract libraries, which will be built at least initially focused on Wasm. I don't know for certain that they would never extend outside of that, but that would be sort of the narrow waist. And I understand secondhand, from people speaking with Chris at not this past hackfest but the one before, that there's interest in Fabric adopting a Wasm interpreter. That would be one place where, if that coupling is there, then you're pretty much good to go. And then the other two things sort of follow from that: there's the data models, data types, and then potentially this universal client generator.

But all of these things, I mean, even if we implemented a Wasm shim for Fabric's chain code, I doubt this would be enough to, you know, run whatever smart contract you guys are developing, based on I don't know what model of chain code or smart contract. I mean, as you know, one of the things that puzzles me in this whole thing is that it presumes that somehow we have a level of interoperability or portability from one platform to another that just doesn't exist at all today. And I don't know how you make that a reality. I think that's a very good point. And I think it's definitely relevant to be thinking about Wasm here.
I was thinking about this in the context of Burrow, and one thing that came up with the Ethereum community is that they're talking about a subset of, hello, is someone trying to get in? Yeah, I think somebody's got their phone too close to the microphone or something. Oh, right. So, yeah, Ethereum is moving over to some form of Wasm. But let's put aside for one moment that we don't have that quite yet, although they've got Sabre in Sawtooth. It's interesting here that Fabric's thinking about that. There are two areas where you can imagine putting in a hook. There's the layout within Wasm of certain base structures. There's also, if you look at the way that Wasm integrates with the browser, you get kind of callbacks, and you call at a particular address, like a special reserved address. Wasm doesn't make any assumptions about what the layout of the call for that is, but you could imagine, for example, if you had a packaging function that describes how an asset is packed, you could imagine hooking that in, and you could kind of get a collection of sort of supply chain op codes, almost, which would be kind of interesting. But I agree that some of this does seem to presuppose that we have a better level of portability between the projects. It would be nice to have that, for sure. Yeah, I think it doesn't necessarily presuppose that as much as it helps march us in that direction. These things can be kind of a chicken-and-egg problem. So I think that we could look at this project as one way to help move that dialogue forward.

Okay, but so then, you know, who would like to commit to work on this other than on Sawtooth? I'm sorry, could you repeat that? You got air-horned. Yeah, I'm sorry. We have some soccer supporters or something in the... I was going to ask, who is committing to supporting more than one platform in this particular project?
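The "supply chain op codes" notion floated above, host functions that Wasm contract code could call back into, could be caricatured like this. The registry and the pack operation are entirely hypothetical; a real integration would register such functions as Wasm host imports in whatever runtime the platform embeds.

```python
# Hypothetical sketch: a host environment exposes a small set of named
# supply-chain operations that contract code can invoke by name. This
# models the "reserved address callback" idea in plain Python; nothing
# here reflects an actual Wasm ABI.

HOST_OPS = {}

def host_op(name):
    """Decorator that registers a function in the host-op table."""
    def register(fn):
        HOST_OPS[name] = fn
        return fn
    return register

@host_op("pack")
def pack(container: dict, item: str) -> dict:
    """Record that an item was packed into a container (returns a new state)."""
    return {**container, "items": container.get("items", []) + [item]}

# A contract would reach this through the host interface; here we just
# look the op up by name, as a Wasm module would through its imports.
crate = HOST_OPS["pack"]({"id": "crate-1"}, "fish-42")
```

Each platform would wire the same op table into its own Wasm runtime, which is what would give the "op codes" portability across projects.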
Because, you know, I'll be honest with you, I give presentations about Hyperledger all the time. And I always have these slides with the umbrella stuff, right? And I talk about all the frameworks. And I say, and then we have this layer of different tools and technologies that are supposed to be platform-independent, framework-independent. And then, when people ask, you always have to say, well, yeah, this one only works with Fabric. Oh, yeah, this one only works with Fabric too. And so on. And I find this a bit annoying. And I hear you specifically, Dan, for instance, quite rightfully, I think, ask people, are you going to support Sawtooth or some other framework? And the reality is we can't force people to do what they don't want to. And I start questioning the approach we have at the TSC level, when we endorse these projects that are meant to be platform-independent when they really are not, just because, even if they could potentially be, in reality there's nobody who's going to do the work. And you could argue, well, it's a chicken-and-egg issue, but experience shows that it doesn't really work to put the cart before the horse. Unless we have people who say, yeah, I'm actually going to work on making sure this supports this other platform too, I don't think we can bet on the fact that somehow people are going to show up to do the work.

Yeah, when I think about that problem, I kind of think from layers underneath, with projects like Explorer: when somebody comes to the community, it looks like, all right, here's a tool that is meant by its actual charter and so forth to support all of the platforms, and it's represented that way. I think a different view of that is what's happened with Burrow, where Sawtooth contributors saw value in having this kind of interpreter and everything that it brings with it, and worked pretty readily to integrate it.
Fabric saw that same value in contributors there and looked to bring that in. So one way we could look at the supply chain project is that, hey, this does seem to have value, so we're gonna find resources to integrate this, or that it's just a project that's waffling around out there that's not worth our time. Yeah, but there's a big difference, right? Because Burrow in and of itself can stand, right? And similarly now you have integration with Indy and stuff like this, but it's the same. It doesn't have any dependencies, and then we can have these integrations, which I'm sure everybody thinks is a good thing to have, and I definitely want to keep encouraging this. But this is a different beast. And again, by the way, my opinion on whether this is in scope or not has not changed. I just really don't see this in scope at all. I didn't expect us, meaning Hyperledger, to get into that realm at all. But that put aside, it's a different layer, at least we can agree to that, whether it's in scope or not. It sits on top of something else; it cannot stand on its own. So it's very different from the situation described in the report. Okay, I see Vipin's hand raised. Yeah, just want to step back a little bit here. So, you know, first of all, there is this avoidance of the third rail, which is the application concept. Then there is this whole thing about abstractions, data abstractions, other kinds of abstractions, interaction abstractions, which are generic, like many people said, including Silas and Arnaud. So we have the tension between these two approaches, right?
I mean, so the point is that, by taking an example project like supply chain, you might stimulate development in all of those different areas, and also on stuff that is using those abstractions to create other primitives that may be reused elsewhere, but also building higher and higher up the stack until you have something that could be reused. I mean, concentrating purely on the supply chain stuff, for example, there are standards around bills of lading, around various documents that are present in supply chain. Those are well-established standards. Now the primitives that operate on those standards, like, for example, the atomic elements that operate on those standards, may not be as well-specified, but still, if something is going to be built layer by layer like this and released as separate libraries, then they could be very useful. As to whether it would interoperate with different DLTs, you know, I have to disagree with Arnaud there. Caliper, for example, is attempting to do just that, to interoperate with different ledgers, and they have already established some methodology for doing that. Maybe we can learn from that. So the whole point is, don't knock this as a purely application-level thing, because without use, this whole thing becomes, you know, like saying that if we only have the bottom-most layer, then people come to this and say, how can I use this? And it requires like five months of work in order to get anything useful going. So to have things built on top is always a good thing, especially if it's designed with wider usability in mind, both in terms of the number of DLTs and also in terms of the use cases. That means not just supply chain but something else, like a payment system, because some of the supply chain stuff, like trade finance use cases, could benefit from payments.
So there's a lot to be said for all that, but, you know, again, coming back to the resource issue, you have to think deep and hard about all this. But I wouldn't stand in the way of something like this getting incubated, even though people would say, oh, it doesn't map well with the charter. The charter was written in 2015 and it's now 2018, so maybe the charter itself needs to be revisited. I don't know. Okay, but that's for the board to decide, right? We're only here to implement it, not to redefine it. Oh, that's not true. We have a say in it, too. And we've already done that. Things like Explorer, I think, are another example of things that sort of cross that line, or are sitting at least right on that edge. So I don't feel badly about having the discussion here in the TSC. I do think that we're conflating a number of different issues here into one discussion, and it might be good to pull these apart a little bit. I think issue number one is, what does the charter really say and what are we supposed to do? Again, personal opinion, I think that thinking about applications that enable broader use of blockchain in this space, and Hyperledger's role in that, is an interesting discussion and one we should have. Whether or not they need to sit as projects in the same vein as Fabric and Sawtooth and Iroha and all the others, that's for the board to decide, not me. A second issue that's coming up, I think, is the very specific details of the proposal. And again, I will say, I'm not comfortable with the level of specificity in the proposal at this point. I'd have a hard time saying which parts of that proposal are specific to supply chain, and which, if I just lifted that text up and swapped in some other application domain, would continue to be valid. So I have some very specific issues about the details in the proposal that I'm still not comfortable with.
The third issue is one that we come back to over and over again, which is we like to claim cross-platform for these sorts of component operations, but it's been very difficult to actually realize that in any real sense. And as Arnaud was pointing out, I think we need to go one of two ways on that. One is we make sure that there are commitments ahead of time for any of these things that claim to be cross-platform, and make sure that the commitments are there before we start, or we just drop that requirement. And if we're going to apply that criterion going forward, we need to also apply it looking backwards, and make sure that the rest of the projects that claim, and were approved with, some assumption of cross-platform are actually demonstrating the commitments to do that. So I'd like us, if we possibly can, to pull these out into three separate things, because pushing them together, it doesn't feel like we're making much progress in getting any clarity on this. So my request is we focus on the specifics of the proposal, because a lot of those other things are separate discussions that are much broader than the proposal, and we should tee them up as part of our TSC discussions. Okay, so quick reactions to those. I think for your third point, the commitment from this project is to build everything behind the WASM interface. So anything that's implementing WASM, that's sort of the requirement for this project. It's not intended to be anything Sawtooth-specific or Fabric-specific or anything like that. It should all be shielded behind that WASM interface. So then why don't we make this proposal about a generic WASM interpreter, comparable to the Burrow EVM capability? Well, it's more than that, though, because it's also trying to address those supply chain operations. And then to that specificity, I would ask you to take a look at the additional content that was added.
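To make the "everything behind the WASM interface" commitment concrete, contract logic written this way stays a pure function over a payload and some host-provided state access, so it can be compiled to wasm32 and hosted by any ledger that embeds a WASM interpreter. The sketch below is purely illustrative, not the project's actual API: the `apply` name, the payload layout, and the state-access callback are all assumptions.

```rust
/// Hypothetical platform-neutral contract logic. The host (any ledger
/// with a WASM interpreter) passes in the raw payload and a read-only
/// view of state; the function returns the state writes to commit.
pub fn apply(
    payload: &[u8],
    state_get: impl Fn(&[u8]) -> Option<Vec<u8>>,
) -> Result<Vec<(Vec<u8>, Vec<u8>)>, String> {
    match payload.split_first() {
        // Action byte 0 = CREATE_RECORD; the rest of the payload is the id.
        Some((&0, id)) if !id.is_empty() => {
            if state_get(id).is_some() {
                return Err("record already exists".to_string());
            }
            Ok(vec![(id.to_vec(), b"created".to_vec())])
        }
        _ => Err("unknown or malformed action".to_string()),
    }
}
```

Because nothing here touches a ledger SDK directly, the same compiled module could in principle be loaded by a Sawtooth or Fabric host, which is the shape of the neutrality claim being discussed.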
There's links, and sometimes people's eyes glance over the fact that there's links, but there's actually some design documents behind this that I would think are at least as specific, if not more specific, than other proposals we've seen. So yeah, having looked at the RFC, that clarifies to me a little bit better what this would be about. I like your breakdown there in terms of the different issues. I think that the idea of it being a generic WASM interpreter would not be quite right. I think WASM standards as the structured output of the project, in terms of what tech it actually produces, is kind of in the right direction. I might suggest that we could look one level higher. There's some work on a thing called WebIDL, which is an IDL, à la protobuf IDLs, with ongoing work that can generate WASM. WASM is a fairly low-level structure, not entirely the interface you'd want for describing supply chain data types. WebIDL, like I say, has integration hooks for generated code and has some support in, I think, some interpreters, but you can have generic lists and it's just a definition language. And I also agree that we need to hold projects to account, or drop the requirement that they are really cross-platform. I think asking something like this to own future integrations into other projects is probably a bit much. I think there needs to be some organic interest from contributors and the community of those projects to do the integration. But I would like to see, as this is a somewhat trailblazing proposal and project, if the output of this was some form of IDL, or maybe fleshed out what we mean by WASM; then that would be a way of meeting in the middle. So it wouldn't be specific to Sawtooth, and it would produce some data artifacts that other projects could look to to generalize. So, for example, with WebIDL it would be possible to potentially generate EVM code from that.
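As an illustration of the IDL idea being discussed, a supply chain document could be described once in a definition language, with per-platform bindings generated from it. The WebIDL fragment (in the comment) and the Rust struct below are hypothetical examples of that shape, not artifacts of the proposal:

```rust
// Hypothetical WebIDL fragment for a bill-of-lading document:
//
//   dictionary BillOfLading {
//     required DOMString shipmentId;
//     required DOMString consignee;
//     sequence<DOMString> lineItems;
//   };
//
// A binding generator could emit a native type like this for each
// target language or VM, so the data model itself stays
// framework-independent:
#[derive(Debug, Clone, PartialEq)]
pub struct BillOfLading {
    pub shipment_id: String,
    pub consignee: String,
    pub line_items: Vec<String>,
}
```

The design point is that the IDL, not any one ledger's SDK, becomes the shared artifact other projects can generate from, whether the target is WASM, the EVM, or something else.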
It would potentially be possible to generate other things. So that seems like maybe a nice compromise. Okay, well, thanks. I think that's a good note to end on. We are out of time here. We'll try to assemble some notes out of this discussion, and we will hear from everybody next Thursday. Thanks. Thanks, everyone. Thanks, Al.