This is your burden going forward, so congratulations. In that transition, though, I will point out there is a stipulation in the charter that the TSC will migrate towards a technical-community-based TSC drawn from the project leadership after the first six months. So Chris will be the TSC chair for the first six months, and then once we get the new TSC seated, we'll want to talk about a timeframe for electing a new TSC chair to keep things going. So with that, congratulations, Chris.

As Todd mentioned, we have a quorum, so we can get started, Chris. The only agenda topic we had for today from the last discussion was to discuss the proposal. I think there was also some discussion on the list, so it's probably worth a quick overview of some of the discussions that happened prior to the public announcement. I think some were privy to those discussions and others were not, so it might catch people up. And just on announcements: we are working on the Slack and mailing list integration. I would expect sometime later this week or early next week we'll have that up and running, but we are working on it. And I think that's all the announcements that I had up front. So, Chris or Tamas, I'm not sure, you both sort of chimed in on that proposal. I'm not sure where you want to start or who wants to take the lead, but I'll turn it over to you, Chris, since you're the chair now. Chris, are you there?

Can anyone hear me? Dual mute. Okay, I was double muted; I only unmuted half of it. Yeah, so I'll have Ben and Tamas present the proposal, and then we can open it up for discussion amongst those on the call. So, Ben, Tamas, you got it?

Yeah, I can start first, Chris. I hope you guys can hear me. I can, anyway. All right, good. That's still not confirmation. So, as we know from the first face-to-face meeting, prominently we have, from the contributors, two models in our code.
So, one, of course, you all know: the UTXO model from the Bitcoin base. And then from IBM, and probably others, we have what we call the state transition model, in which we keep track of states, a transaction mutates the state from one form to another, and an execution engine manages the smart contract, allowing the application to inject code into the blockchain to drive the smart contract and the logic that's specific to the type of transactions the application wants to do. So those are the two primary models from the code bases that we learned about at the very first face-to-face meeting.

This proposal is a collaboration among a number of code contributors trying to merge the two models together, perhaps in some fashion, but at first an attempt to support both and then see what happens. Part of this proposal is also to get together for a week-long sprint, and then maybe a few weeks after that, to see how we can take this forward, and maybe we make adjustments and so on after that. What this proposal is saying is that, to get started quickly, we want to put two code bases in the Hyperledger repo. One is from DAH, which is a UTXO implementation in Java that also makes use of Blockstream's consensus code, and the other is IBM's OBC, the Open Blockchain implementation written in Go, which represents the state transition model. We want to put both models out there because certainly some members of this community are trying to play around with the concepts, do a POC, or code up some application scenario to understand the concepts. So at least it gives us a place to start. But we want to quickly define what the right code base is for us, and that's what the first sprint we want to work on together is for; it's very, very important.
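To make the contrast between the two models concrete, here is a minimal sketch in Go. All type and function names here are our own illustrative inventions, not types from either the DAH or the OBC code base: a UTXO transaction consumes prior outputs guarded by scripts, while a state transition transaction invokes chaincode that mutates a keyed world state.

```go
package main

import "fmt"

// --- UTXO model (hypothetical, simplified) ---
// A transaction consumes existing outputs and creates new ones.

type TxInput struct {
	PrevTxID     string // transaction whose output is being consumed
	OutIndex     int    // which output of that transaction
	UnlockScript string // script proving the right to spend it
}

type TxOutput struct {
	Value      int64
	LockScript string // conditions a future input must satisfy
}

type UTXOTransaction struct {
	Inputs  []TxInput
	Outputs []TxOutput
}

// --- State transition model (hypothetical, simplified) ---
// A transaction invokes chaincode that mutates a keyed world state.

type StateTransaction struct {
	ChaincodeID string
	Function    string
	Args        []string
}

type WorldState map[string]string

// Apply mutates the world state; in a real system this would run
// inside the chaincode execution environment, not in-process.
func (s WorldState) Apply(tx StateTransaction) {
	if tx.Function == "put" && len(tx.Args) == 2 {
		s[tx.Args[0]] = tx.Args[1]
	}
}

func main() {
	state := WorldState{}
	state.Apply(StateTransaction{ChaincodeID: "asset_mgmt", Function: "put",
		Args: []string{"owner:asset1", "alice"}})
	fmt.Println(state["owner:asset1"]) // alice
}
```

The point of the sketch is only the shape of the data: in the first model the ledger history itself encodes ownership through references, while in the second the ledger is a log of invocations against an explicit state.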
So the second point of this proposal is to work toward that, using OBC as the base and incorporating the UTXO model into OBC. As I presented in the architecture during the face-to-face meeting, there are a number of flexible points in the OBC framework where we would be able to add additional models quite simply, so this is an exercise of that. From the proposal, then, instead of reading the text, if you go down to the bottom of this document, there's a diagram, so I can talk through that diagram. This proposal is primarily work between Tamas and a number of developers from the OBC group. We imagine that OBC is the base code for this; then we would be able to bring in the UTXO model as part of the transaction interface, from the API point of view.

Going from right to left: on the right-hand side is the business logic, which is the application logic. It can interface with the API layer. The API layer supports the code above it so the application can interface with it using gRPC, or of course REST is still available. Or we might be able to provide an SDK for specific languages; for example, Go, Node.js, or Java may have an SDK that can be embedded in the application, and that SDK will take care of the communication with the API and the infrastructure so that the application does not have to deal with REST or gRPC directly. So that's the interface layer to the business logic.

Hey, I'm sorry, this is Murali from DTCC. Are you sharing a presentation?

Sorry, I did not share; I thought everybody had the doc, but I can share. Let's see, how do I do that here? And where is the doc?

Chris sent the doc out a couple of days ago. The doc is on the mailing list, and Tamas actually just sent the link.

Okay, I hope that you can see my screen now. And if you're not speaking, please go on mute. Okay, so let me go on then.
So that's the right side, where the application interfaces with the API. Then, continuing from the API: if it is a UTXO transaction, it's the same transaction structure that the application sends in, and the API will forward the transaction to what in OBC we call the peer, which is a layer that does a few security checks. For example, if security is enabled, it will do signature verification on the transaction to make sure that it comes from a known member, and so on. So there are a few checks before the transaction is sent to the consensus manager.

The consensus manager will, first of all, hand the transaction over to the plug-in. In our case, we have a number of implementations, mainly around Byzantine fault tolerance. The plug-in will perform the consensus based on the algorithm that is implemented. The output of that is a block of transactions, in order, to be executed by the environment, whatever that means. In the case of UTXO, that means the validating nodes will validate transaction correctness: they will execute the scripts, both the unlocking and the locking scripts, on the inputs of the transaction, and also validate the outputs of the transaction, and then deposit it into the ledger.

Now, how does it do that? As we know from the execution model in OBC, there is a thing called chaincode. Chaincode is a pluggable framework that allows us to instantiate any kind of virtual environment, and today we have one implementation, for Docker containers. So if there is a request to execute a transaction, meaning to validate a transaction, it will check to see if there is a chaincode container available for it. If so, it will pass the transaction over and say: do whatever you need to do, and what you return is whether it was successful or not.
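The pipeline just described (API in, peer checks, consensus ordering, validation, ledger append) can be sketched as a few stages in Go. This is a hedged illustration under our own naming; `verify`, `order`, and `validate` are stand-ins for the peer's signature checks, the consensus plug-in, and the script or chaincode execution respectively, not OBC's actual API.

```go
package main

import "fmt"

type Transaction struct {
	ID        string
	Payload   string
	Signature string
}

type Ledger struct{ Blocks [][]Transaction }

// verify stands in for the peer's signature / membership checks.
func verify(tx Transaction) bool { return tx.Signature != "" }

// order stands in for the consensus plug-in: its only job is to emit
// an agreed batch of transactions in execution order.
func order(txs []Transaction) []Transaction { return txs }

// validate stands in for script execution (UTXO) or chaincode invocation.
func validate(tx Transaction) bool { return tx.Payload != "" }

// process runs the stages in sequence and appends the surviving
// transactions to the ledger, returning how many were accepted.
func process(ledger *Ledger, incoming []Transaction) int {
	var checked []Transaction
	for _, tx := range incoming {
		if verify(tx) { // peer-level security checks
			checked = append(checked, tx)
		}
	}
	var accepted []Transaction
	for _, tx := range order(checked) { // consensus-ordered batch
		if validate(tx) { // execution / validation
			accepted = append(accepted, tx)
		}
	}
	ledger.Blocks = append(ledger.Blocks, accepted)
	return len(accepted)
}

func main() {
	l := &Ledger{}
	n := process(l, []Transaction{
		{ID: "t1", Payload: "pay", Signature: "sig"},
		{ID: "t2", Payload: "pay", Signature: ""}, // fails the peer check
	})
	fmt.Println(n) // 1
}
```

The design point the speaker is making is that each stage only depends on the one before it, which is what makes the consensus plug-in and the execution environment independently swappable.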
So in this case, what we're thinking about doing here: there are a couple of approaches. I'm going to describe one of them, and then during the sprint we can go into details on the other approaches. The one that I'm thinking of, perhaps the simpler one, is this. The chaincode container, currently, as I said, is a Docker container. There's a layer we call the chaincode shim on the container, and that shim interfaces with whatever language we built the chaincode in. Today we support Go, but internally we also have Java and Node.js support. So we can enhance that to plug in the DSL interpreter from the C++ code. We can plug that in, and that layer is responsible for executing the scripts coming from the transaction, namely the locking and unlocking scripts. It will validate them and return the appropriate status, true or false, success or failure. At that point, we can decide whether or not to append the transaction to the ledger, and then continue on. The other type of transaction, the normal OBC transaction, flows through exactly the same path, but instead of going to the DSL interpreter, it goes to a chaincode. It executes the same way, returns a code, and we do exactly the same thing. So that's very much the overall concept. I'm going to pause here to see: Tamas, anything else that you would like to add?

Hi, Ben, hi. Wow, that's very bad feedback.

Tamas, everyone, please go on mute. Tamas, if you can try that again.

Can you hear me now? Yes, I can hear you. Great. I'm sorry, I had a wrong setup here with my mic. Thank you very much for this. I'm sorry, we have a fire alarm. So, thank you very much for this introduction to the technical details. I'd like to take the opportunity to step back a bit and talk about the motivation for why we made this proposal and how we see these two stacks.
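The shim-level dispatch Ben describes, where the same validation path hands a transaction either to a limited script interpreter or to ordinary chaincode and gets back only success or failure, can be sketched as a small interface in Go. The names are hypothetical; in the actual proposal the interpreter would be the C++ Bitcoin-style DSL behind the chaincode shim, not an in-process Go type.

```go
package main

import "fmt"

// Executor is the contract the shim needs: run the payload, report
// only success or failure.
type Executor interface {
	Execute(payload string) bool
}

// DSLInterpreter stands in for the limited Bitcoin-style script
// interpreter plugged in behind the shim.
type DSLInterpreter struct{}

func (DSLInterpreter) Execute(payload string) bool {
	// A real interpreter would run the unlocking + locking scripts;
	// here we only simulate a trivially-true script.
	return payload == "OP_TRUE"
}

// GoChaincode stands in for a normal Go chaincode invocation.
type GoChaincode struct{}

func (GoChaincode) Execute(payload string) bool { return payload != "" }

// dispatch picks the executor by transaction type, as the shim would,
// so the rest of the validation path is identical for both models.
func dispatch(txType, payload string) bool {
	var e Executor
	if txType == "utxo" {
		e = DSLInterpreter{}
	} else {
		e = GoChaincode{}
	}
	return e.Execute(payload)
}

func main() {
	fmt.Println(dispatch("utxo", "OP_TRUE"))     // true
	fmt.Println(dispatch("chaincode", "invoke")) // true
}
```

Because both executors satisfy the same one-method interface, the decision to append to the ledger or not is made identically for UTXO and state-transition transactions, which is the core of the merge being proposed.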
First of all, these two candidates have quite different origins. Our candidate originated in experience with the Bitcoin network, which initiated the discussion about this technology. Your candidate is basically a rethink, with the benefit of insight into the possibilities this network eventually unlocked. You created something that is a very flexible framework, a fabric that is friendly to technology exploration and research, whereas our framework was tested and tried in developing and deploying applications in the financial services domain. It is a limited domain; nevertheless, it is a tested and working framework with a limited set of functionality. While working with this technology, we actually learned that its flexibility is sufficient for most of our use cases, I would say nearly all of our use cases. We also examined the technologies known under the buzzwords smart contracts and chaincode. We think they are very interesting concepts, and we would also like to explore them with a framework that is able to support them. Nevertheless, if we think of all the use cases in the financial services sector, we think that a network with the capabilities already offered by the Bitcoin network, or a Bitcoin-like network, is sufficient. Actually, the introduction of smart contracts and the like raises a few new concerns and increases the attack surface of such a stack. Therefore, we would like to enter this space cautiously, and that is why we submitted a candidate for consideration: an implementation building more on that simpler but well-tested technology stack. I also would like to clarify a bit the buzzwords we have been using in this discussion, such as UTXO, which is sometimes used to describe the entire alternate implementation that we submitted. That would come short of recognizing that it also includes an API which is very important for our business users.
The UTXO model is nothing more than a forced ordering of transactions, which we think is useful. It is a very powerful method of achieving scale, and the reason we suggested that this needs to be implemented in some kind of common stack is that we are afraid that without that concept, the scalability of the system would be endangered. We also requested an implementation of a limited script interpreter, a Bitcoin-like script interpreter, as a chaincode in this new stack, because such a small language reduces the attack surface, and we have the unique chance of using a time-tested implementation of those cryptographic primitives that we can basically inherit from the Bitcoin project, or Blockstream's technology, which was also part of our candidate. So these are our motivations: to get the best of a forward-looking architecture and a time- and business-tested framework. I hope that this introduction to the proposal gives you a bit more context, and thank you very much again for the detailed discussion by Ben.

Thank you, Tamas. At this point, I think we'll just open it up to the community. Any questions, comments, recommendations, advice at this point?

And this is Chris. I've also just copied the document into a Google Doc. I should have done that earlier; I didn't realize that the archive strips off attachments. So, to anybody who was looking in the archives, I apologize; I didn't notice that before. Now we have a link to the doc. It can be commented on, and if people want access to be able to suggest edits and so forth, please do so in comments and that way we'll get it. I mean, we may add some additional edits, but we'll have to at least start with this. That should have just hit the list a couple of minutes ago. So, would anyone on the call like to ask questions?

Hi, it's Richard Brown, R3 here. So, first of all, thank you for pulling this together and taking us through it. I should apologize:
I've not read the document in detail yet, so it may be that you answer my question in it, in which case just tell me to read the manual. My question, really, is one I've raised a couple of times now but still don't really have clarity on: what is the compelling argument for why we should be trying to bring these two code bases together? In my simplistic mind, it's clear that they're designed to solve, I think, different problems and are optimized for different scenarios, and it's entirely likely that both architectures are appropriate for the use cases they target. I'm struggling to come up with an argument for why they need to be brought together. It strikes me that one argument might be if there were a use case that required both types of logic in the same platform, or consensus between them. But it's not obvious to me that anybody has identified such a use case, and maybe you have, in which case, great. But I guess what I'm trying to ask is: just because we can unify two different code bases doesn't mean that we should. So I'm just wondering, is there a succinct explanation for why we think we should do this?

We can try to respond; this is Shaul from Digital Asset. What we think is that the code bases are very complementary, where OBC gives a very flexible framework with a nice composable architecture. So that's the first part, where it really allows you to test things out. For instance, if you look at the comments, there were great comments by Intel about the choice of consensus mechanism. Diving into the OBC code, we think that you can really disconnect things; at the extreme, you can disconnect the consensus mechanism and put, let's say, a proof-of-work-like mining consensus mechanism under it. So you can really experiment with the framework and try out different approaches, and you can see that with the chaincode expressiveness. If you want to plug in the transaction serialization of a specific use case, you can do that.
On the other hand, what we're suggesting with merging the code bases is really just getting a first instantiation of a network up. It's by no means the last instantiation or the only instantiation. But we're trying to strike a balance between allowing this to be a product that has the outlook of developing a bright future, very different from how it is right now, and also just hitting the ground running: finding an initial proposal that will allow this code to be usable in the near future and that gives a sense of ownership to all of the initial code proposals. So it's more around: let's get something in place that we feel good about, that we can start working from. By no means is this the end goal, and the merge is not just a forced merge to create something; it's really a way to start the discussion from somewhere, through coding and pull requests. So that was the rationale behind it.

Okay, thank you.

Chris, this is Kelly Olson. I was wondering how this system deals...

Kelly, you're extremely faint. Are you on a speakerphone?

Can you hear me better?

That's somewhat better. It's still a little bit faint.

Yeah, I was asking, I'm sorry, my headset's dying: how are malicious Docker images dealt with?

That's an interesting question, because I am right now still at the IBM InterConnect conference in Las Vegas. Yesterday we spent about an hour talking with Docker folks at the expo here. They told us the current version of Docker has the capability for us to really shut down all activity from the container to the host system. The default allows additional I/O and things like that, but through configuration we can tighten that up to really shut the door. So they told us that there shouldn't be any problem in sandboxing any piece of code to run in a container. And they wanted to comment and help us out. And the gentlemen that we talked with are actually in the same town as I am, in Durham, North Carolina.
So that's a very positive answer from them. Certainly we have to do our own investigation as well, but it gives us very good confidence in where the technology is at this point.

Okay, yeah, I think that's something that remains a concern to us, because there have been privilege escalations and arbitrary code executions that have been enacted over the past year. And so it seems like that could compromise the Byzantine fault tolerance of the system if a malicious Docker image goes out to the entire network.

Hi there, this is Igor Lilic from ConsenSys. I just wanted to offer a quick comment. Earlier there was a suggestion that most use cases can be covered by a UTXO-based architecture. I just wanted to challenge that a little bit, because the concept of smart contracts, buzzword or not, resonates with a lot of the enterprise clients. I think there is huge interest from industry in smart contracts and how they can operate. So I just wanted to throw that out there to the group, because we view smart contracts as a very integral part of these discussions.

Well, I would not want to dismiss the notion, and I'm also aware that there are use cases which are excellently addressed by smart contracts, and I'm also enthusiastic about their future uses. It's simply a fact that we can live without them, and we can get quite far without them. And it is also true that smart contracts in general, especially implemented in a generic Turing-complete language, in a Docker container, and so on, in their full flexibility as is suggested, raise very serious concerns for the security and stability of the network. I don't want to imply that we would not be able to solve that. But I think this is an area of research that we are glad to explore with this framework, yet an area which we cannot currently view as a production environment.
And since Digital Asset, still speaking for us, for our users, is aiming to deploy production systems in financial services companies using this technology, we currently do not see it as a viable choice in comparison with a limited execution language whose properties are well tested and whose primitives are already time-tested.

Hi, this is David Voell from JP Morgan. Yeah, I'd just like to comment that we, too, recognize the strength and power of smart contracts. We like the security guarantees that one can present to the users of applications on such a system. But, to Tamas's point, we believe it's really important to get something going this year that we can demonstrate. There was a lot of research done in 2015, and we're looking at 2016 to be the year that we get something into production. And to the extent that we have a potential short-term strategy that can get us there and let us build our applications; and by the way, the infrastructure we're talking about here is all to support the building of applications, and if the applications have specific requirements around privacy and whatnot, then getting something up and running sooner that gives us some of the benefits of a blockchain-type solution, perhaps not the full range of potential of the smart contract space, is still a good place to start. But we would like to get to that latter architecture. So, as the proposal has outlined, I see a lot of benefit in this: we could get something up and going now, and as long as we understand the migration path to a more smart-contract-based architecture longer term, once we're able to prove out all of those points, that makes a lot of sense.
But I'm just curious: have IBM, DAH, and Blockstream discussed a time frame for when you think that second phase, that conversion, will happen? Are we talking 2016, or do we think that's going to be 2017 or 2018? And what kind of framework are we thinking about for that migration to the final stage?

We want to organize a hackathon to speed up that convergence. But we would not want to define the convergence by a point in time, but rather by the point where we are confident of being able to deploy the resulting stack in a production environment, supporting the features that our current stack has, with the non-functional properties of our current stack, or probably better than our current one.

Right, and I want to clarify something here, because this diagram might give a different interpretation. The base of the OBC code is there, and that base is there to support smart contracts. People can still write smart contracts in chaincode using Golang, and soon Java and Node.js. And the smart contract would be sandboxed in a Docker container, or any other virtualization technology that the community would want to plug in because of concerns about Docker container security and so on. But I don't think that will be a concern going forward, given the discussion we had with the Docker folks yesterday; again, though, it's something we need to investigate for our own confidence. So the smart contract support is there, and the UTXO model is there, to support a variety of different scenarios. For certain scenarios, the UTXO model fits quite well and makes things very easy. Certainly, one could write a chaincode to do exactly what UTXO does, but it would not have the same restriction to the set of opcodes that has been proven and tested over the last six years on the Bitcoin network. That's why we want to plug in the DSL interpreter: to be able to leverage that set of proven opcodes that has already been running.
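The argument for a restricted opcode set can be made concrete with a toy stack machine in the spirit of Bitcoin Script. This is our own sketch, not code from any of the candidate code bases: only the opcodes in the switch can run, there are no loops, and anything unrecognized is rejected outright, which is exactly what keeps the attack surface small.

```go
package main

import "fmt"

// run evaluates a tiny fixed-opcode stack language. It returns
// (true, nil) if the script completes without a failed verify,
// (false, nil) on a failed verify, and an error on malformed scripts.
func run(script []string) (bool, error) {
	var stack []int
	for _, op := range script {
		switch op {
		case "OP_1":
			stack = append(stack, 1)
		case "OP_2":
			stack = append(stack, 2)
		case "OP_ADD":
			if len(stack) < 2 {
				return false, fmt.Errorf("stack underflow on %s", op)
			}
			a, b := stack[len(stack)-1], stack[len(stack)-2]
			stack = append(stack[:len(stack)-2], a+b)
		case "OP_EQUALVERIFY":
			if len(stack) < 2 {
				return false, fmt.Errorf("stack underflow on %s", op)
			}
			a, b := stack[len(stack)-1], stack[len(stack)-2]
			stack = stack[:len(stack)-2]
			if a != b {
				return false, nil // verify failed: reject transaction
			}
		default:
			// No escape hatch: unknown opcodes are errors, not extensions.
			return false, fmt.Errorf("unknown opcode %s", op)
		}
	}
	return true, nil
}

func main() {
	ok, _ := run([]string{"OP_1", "OP_2", "OP_ADD",
		"OP_1", "OP_2", "OP_ADD", "OP_EQUALVERIFY"})
	fmt.Println(ok) // true
}
```

Because the interpreter cannot express loops or arbitrary computation, every script terminates and its behavior is fully determined by a small, auditable opcode table, which is the property the speakers are contrasting with a Turing-complete chaincode environment.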
But perhaps we could enhance the diagram a little bit to show that the existing chaincode and smart contracts are still there.

I have a question on the selection of the DAH code base. If the goal is to merge in the sort of security and battle-testedness of Bitcoin and the UTXO model, why was the choice made to go with the DAH software over Blockstream's, which is actually based on the core code base for Bitcoin?

Well, the choice was not as you describe; we actually have an integration with the Blockstream code. We chose to submit a version of our code which is not using it, because Blockstream submitted their own project, and, although we have an integration in-house which is a bit of a derivation of the Blockstream code, we would like to achieve the convergence between these code bases within this foundation. We think that the cryptographic primitives and the DSL that we were speaking about from the Bitcoin network give a strong foundation, but similarly, our higher-level API layer, written in Java, is a much better foundation for business applications than you can find in the very original Bitcoin infrastructure, where you basically have just RPC calls in a very unstructured and homegrown manner. So we think that the combination of an enterprise-friendly architecture for application programmers and an integration of the Bitcoin-originated Blockstream code is the right way forward. We think that integrating this with IBM's very flexible framework enables lots of new use cases, and it could actually be a template for similar integrations. The suggestion that we made is by no means the only way to integrate into that framework; the same way we elaborate the possibility of integrating Blockstream's DSL interpreter, we could elaborate integrating, let's say, Ripple's transaction processor.
I hope that this Hyperledger Foundation proves to be a very healthy lab for these attempts, and may the best integration, in the sense of commercial success, win.

Okay, thank you. The one other question I had, and I looked through the documents but wasn't able to understand it very well, was around privacy, and how those needs move into an off-chain transaction. Could you maybe talk a little bit more about how that works?

Yes. On privacy, the model that we have in OBC is quite similar to what is used in Bitcoin, though the generation of the public and private keys to be used in transactions is different. We describe it in the white paper as well as the protocol spec. If you look at the OBC doc linked from the Hyperledger readme, you will find the documents right there; if you look at the screen sharing, you can see the OBC doc. In there are documents that describe how privacy is managed in OBC. So I can briefly explain a little here, but I want to point out that the documents are there for folks to read.

Basically, since this is a permissioned network, every member of the network, including the clients, whether an application, a device, or a user, and the nodes on the network, has to have a membership registration with the entity we call membership services. What happens is that it generates a certificate we call an enrollment certificate, so each entity has an enrollment certificate. Now imagine that a client has an enrollment certificate. From the enrollment certificate, the client can request what we call transaction certificates. A transaction certificate is what is used to transact on the network, and it is recommended to use a new transaction certificate for each transaction on the network. The transaction certificate is generated in such a way that it contains various information to allow parties with the proper authority.
For example, an auditor or a regulator, with collaboration from the member, would be able to audit the records, meaning the transactions, but other members on the network would not. They would not have the ability to link, and this is what we call linkability, these transaction certificates to an individual. So that ability is taken away from folks on the network and made available only to certain authorities. That's how we support privacy. A transaction on the network is completely anonymous, in that no one can trace it back to the individual except the counterparties in the transaction, and, of course, regulators and auditors.

Hey, this is Mic. At some point, Ben, I'd like to do a deep dive on the membership service and its architecture, with the expectations for it, but I think that would be for another call. The question that I had for you on this one is similar to what Richard was asking earlier, which is: all the documents that I've read on OBC say the current consensus mechanism is PBFT, which is, as you mentioned earlier, kind of state transition. But UTXO is really a log, and consensus around the log. What are you thinking about the kind of mismatch in semantics between the two? Is that something you see as a concern, or are we just really defining the state transition as, you know, extending the log?

So we look at consensus a little bit differently in OBC. To us, the consensus, especially our specific implementation of BFT, is like a transaction ordering service, a timekeeper, if you will. One could treat it as a black box: you send transactions to it, and the output is a list of ordered transactions to execute or to validate. That's very much it. It doesn't matter what is happening in there; the output is a list of transactions for us to validate, in order.
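The "consensus as a black box" idea can be expressed as a tiny plug-in interface in Go. This is a hedged sketch under our own naming, not OBC's actual plug-in API: the only contract is that the plug-in takes submitted transactions and emits an agreed execution order, which is why the same interface could sit in front of PBFT, proof of work, or anything else.

```go
package main

import (
	"fmt"
	"sort"
)

// Tx carries a hypothetical sequence number assigned by the
// ordering service; validators never look inside the black box.
type Tx struct {
	ID  string
	Seq int
}

// Consenter is the whole plug-in contract: batch in, agreed order out.
type Consenter interface {
	Order(txs []Tx) []Tx
}

// SeqOrderer is a stand-in for PBFT or any other plug-in; here the
// agreed order is simply ascending sequence number.
type SeqOrderer struct{}

func (SeqOrderer) Order(txs []Tx) []Tx {
	out := make([]Tx, len(txs))
	copy(out, txs)
	sort.Slice(out, func(i, j int) bool { return out[i].Seq < out[j].Seq })
	return out
}

func main() {
	var c Consenter = SeqOrderer{}
	for _, tx := range c.Order([]Tx{{"b", 2}, {"a", 1}, {"c", 3}}) {
		fmt.Print(tx.ID)
	}
	fmt.Println() // abc
}
```

Because validators only ever see the ordered output, the argument in the transcript follows directly: the same ordering service works whether what gets validated afterwards is a UTXO script or a state-transition chaincode call.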
So because of that, and I heard an echo there, so because of that, it seems to me that it is applicable whether it's a UTXO transaction or a state-based transaction model. Because at the end of the day, what we want is a consensus of the network that tells us: these are the transactions to validate, in this order. And that works whether it's UTXO or not.

Well, we think that, first of all, UTXO here means unreferenced transaction output. So, for the avoidance of doubt, we are not thinking in the context of a cryptocurrency; it's not about unspent, it's about unreferenced. In principle, the transactions form the UTXO graph, which orders them by their content. And we think that the existence of a transaction in the ledger is the state change itself; the ledger basically progresses by transactions being included, and they can only be included in the order implied by the references between them. We think that the consensus mechanism works on a higher level than the individual order implied by this; it basically works on batches, in a similar manner to how it works in Bitcoin, which is basically ordering blocks that contain eventually unrelated or related transactions, themselves ordered by the UTXO set.

Okay, thanks. I guess, looking at the consensus mechanism in OBC, I would have thought the Ripple model or something like that would be a more obvious fit for the kind of state transition approach.

So this is Chris. Let me just take my chair hat off for a moment and weigh in on what we were thinking as we collaborated on this proposal. As you all suggested, the idea here is that we just need to get things moving and provide a framework that we can evolve, as appropriate and as the community chooses, through the discussions on the technical mailing list, through contributions and proposals, and so forth.
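Tamas's point above, that transactions can only be included in the order implied by the references between them, amounts to a dependency check over the reference graph. Here is a small illustrative sketch in Go (our own naming, not either code base): a transaction is admitted to the ledger only once everything it references is already there.

```go
package main

import "fmt"

// RefTx is a hypothetical transaction that names the transactions
// whose outputs it consumes.
type RefTx struct {
	ID   string
	Refs []string
}

// appendInOrder admits transactions only when all of their references
// are already in the ledger, so the admission order is always a valid
// order of the reference graph. It returns the admission order.
func appendInOrder(ledger map[string]bool, txs []RefTx) []string {
	var admitted []string
	progress := true
	for progress {
		progress = false
		for _, tx := range txs {
			if ledger[tx.ID] {
				continue // already in the ledger
			}
			ready := true
			for _, r := range tx.Refs {
				if !ledger[r] {
					ready = false // a referenced tx is still missing
					break
				}
			}
			if ready {
				ledger[tx.ID] = true
				admitted = append(admitted, tx.ID)
				progress = true
			}
		}
	}
	return admitted
}

func main() {
	ledger := map[string]bool{"genesis": true}
	order := appendInOrder(ledger, []RefTx{
		{ID: "t2", Refs: []string{"t1"}}, // depends on t1
		{ID: "t1", Refs: []string{"genesis"}},
	})
	fmt.Println(order) // [t1 t2]
}
```

The consensus plug-in can then order whole batches freely, as described above, because within a batch the reference graph itself pins down which inclusion orders are legal.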
This isn't intended to suggest that this is necessarily the answer to life, the universe, and everything. It was intended just to get us to a point where we could start bringing together some of the pieces that had been proposed. We're not saying that this is exclusive of other potential contributions; in fact, we welcome them. It was simply something we could pull together that we thought would give us a foundation on which we could build going forward and evolve as the community sees fit.

Hi, Chris. This is Dave Voell again, JP Morgan. Yeah, and that's very important for us as well, because we have some technology that we would potentially like to propose. I'm hoping we might actually be able to talk about it next week. Again, we like the idea of this flexible framework where we could test out different consensus models and different ways of executing smart contracts. There are some concerns around Docker containers; a virtual machine may still answer some of those issues. There's a spectrum of pros and cons, and the framework gives us some flexibility to explore it. And, as you state, through proposals and contributions we could evolve this. The one thing, I think, is that by choosing the OBC framework, it does suggest that we would be doing most of our development in Golang, which, personally, we're okay with. But I think that's really the only thing that's, not totally locking us in, but giving us a strong pointer that the strategic development environment is going to be Go. Again, I think we're okay with that. But again, I like how you describe the framework: it can evolve through the contributions. The other thing I just noted: in this proposal, you mention that you're looking for an available, suitable venue in New York City. We'd be happy to host something; depending on when it is, we've got several buildings here in New York City that could potentially host that.
So I just want to put that out there as well. Thank you. Yeah, so putting my chair hat back on, I will just sort of reinforce that and say, just to make everybody crystal clear on this, I fully would expect that anybody is welcome to put forward any proposal and bring it up, and we can discuss it. And I would certainly hope that this thing does evolve and that it isn't necessarily just a point-in-time thing. I think we're all coming from a lot of different perspectives. A lot of us have different use cases in mind, and so we're going to have to figure out how this framework addresses our particular use cases and so forth. And if it doesn't, then I think the right approach is to make a proposal that would help steer it in a direction that does allow you to satisfy your use cases. Any other thoughts? Yeah, so I think that's maybe a good segue, or building on the notion: in order for us to evaluate whether or not this is the right flexible architecture, or whether or not the architecture is suitably flexible for the diversity of interests of the group, it's probably necessary to spend at least a little bit of time specifying what those requirements are. And requirements might be a little bit too specific of a term, but I think in the absence of some kind of governing criteria about what the intentions are, it becomes somewhat arbitrary which decisions we make. So, you know, some higher-level choices involved: whether this is fully permissioned, and in the case that it is permissioned, whether it's a centralized permissioned model and so forth. Some of these kinds of higher-level things. And I think if our organization here can spend maybe the first sprint, as you put it, rather than diving into a consolidation of code bases, instead spending at least a little bit more time being specific about what it is that Hyperledger provides that isn't already addressed in the broader community,
I think that would ultimately be a much more efficient and much more successful process. Help me understand what you meant by that, Les. I'm sorry, what was that? Your last sentence, I didn't really quite parse it. You're saying to help me understand what Hyperledger provides that the rest of the industry, or the rest of the blockchain community, doesn't. I don't think I quite understand what you meant by that. Sure. As a community offering, what is it that our program here is targeting that isn't maybe already satisfied by one of the existing projects, like Blockstream, for example? Okay. Fair enough. So, I think that's a very good point. And one of the things that I had been thinking about, and I chatted with Mike a little bit about this in Slack, is that I think it would be worthwhile if we as a community were to pull together a white paper that did pretty much, I think, what you just described. You know, IBM took a crack at that with their own white paper in the OBC. I wouldn't necessarily be presumptuous enough to say, and again, I'll have my chair hat on here, as an IBMer, I don't think I would want to say that's where we should start. But maybe we could think about collaboratively working towards, you know, harvesting some thoughts from the IBM white paper. I'm sure others, Digital Asset, R3, JP Morgan, Intel, and more, probably also have either internal white papers or things that they've published. And we could maybe start collaborating on that. So, you know, maybe a subproject or a side, parallel project would be to actually start to formulate that collaboratively as a community. I'd be interested in seeing if anybody's interested in serving as the editor and sort of lead for that initiative. Then I'll jump up. Well, first of all, let me ask this.
Does that make sense to people, to start a project where we actually would start to pull together, you know, the sort of high-level use cases and requirements and map out a paper that describes essentially what we're trying to achieve? I think that will help us build consensus in the group. Stefan here from Deutsche Börse. I'm not sure if you understand me, but could we also include in that paper a transition path from the starting point to the end point, a clearly laid out decision path, so that we know what to do and where to go? Yeah, I'll just comment. I do think it makes sense. I'm not ready to volunteer to edit it, but certainly happy to contribute. Any other thoughts? Richard here, R3. So I can't volunteer to lead this, but I do think it's important that it's done. And my expectation is that the process of preparing the paper will force choices. So either it will be far too broad and therefore unhelpful, or it will have to make choices about what kind of threat model this system is designed to engineer against, how many users or what types of users are anticipated to be part of it, what types of agreements or contracts are represented, whatever it is. Driving some agreement, which will not be easy at all but needs to be done, driving some agreement on what the scope is. I suspect it will have two impacts. One is that it will define quite obviously what the platform is not for, which then leads to the discussion of, okay, there's likely to be a plurality of platforms aiming at different use cases. And it will then also clearly lead to easier decision making on some technical decisions. So I think it's very important. Thanks, Richard. So maybe let's put it this way. Since it doesn't seem like everybody's... I think everybody's sort of looking at this and saying, oh my God, I have a day job; taking on the role of editor might be a bit much. Why don't we do this?
Why don't we just sort of ask for people who are interested in participating in producing this sort of requirements slash white paper to help us shape exactly whatever it is that we think we're building. Hi, this is David. I would definitely sign up for that exercise. Thanks, David. Anyone else? Hi, Stefan. I would sign up too. Stefan, thank you. This is Igor Lilic from ConsenSys. I can also volunteer some time. I apologize, there was a crackle. Who was that? This is Igor Lilic from ConsenSys. I can volunteer some time. Okay. So I see, also in the chat, that Richard, Kelly, and Tamas are also interested. And again, I mean, this isn't an exclusive list. It's an inclusive list. Obviously, you know, if others want to... I think that's a good start. And maybe, you know, from amongst those, somebody will emerge and be willing to sort of take the pen, if you will, and help to herd the cats around that particular effort. I'm going to say everybody now. Let's take a tribute. Okay, well, this is great. Right, so let's do that. I'll create a Google Doc and link it through the wiki. For those... I don't know if I mentioned earlier, but I actually created a wiki, you know, that we can use as a project to collaborate. I'm hoping to record the minutes of all of the TSC meetings, links to project proposals, and various other collateral that isn't necessarily purely code, so that we all have something that we can flesh out. And again, it's a wiki, so anybody can go in and edit and contribute as they see fit. Okay. So back to the proposal. Let me just see if I can't get a sense for whether people are comfortable yet or they still want to think about this. You know, again, just as chair, I'd really like to see this project get moving and get beyond just the sort of requirements-gathering phase.
I do think that there's work that we can do to start setting things up without necessarily getting completely locked and loaded. There's a bit of work that's going to have to get done to build out a CI pipeline and so forth, and that obviously will have certain dependencies on which language or languages we're using, certainly at least from a test perspective and whatnot. But we have lots of things that we need to get rolling on, and oftentimes the best way to kick something off is to pull everybody together and start both socializing and getting to know one another. If we can get face-to-face in particular for an initial sprint, I think that will go a long way toward facilitating the distributed nature of the project, just by virtue of the fact that people start to learn who we all respectively are, what our skills are and so forth, and people can start staking out aspects of this project going forward. So, ideally, we would have a particular proposal that we could start working on. Now, you could say we could call this an experiment: going down an initial sprint of working on the joint DAH and IBM proposal. If that doesn't seem to be working, we can just decide, well, okay, that was an interesting experiment, we learned something, and maybe we need to try another experiment. But personally, as chair, I'd like to get us all moving as opposed to spending all our time on conference calls. So, some thoughts on that? I mean, maybe we could just go around the forum and get perspective from each of the members of the TSC and just start anywhere. Hi, Chris. This is Pardha from DTCC.
So I think I agree with your idea; I don't see any reason why these two cannot go in parallel, while, you know, some people are working on the white paper and another team is working to bring together and work on this proposal that you guys presented today. Those are my thoughts. Thank you. Thoughts from others? Yeah, this is Mike. I'm generally in favor of moving ahead very quickly on this. I think there are two concerns. One is I'd like to see what step two is, something that will encourage us to ensure the flexibility and modularity of the architecture that we come up with, so that it's not a single solution. Right. And, you know, my other concern obviously is the one that I talked about on the mailing list, which is that moving fast is both a good and a bad thing. It's good because it forces us to consider concrete problems. It also can lead to the ossification of architectures too early. So I guess my recommendation would be: let's move ahead, but let's make sure we have a clear idea of what step two is that would apply appropriate pressure to ensure the flexibility of the architecture. I guess I'll echo that. The thought going through my head is that this document ultimately captures the vision and mission, I suppose, and ultimately the project definition, which is probably key for me. I'm probably relatively less interested in the code until that's done, although obviously I'll be paying attention. And the consequence of that is, I guess, that we all have to be ready for the possibility that there's a quite significant change in the direction of the code when that document emerges. And the only way that would work well is if the document itself is grounded in quite real use cases.
So we need to make sure there's something in place that doesn't turn it into a shopping list or an engineering wish list, but where the decisions implicit in that document are there because there's either a potential user speaking or there's evidence from deployed systems, just to keep ourselves honest on this. This is Stan Leverang from CME. I want to second Mike's comments: it makes perfect sense to start quickly, but if we're going to start before we have this white paper in place, we should really focus on making sure that the architecture is modular enough to allow for changing direction. All right, thank you. This is David Vohl, yeah. And I think that last point is very good as well. You know, we should move quickly and then just check and make sure everyone is comfortable with the flexibility of the architecture. And if we need to make some changes, then that will come out. But the way you described it, we can look at it as an experiment and not be afraid; there's nothing wrong with failing as long as you fail quickly. So it's nice to get going. And we don't want to get stuck in analysis paralysis as well, right? Because we could go back and forth forever. So getting out there, getting something, really testing it, failing quickly and then moving on, I think that's the direction we would support. Thanks, David. Others? I'm pretty sure I haven't heard from all 11. Yeah, this is again Shaul from Digital Asset. I want to second that point once again. Our main concern was around the social aspect of making sure that this foundation gets to know each other face to face very quickly, and we really want to balance the two things of how to make sure that we're not siloed into a specific solution and how to allow comments. I saw the great comments yesterday.
So, how do you put in different consensus mechanisms after we gather requirements? I know we're glad to give the requirements for our client base, but how do we make it modular in order to allow requirements gathering for other use cases, or adding different kinds of transactions as different chaincode? But we want to have this as a stack where at least one instantiation gets into production or pilots in 2016, because that'll keep us all very focused on clients' needs as well. And once we have the user committees as well, I think we'll reap a lot of benefits from having a first instantiation. But by no means is it the last instantiation. It really is just to get us face to face, collaborating by code and collaborating by wiki, and less by calls and design documents. Thanks, Shaul. Others? Okay. So, I guess I think I'm hearing rough consensus. Is there anybody who would object to sort of moving forward with this fail-fast experiment approach, where we would both be starting to work on a modular and extensible code base, driving an experiment to see if we can integrate the UTXO transaction model into the OBC, and in parallel with that, also be working on a white paper slash requirements document that outlines precisely where we think we want to go initially and what we think we're going to be building, and identifies and articulates the use cases? And we can sort of... again, I would hope that we'd all be paying attention to both so that we're steering the ship in the right direction. But, you know, to David's point, I think it's important that we not spend all our time in analysis paralysis, and I do agree with Shaul that getting together and actually starting to work on code and collaboratively working on the requirements will help bring the team together and actually get this party started. So, let me put it this way.
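The modularity being asked for here, swapping consensus mechanisms in and out without touching the rest of the stack, is commonly achieved with a plugin interface. A minimal sketch in Go, under the assumption of a batch-ordering consensus role as discussed earlier on the call; every name here is hypothetical and is not the actual OBC API:

```go
package main

import "fmt"

// Consensus is a hypothetical pluggable interface: the framework hands a
// batch of raw transactions to whichever plugin is configured and gets
// back the agreed order.
type Consensus interface {
	Name() string
	Order(batch [][]byte) [][]byte
}

// plugins is a registry of available consensus mechanisms, keyed by name,
// so the mechanism can be selected by configuration rather than compiled in.
var plugins = map[string]Consensus{}

// Register makes a consensus plugin selectable by name.
func Register(c Consensus) { plugins[c.Name()] = c }

// noop is a trivial plugin that keeps arrival order: a stand-in for PBFT,
// Raft, or any other mechanism a contributor might propose.
type noop struct{}

func (noop) Name() string                  { return "noop" }
func (noop) Order(batch [][]byte) [][]byte { return batch }

func main() {
	Register(noop{})
	c := plugins["noop"] // in practice this key would come from config
	out := c.Order([][]byte{[]byte("tx1"), []byte("tx2")})
	fmt.Println(c.Name(), len(out)) // prints: noop 2
}
```

The design point is that requirements gathered later for other use cases would only add new implementations of the interface, not changes to the framework around it.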
Is there anybody who would object to us proceeding the way we've roughly been describing here? Just with the emphasis that both of these are important and that the requirements work is something that will set us up for the evolution of the project. Yes. Or my favorite. Yeah, I would agree with that. So, yes. I'm just making sure that I understand what you're actually proposing, Chris. So, yes. I think we're nearly there. And again, I do want to emphasize that I think we all share a role and responsibility in helping to move this forward. And please do, if you think that we're moving in the wrong direction, speak up and say so, and then offer up some ideas about how we might either course correct or start to think about maybe calling this an experiment that we need to stick a fork in. But, you know, the first step is always the scariest, but I'm very hopeful. I mean, I think we've demonstrated so far that, you know, we're all coming from different places, and yet, by the same token, I don't think we've yet gotten to a situation where there's been any real tension. So that's always a positive sign. All right, I think we have a plan. We have about 10 minutes left. There are a few things... I'm sorry, I'm pulling my thoughts together here. There are a few things, just from an administrative perspective, that I'd just like to get out. Todd and Mike, you know, I set up the wiki, and one of the things that I'd like to have is the minutes posted. And I'd just like to also make sure that I'm not making a false assumption that Todd will be continuing to take notes. If that isn't the case, then I guess we'll have to assign a secretary or maybe a rotating scribe role. But you've been doing a great job so far.
I just wanted to make sure I understand what your role is going forward, and whether or not, in addition to Mike stepping back, you're also going to be stepping back and we should be looking for a scribe or a secretary, someone from the LF. Well, Chris, we'll always have you covered. Our ability to document some of the technical detail of the discussion, as you've probably noticed in some of the notes already, is a bit limited. You know, if we start getting into cryptographic hash discussions, it's going to be lightweight. But we will do our best. And we do them through Google Docs, so if anybody has any updates or changes, just let us know, whether it's a comment in Google Docs or just edit it yourself. All right. This is Mike; I probably should have mentioned that at the beginning. Thanks, Mike. So, yeah, if I could ask Todd, if you could just link the three or four prior project minutes into the wiki, I'd appreciate it. Sure thing. No problem. And I'm probably also going to be bringing some additional people from the LF into the project to help run things. So you may see some new names popping up to help out on sort of the leadership side and just the organizational and operational side as well. So, thank you. And one question, Chris. I did bring up, or raise, the discussion last week about doing a face-to-face. I know it came up earlier on this call: I think Dave potentially offered space, and I know DTCC had offered space. But did we want to spend any time today planning out where that would be? That was going to be the final piece of the puzzle. You know, I think David and J.P. Morgan Chase kindly offered to host this. I guess the when is obviously going to be important. I'm trying to figure out when we might get this done. I guess there's a possibility we could do something the week of this March.
And again, I think a week-long exercise is probably what we should be looking to do. Is there, in the Linux Foundation, Mike, any kind of policy about giving certain notice before having a face-to-face, to make sure that everybody has an opportunity to get all the requisite approvals and so forth? Is there any policy like that? I know a lot of... We don't have a policy; it's just a matter of what it will take to get a quorum. You know, for some projects you can throw together a meeting in three days in San Francisco and get just about 80% of the community. For this group, you know, it's a couple weeks out or something. I think that's reasonable, but if anybody objects to that, now's the time to raise it. So next week is the 29th. The week after that is the 6th. Then we have the 13th. Is the 6th within the realm of possibility? Is there anybody who couldn't do the 6th? Anybody? I mean, again, it may be that those of us participating in the TSC may not necessarily be the ones going; maybe we're getting some of our engineers together and camping out. So there's that. You know, is the week of the 6th something, David, that you guys could host? Or should we be looking at maybe the 13th? That gives you an opportunity to figure out whether you have the space, and for us, collectively, to figure out over the course of the next few days... and again, I would hope that we could maybe do this by the middle of the week, before the call next Thursday: figure out how many people we think we might be sending. Maybe we should do that.
Maybe we should do the 13th, if possible, and then try quickly, within the first couple days of next week, to figure out and have people say who they think would be attending. Mike or Todd, if we could maybe put out some sort of a survey or something, just so that David and team can arrange the logistics suitable for that number of individuals, and so that we can all start planning travel and so forth. For us, Deutsche Börse, this week doesn't work. The week after... Yeah. Sorry. Yeah, but if it were the 21st, then we might be available. I can't do that. I'm not sure. Was that the week of the 6th not working? Oh, the week of the 6th and the 13th don't work; the week after, our availability could start again. Okay. The first is the Monday, I think. Okay, so does the week of the 6th or the 13th work, either of them? It's difficult. Okay. They do work or they don't work? Yeah, I guess I'm confused. Yeah, he's saying the week of the 6th and 13th are good. Oh, okay. They're not good. Not good, okay. That's what I thought I heard first. Okay. That's unfortunate. All right. You're on a speakerphone; maybe you could say it again so we can check that we got that right. Yeah. For us, those two weeks don't work. The whole team is basically off during those two weeks. Oh, the 6th and the 13th are not good. Yeah, they are not good. Those two weeks are not good. Okay, I heard you wrong. Okay. After that is good, though, so starting the week of the 20th. That looks better, yeah. But if everybody else can do it, we'll just see what we can catch up on afterwards. Is there anyone else for whom the week of the 13th wouldn't work? I almost hate to put it off yet another week. Chris, perhaps we can just do a Doodle for this, and each company will put in its availability, because it'll be hard to find a time that works for everyone, and we'll see who can do what time.
So we'll probably be bringing some people, so we'll need a few days to... well, we need a few hours to take a look. Yeah, we can set up a Doodle poll and then just try to see who can do it. All right, so let's do that. Let's have a Doodle poll; let's just put in the three weeks, the 6th, 13th, and 20th, and let's see who we can get, and then we'll go with wherever we have the largest turnout, if you will. Okay, I think that's good, and let's actually try to close that out. Chris, just a quick question. Is this face-to-face only open to TSC members, or is this for the broader community? No, it's open to anyone. The intention is just to get the party rolling and actually have engineers sit down and start pulling things together. I mean, I think there's work to be done on the code base itself. I think there's work to be done setting up continuous integration and delivery pipelines with Travis and whatever. And then there's work to be done on the use cases and the white paper. So I think that really anyone is welcome to attend. I think we have to figure out who's going to come, and we'll probably need to have a registration and cut it off at some point, because we can't just have an unlimited thing. So I just want to move on this quickly. So maybe if we could have the Doodle poll responses all back by Tuesday of next week, then we can start planning the logistics and opening up a registration. Thank you. I think we're at the end of our time. So I want to thank everyone. Thank everyone again for your support. I hope that we can all collectively be successful at this. So I think Todd or Mike is going to send out the Doodle poll, and we'll get moving. Thanks, everyone. Thanks, Chris. Yep, thanks. Thank you. Thank you.