Yeah, we're only missing Emanuel and Satoshi at this point on the list of people who said they'd join. Let me find my agenda again. Thanks everyone for joining, and I'll apologize in advance for my lack of voice. I'm just getting over a week-long bout of whatever it was that I caught in Vegas, and apparently everybody else caught it as well in Vegas last week at the IBM conference. So thanks and welcome everyone. I'm not sure if we have a completely full agenda; we'll see how this all plays out. There are a few conversations I think need to be had. On the agenda thus far, we have a discussion about the face-to-face. Todd, maybe you can just recap the results of the Doodle poll, and we can figure out what the logistics are and nail that down. And then we have JPMorgan — David, are you going to go through and present the JPMorgan proposed contribution? And then we'll have a discussion of the project proposal template. I'd like to add to that the notion of a life cycle. This is something that has been discussed a little bit on the mailing list, but I'd like to actually start to solidify it so that we have a formal life cycle process for this project. And then there are a number of things that are ongoing. Actually, we should add the requirements to this: I'm still looking for somebody to lead the requirements workgroup, to gather up the use cases and requirements so that we can start putting them through a triage process to understand whether they're in or out, and whether they're high, medium, or low in terms of priority. And then there's the white paper, which, again, I apologize for, but because I've been flat on my back I haven't really had a chance to turn around and get an empty white paper posted that we could start noodling on.
And then there's the code of conduct. Again, I apologize, but Mick and I have been working behind the scenes and will try to get something to the list that we can all start discussing in short order. We can have a brief discussion about some of those things. Are there any other agenda topics for this morning? If you want to speak up, speak up; if not, you can type it into the chat, which is also acceptable. Hearing none, I guess we can proceed. So maybe, Todd, if you could recap the Doodle poll results for us, and let's start thinking about the dates and logistics of the face-to-face. Sure thing. So we sent out a Doodle poll last week looking at the upcoming three weeks as potentials for a face-to-face. Based on the responses we've received to this point, the overwhelming majority seemed to favor the week of March 21st. So at this point we would be looking at a three-and-a-half-day face-to-face: Tuesday, Wednesday, Thursday, and then a half day on Friday. That way it's a little bit easier on anyone that needs to travel in or out. As of late yesterday afternoon, we're searching for space at one of the JPMorgan offices in or near Manhattan. Dave, if you have an update on that, please feel free to chime in, but I don't want to put you on the spot. Otherwise, we'll definitely follow up with this group shortly and nail down firmer logistics so people can start making travel plans. Yeah, Todd. So I was able to talk to our central reservations, and in fact we have space available for those dates. We're looking at leveraging our MetroTech Center in Brooklyn. It's just two or three stops on the 2 train from Wall Street, and it can hold at least 150 or so.
If we have something in the range of 100 people attending, then we can have what they call a cluster type of arrangement, but if it's going to be something above that, then it's going to be more of a classroom style. So we can work out some of the logistics around that, but it looks like the facility is available, and it's a really nice setup, so I think it should work out fine. Excellent, that's fantastic. So Dave, I will connect with you later today to nail some of this down, and then we'll be back in touch with everyone on the TSC list with the firmed-up logistics so people can solidify travel plans and all of that. Sounds good. Thanks. Thanks to Todd for running the poll and to JPMorgan for offering to host. This is really good news. Just on the face-to-face, again, I want to emphasize this is going to be very much a working meeting. I expect there to be an awful lot of code written. 150 people would be awesome; I'm a little bit worried about cat-herding that many people. I don't want to scare people away, but by the same token, I do think it's important that we get together the engineers who are going to be actually working on this. I think there's also going to be opportunity at our face-to-face to work on the requirements and potentially on the white paper. So there are two or three different things that we could be off collaborating on, and people who are thinking of coming should think about what their role should be and how they intend to engage and participate, probably around those three things. If there are other activities that people think we should be looking at doing as part of the face-to-face, we can discuss that now. And if you're not speaking, could you please go on mute? That would be very helpful. Thank you. All right, I'm not hearing anything, so I think that's going to be it.
So we'll work on getting the actual logistics set up so people can make travel plans, look at their hotels, and so forth. We'll start building, if you will, an agenda and a set of actions. I'm hoping that for the engineering we can have a set of items in a backlog to be picked up and worked on collaboratively. So we have a little bit of work to do, and I think we can use next week's call to try to finalize some of that. Thank you. Next up on the agenda, then, is JPMorgan's proposed contribution. Dave, do you want to take us through that? Yeah, so actually Will Martino — you can see him in the attendees list there — will be presenting the deck. If you could give him screen-share capability. Great, yeah. So just a little bit of background; I'll kick off and then hand it off to Will. Can everyone see the deck before you start? I can, thank you. All right, great, thank you. So Will is going to be taking us through some distributed ledger technology that we've been working on at JPMorgan. We've been doing a couple of different projects, and this one is particularly interesting — it has some very interesting properties. We're calling it Juno. There's also a smart contract language component called Hopper, and another component called Masala that handles sort of an EVM type of technology that was tested with this stack as well. This is something the group started on around last September. It's very well suited to high-volume use cases. A lot of the existing blockchain technology we've seen is not so good at higher-volume transactions, and some of the use cases we're looking at are in the area of 200, 300, 400, 500 transactions per second.
So this design, which Will will give you a high-level overview of, shows how we could approach that, along with some of the design decisions and tradeoffs required to achieve that level of volume. I also just want to point out that it's not finished yet; it's still a work in progress. But we've been having some very good results from our testing, and in fact a stable version of the code has just been posted to GitHub, so people can download it and take a look at it. As we discuss our future state, components, and plug-ins, we think this is an interesting technology that people would be interested in learning a little bit more about — again, for specific types of use cases. So with that, I'll hand it over to you, Will. Great. Thank you, David. I'm Will Martino. I've been working on the Juno project at JPM since around September; we were experimenting with other systems before that. For Juno, we began by going back to first principles with regard to what we actually need from a distributed ledger with a smart contract system. One of our beliefs is that we don't actually need the capacity for anonymous participation — in fact, it's something we really don't want, at least for enterprise applications. Because of that, we looked at alternative methods besides proof of work and proof of stake for finding consensus around a globally shared view of the world, and we ended up looking at Paxos, BFT protocols, Raft, and a number of other things. For Juno, we went with a version of Raft. Why Raft? One of the big reasons is that it was designed to be simple and understandable, which I have found to be very much true; Paxos is pretty tricky to understand. It also provides us with consensus around inputs, rather than around outputs.
So if you give it a deterministic state machine, it will give you a global order for the messages that go into it, and if the state machine is deterministic, that gives you guarantees about what the state is. We'll get more into that a bit later. Raft has also been formally proven; we were working on converting the Verdi Raft implementation from Coq over to Haskell and then actually ran into Tangaroa, which we'll get into on the next slide. The other things we really liked about going with Raft are that the single leader, which routes all traffic, buys us performance and system visibility, and also no forking. It's also explicitly and effectively write-behind, so slow nodes don't slow down the entire system: you come to consensus around the inputs and the ordering of the messages, but you don't come to consensus around what the outputs are, so a single slow node doesn't have to slow everything else down. For BFT-hardening, we went with Tangaroa, a BFT-hardened version of Raft developed by Copeland and Zhong at Stanford. It's a complete Raft implementation with BFT-hardening on top. We say BFT-hardened because it's not true BFT — it is BFT around consensus only. It has lazy votes for preventing election clutter, and also cryptographic signatures on basically all the messages. On BFT-hardened versus BFT: Juno is designed to be used on a non-public network, and full BFT is to some extent an application-specific undertaking. Tangaroa implements just the Byzantine fault tolerance around consensus, and application-specific fault tolerance can be implemented separately afterwards if needed. All the messages — between the nodes, between the client and the cluster, between the admin and the cluster — are signed. Our internal version of Tangaroa hasn't been open-sourced yet; we just open-sourced the last stable demo version.
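The point about deterministic state machines is worth making concrete: if consensus gives every node the same ordered inputs, and the state machine is deterministic, every node ends up in the same state. Here's a minimal Python sketch of that guarantee (the Juno codebase itself is Haskell; the account/delta log here is purely illustrative):

```python
# If every replica applies the same ordered log of inputs to the same
# deterministic state machine, all replicas compute the same state.
# That is the property Raft-style consensus on *inputs* provides.

def apply_log(initial_state, log):
    """Apply an ordered log of (account, delta) commands deterministically."""
    state = dict(initial_state)
    for account, delta in log:
        state[account] = state.get(account, 0) + delta
    return state

log = [("alice", -10), ("bob", +10), ("bob", -3), ("carol", +3)]

# Four "replicas" stepping through the same log from the same start...
replicas = [apply_log({"alice": 100}, log) for _ in range(4)]

# ...arrive at the same emergent application state.
assert all(r == replicas[0] for r in replicas)
```

Deterministic inputs equal deterministic outputs, which is why consensus on the inputs alone is enough.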
That version also fixes significant issues with the Tangaroa code base and protocol — at what point fixes to a protocol become a new, enhanced protocol, I'm not quite sure, but we'll figure that out as time goes along. With regard to performance, Juno offers considerable benefits over other solutions we've been looking at. The unoptimized version of Juno that we have running on four nodes on a MacBook Pro has a throughput of about 500 messages a second for consensus, with consensus latency of around 2 milliseconds. That is the time it takes for an entry received by the leader to be fully replicated and then committed to the log itself. The throughput of the Hopper language, which we'll be getting into later, seems to be about 1500 transfers a second; that's completely unoptimized, and I think there's a lot of room for improvement there. It's also worth noting that the version of Juno that's been open-sourced on GitHub — the links are at the end of this presentation — will show performance about 10 times slower than the numbers listed here. This is mostly due to slow logging and synchronous program application in that version of the system. These issues have been fixed in our internal branch, but we ended up pushing the release a little bit sooner than we wanted, and as such we couldn't get the really good version that we have internally out the door just yet; we'll make sure it's working before we actually put it out there. We can comfortably forecast a two-to-three-times performance increase; there are some basic optimizations we haven't even touched yet, like GC tuning, batching log entries, and message-layer configuration. In production, latency is a factor of the longest network latency to a quorum node, meaning that you need a majority of replications before the leader can step forward; whatever the longest time it takes for that evidence to come in is effectively the latency imposed by the nodes.
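That quorum-latency claim can be sketched in a few lines: the leader commits once a majority has replicated the entry, so commit latency is the slowest latency among the *fastest* majority, not the slowest node. A hedged Python illustration (not Juno code; latencies are made-up numbers):

```python
# Commit latency in a leader/quorum system: the leader needs acks from a
# majority of the cluster, so the slowest node in the *fastest* majority
# sets the pace -- stragglers beyond the quorum don't matter.

def commit_latency(follower_latencies_ms, leader_latency_ms=0.0):
    """Latency until a majority (including the leader) has the entry."""
    n = len(follower_latencies_ms) + 1          # followers plus the leader
    majority = n // 2 + 1
    acks = sorted([leader_latency_ms] + follower_latencies_ms)
    return acks[majority - 1]                   # moment the quorum is reached

# 5-node cluster: one 500 ms straggler does not slow down commits,
# because the leader plus the two fastest followers form a majority.
assert commit_latency([1.0, 2.0, 3.0, 500.0]) == 2.0
```

This is also why node count has little effect on latency, as the next section notes: the quorum's network round-trip dominates everything else.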
The node count doesn't have a huge impact on throughput or on latency, really, mostly because latency is mostly a function of the network latency to hit the majority, and everything else gets washed out because the servers can run through the proofs that Tangaroa requires pretty quickly. Some future work is to make application execution fully write-behind so that long-running programs can't block consensus. The system is also very amenable to a gossip protocol, which we haven't added yet. Some novel system properties: Raft offers a general state machine stepper — it doesn't need to be a deterministic state machine. Juno supports non-deterministic state machines, but it ultimately prefers deterministic ones, as they underwrite the system's integrity: Juno, Raft, and Tangaroa guarantee deterministic ordering of the inputs that are going in, and paired with a deterministic state machine, that implies deterministic inputs equal deterministic outputs. The output of the deterministic state machine is a diff, and this allows for very cheap verification of the emergent application state. Hopper does this; we'll get into it on a later slide. The stack can support the EVM, although that integration isn't wired in currently; the first thing it supported was Scheme. Actually, it would be quite easy to integrate with external non-deterministic state machines if you wanted to — you can think of piping the inputs into a Python REPL, for example. As for the log itself, you can think of it like a blockchain with single-transaction blocks, and there's no forking at all in the system, because the leader is giving an absolute ordering to everything. The entire log is incrementally hashed, so every transaction is hashed against the prior entry in the ledger.
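The incrementally hashed log described above — a blockchain with single-transaction blocks — can be sketched as follows. This is an illustrative Python model, not Juno's actual (Haskell) log structure:

```python
import hashlib

# An incrementally hashed log: each entry's hash covers the previous
# entry's hash plus the transaction bytes, so the log behaves like a
# chain of single-transaction blocks -- any change to an earlier entry
# invalidates every hash after it.

GENESIS = b"\x00" * 32

def append(log, tx: bytes):
    """Append a transaction, chaining its hash to the prior entry."""
    prev_hash = log[-1][1] if log else GENESIS
    entry_hash = hashlib.sha256(prev_hash + tx).digest()
    log.append((tx, entry_hash))
    return log

log = []
for tx in [b"alice->bob:10", b"bob->carol:3"]:
    append(log, tx)

# The second entry's hash is chained to the first.
assert log[1][1] == hashlib.sha256(log[0][1] + b"bob->carol:3").digest()

# A tampered first transaction would produce a different chain head.
tampered = hashlib.sha256(GENESIS + b"alice->bob:99").digest()
assert tampered != log[0][1]
```

With a single leader assigning absolute order, no fork can ever appear in such a chain; verification is just replaying the hashes.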
Some future work is using the deterministic state machine and the diff of the state: we are currently looking at how to offer a verifiable application state. We're thinking of doing the incremental hashing at the state-diff level as well as the log level, so that an individual diff would verify the entire state up to that point. We have a few different options in mind and haven't decided which way to go yet. The simple one is that every so often — every minute, every half minute, every time you need it — a checkpoint message goes through the system, and the system comes to consensus around what the application state, or rather the hash of it, should be. Other ways for it to work are that the leader checks it, or that we include a hash of the incrementally hashed state-diff from four or five steps behind in the append-entries response. We're considering a few options and we're not entirely sure which is the right answer. As for smart contracts, we are also developing our own language. Stuart began by implementing an Ethereum-like VM in Haskell, and we used that as a way of learning the things that we really like in that system and also realizing things we would have liked it to have. The Hopper language is currently being headed by Brian and Carter Schonwald; it's also been open-sourced at the link at the end. It's designed as a simple core that can sit under one of two surface languages; both surface languages are on the slide right now. Currently, transfer is a primitive in the language, and the transfer syntax has been made quite pretty in the first one, the second being the more traditional Lisp-like language. Currently, Hopper is little more than a lambda calculus with a bunch of stuff coming down the pipeline; it's not quite done, it's still in its early stages. It does currently have very informative error messages, for example if an account doesn't exist or has an insufficient balance.
It's inherently transactional, which is really a wonderful thing. There's no real control flow or observation in the current state: you just send in a program, Hopper tries to run it, and if it works, it works; if it doesn't, it gives you the error. It's fully deterministic: if it succeeds, the output is a diff, and if it fails, it gives you a legible error. As for future work, we have a lot of ideas, and we're working very hard on it. Ownership models that can be expressed in the language via linear types — linear types are a wonderful thing because they allow you to move runtime errors up to compile time, so that you can't double-spend, and you can't write a program that accidentally creates or destroys money. A true module system, likely to be dependently typed as well. And an execution cost model — something similar to gas, but more simplified. As for persistence, right now we have a few different queries that you can run against the ledger; some of these are on the internal branch, some on the external. Simple ones, such as a range query, where you ask for transactions before, after, or between some log indexes. Clean queries, which have to undergo full consensus — this relates to an issue that I believe was found and written up in a blog post, wherein you could get a dirty read, because a read actually has to flow through the entire consensus model to make sure it's in a good state. And also just a plain dirty query, where a local node responds to you instantly, and there might be some in-flight log entries that impact the read. We also have some ideas for other, more interesting types of queries — something like a poll query, where you ask a certain number of nodes what they think a given entry is and see what they respond with.
And if that number of nodes is one, then it's a dirty query, and if that number of nodes is a quorum, then we think it's tantamount to a clean query. We're not entirely sure; we're still thinking about it. The data is in a key-value format, so it's quite easy to put into a key-value store. Our current integration for log storage is SQLite, and adding others isn't hard; it's just that we're currently on SQLite. As for privacy, we haven't fully figured it out. The wonderful thing we would all want is the ability for everything to be private while still having conservation of money and everything else we need to protect against double-spends. We haven't cracked that nut yet. I personally don't think that technology exists yet, at least at the crypto level, but I hope it does at some point soon. Right now, we can persist non-transactional encrypted data alongside transactional code, which implies external encryption shared between the two transacting parties outside of the system. Future work would allow us to have local-node encryption and decryption, so that only some nodes would be able to see some portions of the ledger, and other nodes wouldn't be able to see other portions. This, to a certain extent, destroys conservation of money — making sure that a double-spend doesn't occur. We haven't cracked that nut yet. As for local-node encrypted smart contract execution — this is sort of running Hopper in non-transactional mode — it's an even less protected version of the thing I was just discussing: we just decrypt and compute, and there's no real assurance. At that point, Juno would just be an encrypted messaging system that gives you absolute ordering, but outside of that it doesn't give you a ton of guarantees. As for community contributions, we recently open-sourced Juno, Masala, and Hopper.
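The query tiers described a moment ago — dirty read from one node, poll query across k nodes, quorum read approximating a clean query — can be sketched like this. The names and shapes are ours for illustration, not Juno's API:

```python
# Sketch of read tiers against replicated key-value state: asking one
# node is a "dirty" read (it may lag), asking k nodes and taking the
# majority answer approaches a clean read as k approaches a quorum.

def poll_read(nodes, key, k):
    """Ask the first k nodes for their local value; return the majority answer."""
    answers = [node.get(key) for node in nodes[:k]]
    return max(set(answers), key=answers.count)

# Three replicas; the third has not yet applied an in-flight log entry.
nodes = [{"alice": 90}, {"alice": 90}, {"alice": 100}]

assert poll_read(nodes, "alice", 1) == 90   # dirty: whatever that one node says
assert poll_read(nodes, "alice", 3) == 90   # quorum: the majority view wins
```

A true clean query would instead route the read through consensus itself, which is the more expensive option the speaker describes.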
Juno managed to hit the front page of Hacker News yesterday and now has a bunch of attention; that repo is under my name. Masala has been open-sourced by Stuart Popejoy — that's the EVM, the Ethereum VM — and Hopper has been open-sourced as well. I believe that is the end; I would welcome any questions anyone has. Very good. Thank you. Yes. Yes, please. If you're not speaking to the group, could you please go on mute for a second? Yes. Hey, this is Deutsche Börse. Just a quick question: why are you guys using Haskell as the programming language? [Crosstalk from an unmuted line: "If we can do it either one day before, that would work, or if we can do it like this in the morning, that would work."] Go ahead and mute, everyone. It was a phone caller, so I couldn't individually mute them. So if you have a question, please come off mute. Thanks. We may have to stay in that particular mode, at least until we get the right sort of behavior. Thanks, Todd. So yeah, as Todd said, if you need to speak, please come off mute. So I want to thank JPMorgan. I think there was a question on why Haskell — yes, Deutsche Börse asked that question. David or Will? Yeah, we've got this double-mute thing. Will, if you want to respond to that one, you have to hit the little phone up on the thing, but I'll take a crack at it. For one thing, we've got some really bright Haskell developers, and they're very passionate about the benefits and all that. And Will, if you're there, jump in here. Yeah. So the main reason we went with Haskell — I mean, we already had an internal Haskell team that was working on a few other projects — but the main reason is that the Tangaroa paper was paired with a Haskell implementation, so we were able to get off the ground quite quickly. The demo that you see on the open-source repo, we got that up and running...
Actually, I got that up and running within about a week of running into the code base. I was able to do that because I had experience working in Haskell on Raft-specific applications before. But that's one of the main reasons. The other reason is that for language engineering, especially for things like Hopper, Haskell is just the best: the type system helps you the most, and it's basically the most advanced tool for doing things like that. It also gives us a lot of safety, and honestly, it really helps with hiring — we're able to find really sharp developers who want to work in Haskell that we otherwise probably wouldn't be able to hire. We have a very passionate Haskell team at JPMorgan. Any other questions? Got a different question for you: what are the characteristics of the deployment under test? I know you're running a bunch of fairly high transaction counts through — is this a local or global deployment? Right now, it is local. We have done most of our testing on a single system; unfortunately, we don't have the ability to do a geo-distributed or even really a multi-server setup quite yet. We should have that ability in about a week, a little bit less — this is mostly down to internal things at JPM — but we hope to get this on AWS now that it's open source, test on there, and do some geo-testing as well. Great, it looks like great stuff. Anyone else? I have a couple of questions. I'm not sure whether you mentioned this or not: what language is it implemented in? And also, you talk about Raft — I've read about that — but how would you characterize the deployment environment that Raft would be good for? Okay. So first off, all of the things we discussed today have been implemented in Haskell; you can go and see all the code online with the links at the end of the slides.
And the thing that Juno is designed for is really specifically enterprise applications that aren't on the public-facing Internet. Bitcoin-derived technologies have the ability for anonymous participation, and they're also robust against people just throwing garbage at them. Our version is somewhat resilient against that, but it's not... we wanted higher performance, and as such we took certain things out. So it's really meant to be run either on an intranet or in an inter-firm setup where the pipes connecting the nodes are leased lines or encrypted pipes, or are protected in some other way. It's not like a Bitcoin-blockchain-like setup where you can just run a node, put it out on the Internet, and it's robust against anything that can happen. It's really meant for enterprise applications where you don't want certain features of Bitcoin, but you do want higher performance. Does that answer your question? Yeah, thanks. As you know, we went with PBFT because of a set of Byzantine conditions that arise when nodes are in different networks — collaboration among companies that are not, if you will, in a well-protected area — such that we might not have the security assumptions we want the network to operate under. Okay, I think I get what you're getting at. We have some forward thinking on dealing with some of the issues you're discussing. One of the main ones is replacing our messaging substructure with something that uses TLS for much higher protection; we haven't gotten to that yet. As for Byzantine failures — Byzantine fault tolerance is this very large idea that has a bunch of different ideas baked inside of it.
We believe we only need to be robust against a subset of the things Byzantine fault tolerance covers, mostly because we believe that in an enterprise deployment, the organizations using the system are going to want a big red button that sends the system into read-only mode when something's going weird. You know, you have some massive network partition — a shark bites the fiber-optic cable connecting Britain and the U.S. in half — and all of a sudden you want some backup system to be able to say: all right, we're going read-only, something really screwy is happening, and we'll hang out until we can have some human intervention. So there are certain cases we have explicitly chosen not to handle from an automated, programmatic perspective. One of them is the faulty-leader scenario, because that is always really application-specific — you need an oracle for it, among other things — and we basically believe that if this thing ever gets deployed and gets a lot of adoption, there's going to be a button somewhere that says: all right, the leader's faulty, something's going wrong, so send in a command, take down the leader server, and have the system elect a new leader. All right, are there any other questions? Do you still have the concept of blocks? To a certain extent. The reason I used blocks in my description of how the log works is that it's easy to think of our log as a chain of blockchain blocks where there is a single transaction in each block instead of a group of them. The reason for that is that what Raft does is order messages in the form of your log and give them an absolute order. So we do have a concept of a block, but it's really just a single-entry block.
So it's really an incrementally hashed log, or an incrementally hashed list, but it looks enough like a chain of single-transaction blocks that we feel comfortable calling them blocks. The one thing we don't need to be robust against, because we're using a leader-based consensus algorithm, is forks in the chain — they just can't happen. And if you notice a fork in your chain, that means the node that reported it is probably faulty and needs to either go fix whatever is wrong or go read-only and ask for human assistance. There's a question in the chat about whether or not Hopper is Turing complete. It is, in a manner of speaking. Our idea is that we want it to be expressive, but we don't want full Turing completeness, because you don't want an infinite loop. So everything is bounded, and that's going to be verified at compile time. You'll be able to do many high-level things — you can think of it like Idris or Agda, a dependently typed functional language that can have multiple surface languages, where you can compose functions, compose ownership, and compose a bunch of other things — but ultimately it won't be Turing complete in the sense that you can do infinite computations. All computations will be finite, and they will be deterministically stepped in a very similar way to how gas is used in Ethereum to compute the execution cost. Okay, thanks. And can you send out the deck with the links? I'm interested in chasing some of these things down. How do I do that? David, can you handle that for me? Chris, we'll get it posted with the minutes and up on GitHub later this afternoon. Okay. All right — I know I'm interested, and I think others are too, in chasing down Hopper and some of the other code drops and so forth. Sorry, was that a question? I couldn't quite hear.
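The bounded-execution idea just described — no unbounded loops, every computation deterministically stepped against a cost budget, in the spirit of Ethereum's gas — can be illustrated with a small sketch. This is not Hopper's cost model, just a hypothetical step-counting evaluator:

```python
# Gas-style bounded execution: every step spends from a budget fixed up
# front, so no program can run forever. A real system would charge
# different costs per operation; here every step costs one unit.

class OutOfSteps(Exception):
    """Raised when a program exhausts its execution budget."""

def bounded_run(program, budget):
    """Run a list of thunks, charging one unit each; abort at zero budget."""
    result = None
    for step in program:
        if budget <= 0:
            raise OutOfSteps("execution budget exhausted")
        budget -= 1
        result = step()
    return result

prog = [lambda: 1, lambda: 2, lambda: 3]

assert bounded_run(prog, budget=10) == 3   # enough budget: runs to completion
try:
    bounded_run(prog, budget=2)            # too little budget: aborts
    assert False, "should have raised"
except OutOfSteps:
    pass
```

Checking bounds at compile time, as Hopper intends, is stronger still: the budget question is settled before the program ever runs.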
Yeah, so the scenario that you demonstrate, at least in the slides, seemed to me to be built around asset transfer. Are there other scenarios that the environment could address? I'm sorry, you broke up — I couldn't quite understand what you were saying. Can you repeat the end of that for me? Yes — it seemed to me from the slides that the environment addresses one particular scenario, which in my understanding is asset transfer. My question is: are there other scenarios the system would be able to address? Oh, I think you're asking about how Hopper only has the transfer command. Yeah, so Hopper is still in its early stages, but we have three very sharp guys working on it. And it is going to be — I'm having a hard time finding the right word — it's going to have just a few primitives that you can compose on top of each other, and it should be able to model basically any ownership-type situation. It's going to be a domain-specific language for effectively anything where you have assets that you don't want created or destroyed, but you need to be able to transfer ownership, escrow, and all of these other things. Right now, baked into the demo that's been open-sourced and into the demo branch that's out there is just the transfer command, along with a couple of the primitives, because it's still pretty early stages. But we're making huge progress on Juno — I'm sorry, on Hopper — and I would expect to see a really powerful release probably in the next three to six months that has heaps more features. But right now, for the internal use case that Juno was designed for — I'm not sure if I can say which one we're using it for, but maybe David will jump in and say — in any event, that one only required a single primitive command. And the system that was initially running...
the system that it's going to be piloted as a replacement for currently takes a week or two to do a transfer of something, and ours really only needed one command to do this multi-way transfer. So that's why the demo, and why most of the other stuff, only has a few commands in it, because we just really didn't need them for solving the business use case that we were building Juno for. Yeah, and Will, I think, also, if you want to just mention Masala, how we tied Masala in there and how that opens up additional types of... Yeah, and also we have Masala, which is an Ethereum-compatible VM, so anything that you can compile to EVM bytecode, it'll support and run. So any program you have for Ethereum that's in Serpent that you can compile down, you'd be able to put onto Juno, integrate it pretty trivially, and execute whatever you want. We have other languages that we can tack on. I know there's a Python implementation in Haskell that we could probably strip down and use. There's a Lisp version that was the first thing we got running. It takes me, at this point, about an hour or two to integrate a new language into Juno. Anything that has something like a REPL, I can integrate pretty trivially at this point. But it should be noted that deterministic state machines give you a lot more guarantees, and we really prefer to use those versus something like a Python REPL or a Go REPL or something else. Great, thank you. I apologize for my audio, it's not very good here; I'm on my computer over the network. I have a follow-up question there. On the transaction, since you have a back-end interpreter, or a whole virtual machine that will be able to execute things like the Ethereum programming model: is your transaction a UTXO model, like the Bitcoin transaction model that carries along inputs as well as expected outputs, or is your transaction model more of a state transition model? I'm not too familiar with the UTXO model that carries things along, I have to be honest.
I'm not too familiar with the other options that you mentioned. I can briefly explain something that may answer it, and you can jump in with another question if I get close enough. One of the key things is that Hopper doesn't have the ability to do any I/O. There is just this state machine on every node that gets stepped through after the inputs have been committed and fully replicated, and that builds up the state of the Hopper language. And at every point there is a diff of the state that comes out, which we currently cache and can look at. So I guess the way that it works is: you give Hopper the current state of the system and the new command to run. Hopper takes the current state and the current command, and then outputs the result of the command, the diff of the state, and, I believe, the new state as well. And we don't do a lot of interesting things with that; we just sort of propagate it and continue on. At some point soon we're going to be adding snapshotting, so that when a new node comes up it doesn't need to replay through the entire universe. It can just get a quorum-signed snapshot, signed by, say, 90% of the nodes, of what the state is, and start from there after it's been validated against the log entries. There's a bunch of different ways we can go; we're not entirely sure where we're going with this. Now, does that get to your question? Yes, that's perfect. What I call this is a state transition model, which means that a transaction will mutate the state from one form to another. That's exactly the case. May I jump in here? Our notion of UTXO is unspent transaction output. So the first question, if you want to compare these two approaches, is whether your transactions explicitly define an ordering by referring to earlier transactions. This would be the first question to compare with the UTXO model.
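The step function described above can be sketched roughly as follows (a purely illustrative Python stand-in, not Hopper): the replicated state machine is a pure function taking the current state plus a committed command and returning the result, the diff, and the new state, with no I/O anywhere in the step. Every name here is hypothetical.

```python
def step(state: dict, command: tuple):
    """Pure transition: (state, command) -> (result, diff, new_state). No I/O."""
    op, frm, to, amt = command
    assert op == "transfer"
    if state.get(frm, 0) < amt:
        return ("rejected: insufficient funds", {}, state)
    new_state = dict(state)
    new_state[frm] -= amt
    new_state[to] = new_state.get(to, 0) + amt
    diff = {frm: new_state[frm], to: new_state[to]}  # only what changed
    return ("ok", diff, new_state)

state = {"alice": 100, "bob": 0}
result, diff, state = step(state, ("transfer", "alice", "bob", 30))
assert result == "ok" and state == {"alice": 70, "bob": 30}
```

Because `step` is deterministic, every replica that applies the same committed commands in the same order reaches the same state, and a snapshot for a new node is simply the state value itself, validated against the log.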
The second question is whether the state is entirely described by the fact that the transaction exists, or whether the state is computed by executing the transactions. I understand that the answer to the second question is that you have to execute the transactions to get the modified state. Yeah, so basically, in the formal state that we're building up by running each of these programs in succession, that state contains the current state of the world of who owns what, whether it be assets or whatever you're modeling. And so each of our programs is going to be a transition from one consistent state of ownership to another state of ownership. And our program is statically verified, at compile time, ahead of time, so that the program can only do valid transitions from one ownership state to another. And so it's a little bit different from the UTXO model, but you can model the same thing, and in some ways... okay, so I think there's another thing to add. Thank you, Brian. Brian's one of the heads of the language research. And I think that Hopper, being that we're developing it internally and we're giving it all this power, and that it's, you know, really pure and yada, yada, yada. I think that we could add something similar to the UTXO model, where you have things pointing at previous things, except we would do it at the level of the output diff. So you have some state, you put in a transaction, and what comes out is your new state, but also a diff in the output of whatever changed. And we could technically do an incremental hash of that if you wanted to be able to tie inputs to outputs all the way through eternity. We don't currently do that. We have a lot of options, and that's why we designed a language like this, so that we can add these features in as we see them being needed. But currently, whatever happens, it gives you consensus around the inputs, and that's what we really want.
So that if there's some high-volume period, we can get consensus really quickly around the inputs, and then the state machine can be running along as fast as it possibly can. So the real key is to make sure that your state machine can transition through programs faster than your consensus algorithm, given network latency, can actually come to consensus around things, so that your state machine isn't really that far behind. Let me know if that got to your question. Are there any other questions? Yeah, that was me, Thomas from Digitas. Yes, thank you, that answers my question. Are there any further? I don't believe that was a question. Yes, I have a question. Can you talk to topology, in terms of what you are thinking about the number of trading partners, the number of servers, how often servers join and leave the network, and so forth? So, servers: in Raft, membership changes are only guaranteed to leave the cluster in a good state if you do them one at a time. Whether you add a server, remove one, or do a key rotation, which at the consensus level is the same as removing and adding a server, it has to happen one at a time. We test this pretty heavily, at least on our local system, by doing tons of terrible things to the program while it's running to see how it handles partitions and faults and other issues, and it seems to be, at least in our internal branch, very stable. As for node counts, we don't know. We need to actually get onto Amazon and start really hammering a thousand nodes and seeing what the performance is like; we need to geo-distribute to actually do some real testing. What we're thinking now, at least at the beginning, is that any trading partner we were working with would have between three and maybe seven nodes locally, probably three consensus nodes locally and then a fourth as a read replica.
In case one of those goes down, if there's a real fault in the system, then you have a read replica that can just be a standby member and then come online, with, of course, human interaction, when advised. If you need to bring down your server for maintenance, or if the hard drive explodes on one of your cluster servers, one of your standbys will already be prepared, already have the state machine ready and everything else, and join into the consensus for you. Otherwise, we have some interesting ideas for hacks for increasing performance even more, such as keeping a quorum in a local data center and then having nodes external to that. The main reason for this is that that local data center's cluster will be able to get quorum very quickly, and the ones that are sort of the spokes in this hub model will be read replicas and won't often participate in coming to quorum, because the network latency is too long, but may in case of some outage or some other issue. We don't know if we like that idea, because that really makes it a single point of failure, and the whole point of using a Raft-like system is that instead of having a disaster-recovery full backup that's a read replica of your massive Oracle database, you just have a bunch of nodes, and if your data center A goes down, then there's just a minor availability issue while an election takes place, which takes maybe a second or two, and after there's a new leader you just continue on as if nothing happened, and when that node comes back online it catches up. So, the topology: we're pretty flexible on it. We don't quite know what it's going to look like yet, because we haven't really been able to take it into the real world until basically now that it's open source, so we have access to the tools that we need. I'm really looking forward to that testing, and I'm hoping that our suspicions and our research on our local machines prove to be true for a real distributed system.
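The availability arithmetic behind this topology discussion can be made concrete. In a Raft-style cluster of n voting members, a quorum is a simple majority, so a cluster of 2f+1 voters survives f crash failures; non-voting read replicas don't change the math, which is why a standby can be promoted to restore the margin. A minimal sketch (illustrative only):

```python
def quorum_size(n_voters: int) -> int:
    """Majority quorum for a Raft-style cluster of n voting members."""
    return n_voters // 2 + 1

def tolerated_failures(n_voters: int) -> int:
    """A cluster of 2f+1 voters stays available through f crash failures."""
    return (n_voters - 1) // 2

# Three voters (plus any number of non-voting read replicas) need 2 votes
# for quorum and survive 1 crash; seven voters survive 3 crashes.
assert quorum_size(3) == 2 and tolerated_failures(3) == 1
assert quorum_size(5) == 3 and tolerated_failures(5) == 2
assert quorum_size(7) == 4 and tolerated_failures(7) == 3
```

This is also the trade-off behind the hub-and-spoke idea: co-locating the voters makes quorum fast, but concentrates the failure domain, which is exactly the single-point-of-failure concern raised above.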
Are there any other questions? I see that there's a question from Jerome in the chat with regard to: do you plan to have contracts able to get or post data from contracts to outside of the blockchain? Overall, we don't believe that the smart contract language should ever be able to do any I/O, because I/O is inherently non-deterministic, and by I/O I mean touching the outside world. We want it to be an equation, a mathematical equation effectively, that just runs. The way to get data in and out would be to have a message that brings the data in, or a message that brings the data out, and this way you can be certain that when your state machine runs, everyone had access to the same thing. We could support doing other things if we wanted to; it's just a question of should we? For example, we could support Go, having Go just run in a Docker container. You could just send a layer to the state machine replication system, the layer could be the message, and then it applies the layer and runs it, and it can do whatever it wants. But my issue with that, in our view, is that it doesn't really give you any assurance with regard to what state that state machine is actually in, because it's an inherently non-deterministic system. We can also support tokenization and other things, relatively speaking, trivially; we just haven't had a use case for it yet. And that's really the core of Juno: it has support for a lot of other things, it's just that our internal use case really only needs to be able to transfer money from account A to account B through, you know, umpteen middlemen, in a transactional way. If there aren't any other questions, then I'm going to hand it back over to the head of the meeting. Hello, this is Suresh here. Hello? Yep.
Yeah, this is Suresh here. I just wanted to understand one aspect of this new network: does this network have the ability to talk to existing systems, like, you know, databases, let's say, push and pull? And can it accommodate any price feeds? Is it possible? Sorry, what was that after databases? Any price feeds? Data. Yeah, so you can integrate with the system. The way that we're looking at it is that the consensus layer should be effectively a black box, and it shouldn't be aware at all of the messages that it comes to consensus around. It's not something like EPaxos or something else, where you actually have to make sure that the outputs are the same so that every state machine can sync. We've talked about that previously. With EPaxos, you can have very quick commits, because it allows unrelated things to be transacted in different orders on a global spectrum, but they are causally consistent. So if you're transferring money from A to B and from C to D, you can run those in either order and it doesn't really matter, because it's causally consistent. We want just a strict temporal ordering; we think it's easier to understand and implement. And based on that, we can integrate with basically anything, and you could just take the consensus layer, especially when we publish the paper on the enhanced Tangaroa protocol, and implement it anywhere else, and just use it as a way of getting global ordering for anything you want. I would recommend using a deterministic state machine, and for Hopper we can add things in as need be, but we really don't want it doing I/O. We really want the data being sent in as a message, so that we can guarantee that if you're using Hopper, the state machines are always in consistent states.
The main worry is: let's say you have some program that transfers funds from A to B, and you also have to account for an exchange rate, and you have one node that runs in Japan and one node that runs in London, and the node in Japan takes longer to hit the source of the data that you need than the one that runs in London. There are no guarantees in Raft about when the state machine actually steps through the program, so you don't want the one in Japan getting different data than the one in London to use as an input to the equation it's going to run for the actual transfer. So that's one of the main reasons we don't want it to be able to hit the outside world from inside the state machine. Also, for example, just think of using get-random-number. It's really something that you don't want in your state machine, and that's why we're making our language so that it explicitly doesn't have that capability. The ability to hit the outside world to get data for your control flow is, to me, the equivalent of using a random number to decide who you give money to. It's a bad idea for a distributed state machine that you want to have fully replicated. But we could support it if we needed to; we would just support it through a different language, and it would be a different system with very different guarantees. Does that answer your question? Yes, definitely. Thank you. So that does answer the price feed part of it. And if at all I have to pull data from an existing database and put it into the system, or query some data from the state or a single block and push it back into my database, does this new protocol allow us to do that? So, the Tangaroa protocol could have that run somewhere; it would just be a different type of message. Hopper would not do that, because one of the things that Hopper can't do is actually touch the outside world. It just tries to run, and it works or it doesn't.
So we very much don't want that. The way that I would architect a solution like that is, let's say I only wanted to transfer money from Alice to Bob if Sam had just tweeted that there was going to be rain in Chicago, or something ridiculous like that. I would have a system that would say: I'm going to escrow Alice's money, and then there's going to be a program that takes some cryptographically signed evidence that Sam has tweeted about rain in Chicago as the evidence needed to actually finish the transaction. That would be one way I would do it: I would move things into an in-between state, but everything that the contract needs to execute has to be given to it for it to execute. It can't go out and get it at run time. It either has it there or it doesn't. Okay. It's just that I want to understand, because a genesis block, in the pre-existing blockchain notion, would have some kind of state. If we are starting transactions fresh, that's fine, but if at all we have to port existing systems onto this new protocol, then what I was interested in is whether the new protocol supports me in a manner wherein I can pull some data, or the state of a specific contract across multiple asset classes, onto the system; that would be of great help. Yeah. So the way that you would do it is: that's your application. Your application is using a Juno cluster as a place for storage and application of data. You would have some application that's just polling, polling, polling, sees a change, puts that change into Juno, and then Juno goes and runs it through Hopper, and then whatever state occurs, occurs. That's how I would architect that system. You really don't want your smart contract language being able to go and query these things and look at them, because it just opens you up to a whole host of issues that you could have, and issues that you will have, like non-determinism.
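The escrow pattern described, where funds move to an in-between state and a later message carrying cryptographically signed evidence completes the transfer, might be sketched like this. This is a hypothetical illustration: HMAC stands in for a real signature scheme, and all names (the oracle key, the helper functions) are invented for the example.

```python
import hashlib
import hmac

ORACLE_KEY = b"sam-the-oracle"  # stand-in verification key, illustrative only

def sign(key: bytes, claim: bytes) -> str:
    return hmac.new(key, claim, hashlib.sha256).hexdigest()

def begin_escrow(state: dict, frm: str, amt: int) -> dict:
    """Move funds to an in-between escrow state inside the ledger."""
    state = dict(state)
    state[frm] -= amt
    state["escrow"] = state.get("escrow", 0) + amt
    return state

def settle_escrow(state: dict, to: str, amt: int, claim: bytes, signature: str) -> dict:
    """Release escrowed funds only if the evidence carried in the message verifies."""
    if not hmac.compare_digest(sign(ORACLE_KEY, claim), signature):
        return state  # evidence invalid: nothing changes
    state = dict(state)
    state["escrow"] -= amt
    state[to] = state.get(to, 0) + amt
    return state

state = begin_escrow({"alice": 100}, "alice", 40)
evidence = b"rain in Chicago"
state = settle_escrow(state, "bob", 40, evidence, sign(ORACLE_KEY, evidence))
assert state == {"alice": 60, "escrow": 0, "bob": 40}
```

Note that the evidence arrives inside the committed message; the contract itself never fetches anything at run time, which is what preserves determinism across replicas.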
But integrating it is an application-level thing, and you can sort of think of it as: your application is going to integrate with some API that you build that runs inside of Hopper. But Hopper has to be given the things to do anything. It's a purely functional language. Okay, I got the, you know, direction, and I really appreciate it, thank you. Thank you for your question. Is there anything further? I'm good. Yeah, thank you. So I think this has been great. I think, you know, David and William, I wonder, specifically, what are your thoughts? Have you given thought to, for instance, what sort of the next steps would be? Are we looking at thoughts of integrating the consensus capabilities into the pluggable consensus in OBC? Or, certainly Hopper, you know, that sounds to me like something that would probably naturally fit into the smart contract execution capabilities that we have in OBC. I think I'd like to understand and hear, and I'm sure everyone else does as well, your thoughts. I can start to answer that, and David can pick up and add whatever he'd like. I mean, right now we need to finish the enhancements to the Tangaroa protocol, and then implement them, and then test them in a distributed way. We're close. We got sort of our hand forced to open source a little bit early, in that somehow someone managed to find Juno and put it on Hacker News, and it was front page yesterday, and now it has more favorites than any repo that I've ever made, and I didn't tell anyone; it was just discovered, which I was quite surprised by. So for the next steps, I mean, the first next step with regard to the engineering team is: we're already in head-down mode; we have a hoped-for production pilot that we're going to be putting out very soon with a lot of stuff.
With regard to integration with the consensus API, we need to look at the API. We've tried making a sort of general-purpose API for consensus support, and we haven't had a lot of success with it. We've tried to abstract the consensus layer out of Juno so that we could try PBFT alongside Byzantine Raft, alongside other things, and we haven't had a lot of luck doing it in any really type-safe, good, non-hacky way. There's just a lot of devil in the details that you have to fix. So we need to look at that to see if it's a possibility. With regard to Hopper, we need a couple more months of head-down work to really get to the first version where we have all these awesome powers; they're not quite there yet. David, do you have any comments? Yeah, I guess I'd just add that, to the extent that we've been discussing a two-stack solution, we've got something that we think we could get out this year, and a much more flexible, long-term, smart-contract-based architecture for experimentation and evolution and things of that nature. For some specific types of use cases, again, perhaps use cases that involve very high transaction volumes, we've felt that this is interesting technology that possibly could be pluggable into that more open, modular, extensible, strategic architecture we've been discussing, as an option. So again, as we're building out our stack and we're looking at various use cases, this type of technology could offer solutions for ones that may not work so well with some of the existing approaches. So we wanted to put it out there, let people take a look at it, and maybe in a couple of weeks, when we're all together during our hackathon, we may form a project where we would look to plug this consensus model into that stack and demonstrate it that way. That's the sort of thing we're thinking.
As we encounter specific use cases where we think this could be appropriate, we wanted to make it available and see how it could fit in. All right, yeah, I think that's sort of what I was thinking as well; I wanted to make sure you guys were thinking similarly. So thank you very much. Okay, so we have a little bit less than half an hour left, and we have a couple of items left on my agenda. We have a project proposal template proposal from Vipin that has been posted on the wiki, and Vipin, if you're on, if you want to begin to present. Then I think we should have a discussion about the template and whether or not, from a TSC perspective, we think that it's underkill, overkill, or the just-right bowl of porridge. And then I think we have to have a little bit of a discussion, as I mentioned earlier, about the project lifecycle: the need for something along the lines of what some of our sister projects have, some sort of incubator process, a process for elevating to active, and then a process for graduating from there into being designated as part of the core, and so forth. So, Vipin, why don't you start by presenting your template proposal? I don't know if you're prepared to show it; I think we can get you presenter mode. Yeah, I'm not going to present it, because where I am I'm prevented from going to Google Docs, so I cannot do that. Todd, maybe you could grab the link and present it. Yep, sure thing, I'll drop that into the chat window, one second. Into the chat window, or, I was just going to say, you could just put it up on the screen, either way, or give me the description. Anyway, I mean, there is nothing much to really talk about there.
We had discussed it in detail, you know, two meetings ago, and I had gotten some feedback from various people and tried to incorporate it, but basically it is just text, you know, slightly structured. I think the next step would be: if there's something seriously lacking in it, I need to know about it. The other point is, what are we doing this for? Meaning, to help people put together proposals. So one direction we could go is to have a template with actual text in it, with those sections, and let people use that as a skeleton to put together their own proposals. That's one way. The other way is to use proposal-builder software, but I would incline, in the short term at least, not to attempt that; just do the skeleton. Like I said, it basically is awaiting your comments, so as long as you want to comment on it, or bless it, or say we're not interested in doing this, let everybody do their own proposals; that's where I am, basically. Okay, any thoughts or discussion? I mean, personally, I was looking through this, and I think there's a lot to be said for it. I think some of the getting into the solution aspects of things, at least initially, struck me as being a little bit too prescriptive. I think a lot of times you want to just put forth a justification for going off and doing something, and maybe you have a couple of ideas about how you might do that, and you just want to start working through that process itself with some experiments; they may not work out, you may need to pivot, and so forth. But I do agree that we need a framework that we can present to the technical steering committee, that we can go through and review and discuss: is this the direction we want to go, what use cases is it solving, those kinds of things. Those I do think are very valuable to kick off and say, okay,
let's go off and work on that, so that somebody who wants to lead it can say, hey, who's with me? Yeah, I mean, the whole idea: right in the beginning of the document it says it's not prescriptive; it is just for you to at least think about all of these aspects. In the beginning it talks about how this document is not prescriptive; in fact, those are the very words used in there. So all it's saying is: try to put some kind of structure around your thoughts in terms of writing a proposal. It's nothing but a list of items that you should at least have thought about; if you're not addressing them, that is fine. It's something to guide people through. No, I think, like I said, some of these things are good thoughts, but I think they may be overkill in some cases. I was looking through some of the other approaches that some of our sister projects under the Linux Foundation take, and a number sort of have a very rough template: here's the name of it, here's an abstract, here's who we think is going to work on it, and so forth, and here's what we think we want to accomplish, netted out into a wiki template that people can easily fill out and add to a list to be considered. And so I do think something like this is worthwhile. I don't see a whole lot of comments; I think maybe we need to give it another week for people to go through and comment and make suggestions. I believe that Mike had a list of other examples of groups, like OpenDaylight. I think I had that one open here someplace. Yeah, so OpenDaylight, people can see my screen: you know, they have a project lifecycle, and they have a template of a fake proposal that, again, has many of the same features that Vipin proposes, but it's probably straightforward to fill out, and then there are examples that others have done
where it's fleshed out. Again, it's not very difficult; you don't have to go overboard, but it's just enough so the technical steering committee has enough to review, discuss, and agree or disagree on kicking it off as an incubating project. So I would encourage people to go through and review this proposal, add comments in the margins, and let's be prepared next week to discuss it in a little bit more depth. Any other comments, questions, concerns? So that brings me to the second part of this, and that's the lifecycle. Again, I go back to OpenDaylight; they had a pretty good, here we go, process in place for their project lifecycle, described in a flowchart, where you kick off with a proposal, it goes through review by the technical steering committee after a couple of weeks of time for people to mull on it, and then it either is or isn't granted entrance into incubation. After incubation, given enough time, you would bring forward a proposal to have it graduated; that again would be reviewed by the technical steering committee, and if approved, it would be put into, sort of, call it the active or mature projects. In other words, we've gotten to the point where the code is stable, it does what it purports to do, it's something that we can integrate into a release, and so we put it into the mature bucket, if you will. And then I guess there are sort of two other aspects here. I'm not sure that I would agree necessarily that something goes from core necessarily to top level, but there's a notion in their project lifecycle of having a top-level project, which is something that has sub-projects underneath it, and then there's something core. And core, essentially, would be, I think, part of... this is something that we're going to have to collaborate on with the board and so forth, but eventually there will be some discussion around conformance and compliance and certification and so forth, as to whether or not something can be granted the Hyperledger brand,
and for that, typically, we would probably define a certain set of the components and modules and so forth as part of the core. And so those projects would get a little bit more scrutiny; those would be the ones we focus on for cutting a release, making sure they were in ship shape. Other parts obviously could be included in a release, but the focus of the group, and the energy, sort of the extra scrutiny, would be placed on things that are part of the core, because those are the most critical aspects and the things we most want to be sure are in good shape. So I would put forward that I wouldn't mind if we could have something along the lines of this particular process in place for our own group, and propose that for the technical steering committee's consideration. Again, I think we can probably noodle a little bit on this and think about the lifecycle itself. We don't necessarily have to have the same flow, but I'd like to propose this at least as an idea that the technical steering committee could start kicking around, and then next week be prepared to come back and see if we can hammer out something that we can adopt as our own lifecycle process. And I'd welcome anybody to chime in with some thoughts, whether or not you're on the TSC. Nobody has any thoughts, or is everyone on mute? Hey Chris, this is Dave Vaughan. So, I mean, this looks like a great start, and like you said, we want to have something that we can adopt as our own. So yeah, I'm just kind of reading through this; it makes sense, it's logical, and I think a good portion of this we could directly adopt, and I don't see any problem with that. That's kind of the way I was thinking of it. Other thoughts? I'd just like a little bit of time to look through this and think about it a little bit more, but it's pretty generic, so it looks fine. I mean, that's essentially what I intended. I think
everybody should take a look at this. Actually, I'll put the link in the chat so that it's recorded. Hi Chris, it's Richard here. Sorry, I missed this because I was called away from my desk temporarily. I was trying to get my head around what a Hyperledger project or sub-project would be. So would the work you and DA are doing to prototype a merged code base, would that be an example of a project? And another one might be to, I don't know, maybe evaluate how the JPMorgan code could be used? Or would it be something completely different? I'm not quite sure; I'm just trying to get a feel for what a project might be in this context. So right now, you're right, we're working through a bit of an experimental project, you know, thinking about integrating the DAH and IBM code bases, for instance, right? So that's an example. Let's do a thought exercise here for a moment: say, okay, out of that we're able to graduate whatever we come up with at the end. Now, there are going to be a number of different components of that which could be independently managed as projects. There's a consensus API and mechanism, probably one or two default ones, and then there's a smart contracts component of it, there's a ledger component of it, and so forth. You could treat them all as one project, but then, as the work expands, it becomes a little bit easier to manage these things somewhat independently of one another, especially from an open source perspective, because you want to give people the opportunity to jump in and dig in at a place of their choosing. And so I would envisage that eventually the various components become projects, and that there would be other projects proposed that would propose to augment and add new features, you know, add confidentiality, add privacy, or refactor this for better performance. It's really the desire to start a piece of work. It could
be a boundless piece of work that's scoped to a particular component, or it could be something that's cross-component, or it could be a new component itself. But again, it would be a piece of work where somebody would say, hey, who's with me? And somebody could be driving it, potentially long-term. Again, some of these would be features or functions that may remain as something to work on ad nauseam, or that, after the project is over, just become part of, or get subsumed into, a higher-level top-level project. Does that make sense?

It does. Sorry, it took me five seconds to find my mouse and then get it over to the right GoToMeeting screen. I'm sorry. Yes, it does, thank you.

Hey Chris, I have a question. I thought I heard you say something like Hyperledger-branded, or something like that? Yeah, I don't see anything like that in the flow, in the description. What do you mean by that?

Okay, so if we take a look at various other projects that are out there, let's take OpenStack or Cloud Foundry, one of the aspects of initiatives like that is to be able to manage the brand. So going forward, you want to be able to manage the brand, and people want to be able to use Hyperledger, the brand, in their products and offerings and say, oh, here's the IBM Hyperledger, or the JPMorgan Hyperledger, whatever it might be. Typically, in order to make sure that the open source community has a certain degree of control over that brand, so that it doesn't become diluted by people claiming it without there being any kind of substance behind that branding, they would come up with a certification, or conformance criteria, that have to be met in order to use the brand. That's what I mean. For now, we don't have that yet, right? We don't have any code yet, right? But it would be my expectation going forward that the Hyperledger 
project, you know, the governing board, would probably say, yes, we do want to be able to manage our brand and not have it diluted by actors in the market who are just using the name, sort of, in vain. And so there would be some sort of process put in place by the governing board; they would ask the technical steering committee to manage, you know, a certification test, for instance, such that if you're going to claim that you're leveraging the Hyperledger code, you have to be able to demonstrate that you're exposing all the right APIs. In some cases it may be that you have to be using specific releases of the code, and so forth. But there would be some criteria that we could use to gauge whether or not somebody who is using the brand could substantiate that claim by being able to pass a set of tests. That's what I imagine. That's a common practice among Linux Foundation working groups. And again, I don't know if this is a question for Todd, or I'll leave it to Mike; I don't want to speak for the Linux Foundation, but, you know, a number of them do. Mike?

Mike, if you're talking, you're muted.

He's not on mute. Todd, can you unmute him?

Hello? Can you hear me now? To do something like that, it's a little bit early to discuss here, I think, just because we don't even know what the structure of the project looks like at this point. But on some of our projects, there are companies or participants who want to say that they have solutions that work based on Hyperledger, powered-by-Hyperledger type statements or marks, things like that. Again, it's quite a bit early to have that discussion here right now, but some of them do decide to go do that.

What kind of recourse do you have? Meaning, if somebody did claim it without actually going through this branding process, what can be done about it? 
Well, you have trademark rights in the name; you can enforce the mark. Essentially what happens, and generally the way it works, is that the community comes up with a set of guidelines or requirements or whatnot that they can all agree on as the basis for using the mark, and you have trademark rights to enforce.

And who owns that trademark?

The trademark is owned by the Linux Foundation, but the group, you know, is the one who decides what they want to do with it. Again, it's entirely up to the group; some of our projects don't want to do that, don't get involved in it, and that's fine too. I think it's a bit early to have that discussion here.

Yeah, I was just putting that out there, saying that's what some groups do, so I would actually expect that that would happen. But as Mike said, it's only been a couple of days, so I don't want to scare people off with it. Okay, let's see. I'm also sharing my desktop here. So, we have the ongoing work: the white paper, as I mentioned, I promise I'll get to that this afternoon. And as I mentioned, Mick and I were working on a code of conduct; Mick, I sent you something, so I think we just need to circle back with one another, and maybe we can present something as a starting point for further discussion. And then there's the requirements. And I apologize, I'm trying to remember, but I think it was possibly Accenture on the board call. I mentioned, you know, they were trying to pull together a requirements group, and I was looking for somebody to herd those cats, maybe as a requirements work group chair, and I think Accenture is potentially going to be proposing that they might be interested in leading that. But I'll open it up here: if anybody is interested in helping to lead the work group to pull together the various use cases and requirements, let's get those documented and put into a position where we can present them to the TSC, and we can agree on whether they're in or they're out or 
what their priority is, and so forth, so we get a better sense of exactly what the scope of what we're trying to build will be. I think this is a very critical piece of the work. I would really expect that at the face-to-face we'll be making considerable progress on that as well, but it does need somebody to help herd the cats.

Chris? This is Patrick Holmes at Intel. I'm volunteering to do that; I'd want to co-lead with somebody, though.

Awesome. And you may get a co-lead tomorrow; I think I have a call with somebody, I think from Accenture, tomorrow, so they may have somebody to help share some of that burden with you. But I definitely appreciate the offer. And so, again, I think there are a number of names of people that expressed an interest in participating in the requirements and use case gathering. So, you know, Patrick, if you could just send a quick note to the list and invite others to get back with you, and see if you can't form a group and get to work collecting and collating the various requirements and use cases, that'd be super nice.

Okay. So we're at the end of the hour. I want to thank everyone. I want to thank JPMorgan for the in-depth review of Juno and Hopper, and Dipin for his project proposal template. So thanks again everyone, and we'll talk to you all next week, if not before, in Slack and on the mailing list.

Thanks, Chris. Thanks, Chris. Thank you, bye bye. Thank you, bye bye.