Many different roads to go down. Hyperledger is working, of course, with enterprise, and we have James, who can speak a little bit to blockchain agnosticism, and Justin, of course, to Eth2 and the general Ethereum roadmap, but I'll give everybody a moment to introduce themselves, beginning with Brian. Okay, I'm Brian Behlendorf. I'm Executive Director of Hyperledger, which is a project embedded inside the Linux Foundation, which has been around for about 16 years, and we're all about trying to figure out how do you make open source projects that deal with plumbing, with infrastructure, sustainable by building not only communities of developers around it but communities of corporations that realize enough value, just enough value that they realize they need to feed back in and make this kind of flywheel go, whether it's operating systems or cloud computing or blockchain technology, and that's kind of where Hyperledger comes from. Okay, James. My name is James Prestwich. I'm a co-founder at Summa. We work on cross-chain interoperability, particularly between Ethereum, Bitcoin and other chains that weren't designed for this. Typically, we're trying to hack in interoperability where it wasn't meant to go. I'm Justin Drake, a researcher at the Ethereum Foundation. I mostly focus on Ethereum 2.0, all layers of the design, and one of these is the light client and how friendly we are to others that want to run a light client. Okay, so to kick things off a little bit, I think it is important to understand where everybody is looking in the near future, to then figure out what needs to be done for us to work a little more closely together, or at least to communicate with one another. So, to Brian: to educate folks here who may be more familiar with Eth2 but less familiar with where Hyperledger is headed, can you chat about the grand vision? Grand vision. So, it's four years ago, right? Ethereum was still kind of a roadmap, a plan; it was before the ICO.
Bitcoin had arguably taken off and been interesting, and enterprises were starting to pay attention to it but had a lot of concerns, right? Partly the regulatory bit, partly proof of work, partly this sense of, where is this technology coming from? And to a lot of us, it sounded very familiar, right? It sounded a little bit like 1998, when the term open source was kind of defined, but even like the years before that, where projects that were built by communities of developers working together actually is how we got the internet, how we got SMTP and DNS and HTTP and those kinds of things. And so the Linux Foundation had started to hear from different parties, both big corporations but also companies like Blockstream and even ConsenSys and others, that there was something interesting here, something very vital, something that built upon a history of distributed systems that goes back a couple of decades, right? But had a fresh take on it, and it was worth trying to figure out where we could meet up between enterprises that have problems not so much with programmable money but partly with decentralization, you know? This move to the cloud and the move to kind of two or three providers is great if you're a shareholder in one of those two or three. It's kind of lousy for every other enterprise, right? So where could we meet that up? And really it started with that idea, and a bunch of the companies that came together kind of pooled some projects they'd been working on: some code from IBM that they had initially called internally Open Blockchain and then was called Fabric, code from Intel that was kind of an R&D platform called Sawtooth, and code from some other places like Digital Asset, and it was kind of like, let's put this in a pot.
Let's actually, because everyone's concerned about tokens and ICOs and things like that, let's kind of distance ourselves a bit from it; not from tokenization as an approach to doing digital assets, but from the proof of work bit and from the speculative financial instrument bit of it, and let's just see, are there other use cases to apply this to, from the way that banks send payments to each other to supply chain traceability to all sorts of things. And so that initial body of code didn't include anything in the Ethereum space, but when I joined in May of 2016 (the project had launched in 2015), one of my first trips abroad was to DevCon 2 in Shanghai, and I'd been following the Ethereum space. I actually talked to Vitalik and Bo Shen when they did the road show for the original ICO, and I was like, there's something important here, there's something urgent here, and there's certainly this kind of enthusiasm from the community that I hadn't seen since the early days of the web. So I always felt like we needed to have some sort of relation, some sort of bridge to that community, and while we built up this ecosystem around Fabric and around Sawtooth and saw that start to get used in production systems, I realized what we were also doing was educating the market, educating the enterprise world on what blockchain technology could really be. Even if we were having them take these kind of baby steps forward into just building partially distributed systems, some sort of minimum viable centralization was kind of built into all these networks. Now Fabric is in a lot of different places, right?
But it was still very clear that we couldn't just be about Fabric, we couldn't just be about one technology, one core; it had to be about a family of technologies. And so now, in 2019, the kind of vision around Hyperledger is: can we be that ground that explores that entire spectrum from permissioned to permissionless blockchain technology, whether it's down in the weeds at the protocol layer, at the, you know, how you build these DLTs, to what the different smart contract engines are, to the cryptography layers, to all the way up to end-user interfaces, to the explorers, to the kind of dapps and those sorts of things, and really focus it on how do we get all the enterprises in the world to adopt this and take this for granted the way that all of you take plumbing for granted, you know, that you turn the spigot and you get hot water, right? And that's a multi-pronged effort that's gonna involve a lot of things. Now, though, we have a lot of new firepower from this community on our side with the Hyperledger Besu project, which started just over a month ago, and another project called Avalon that builds off of the Trusted Compute Framework from the Enterprise Ethereum Alliance, and it's really exciting, and I think that for the next year our focus at Hyperledger is gonna be in kind of hybridizing and synthesizing all these different pieces we've brought together, to take advantage of all the really smart people and all the really smart architectures that have been poured into this body of code. Gotcha, thank you very much. So to Justin, a bit of a brief overview. Serenity has been something that everybody's been waiting for since the names were sort of released with our testnet Olympic, and then Frontier, Homestead, Metropolis and Serenity. It's changed forms many times, but now we are just months away from phase zero. The design framework is very clear.
Could you speak a little bit to what the grand vision looks like, both for permissionless, for scalability, privacy-preserving elements, phase two and beyond, a little bit about Serenity as an overview for those here who may not be as familiar? Okay, I guess in the context of interoperability, the big thing in Serenity is sharding, right? And here the idea is that you have multiple blockchains and they're kind of homogeneous, and so, even though we have this asynchronous communication between the various shards for scalability, we're trying to minimize the cost by having these fast cross-links between them that share all the crypto, share the networking and share consensus. Yeah, I mean the other part in terms of interoperability for Eth2 is to be a very good player in the ecosystem. Arguably Eth1 is not friendly to interoperability; our proof of work is kind of hard to verify on other blockchains, and we've done all sorts of research to, one, make the cost from a bandwidth perspective as small as possible, but also the computational cost as small as possible, and that's one of the things that's unlocked through proof of stake, which is the other big component of Serenity. And just to summarize, our strategy here is basically to use committees. So we have this large number of validators, let's say a million validators; we have an honesty assumption, we assume for example that two thirds are honest or one half are honest; we sample, using randomness, a much smaller committee, and then we ask this committee, okay, can you attest to where the Ethereum system is at right now? And so it suffices for another blockchain to just verify these attestations, and one of the big tools that we're using here is BLS aggregation, and so if every attester is signing the same message, you have this really nice aggregation which makes verifying these attestations extremely cheap.
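The committee strategy Justin describes (sample a small committee from a large validator set using public randomness, then accept an attestation if a supermajority of the committee signed it) can be sketched as a toy in Python. This is illustrative only: it stands in for Eth2's actual RANDAO-based sampling and BLS signature scheme, and all names here are hypothetical.

```python
import hashlib
import random

def sample_committee(validators, seed, size):
    """Deterministically sample a committee from the validator set
    using a public randomness seed (a stand-in for Eth2's RANDAO)."""
    rng = random.Random(hashlib.sha256(seed).digest())
    return rng.sample(validators, size)

def attestation_accepted(votes, committee, threshold=2 / 3):
    """Accept an attested checkpoint if at least `threshold` of the
    sampled committee voted for the same message (signatures elided)."""
    agreeing = sum(1 for v in committee if votes.get(v) == "checkpoint_A")
    return agreeing >= threshold * len(committee)

validators = [f"validator_{i}" for i in range(1000)]
committee = sample_committee(validators, b"epoch_42", 128)
votes = {v: "checkpoint_A" for v in committee}  # everyone honest in this toy
assert attestation_accepted(votes, committee)
```

In real Eth2, the committee members' BLS signatures over the same message are aggregated into one signature, which is why an outside chain can verify the whole attestation cheaply.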
So we're at a point now where running an Eth2 light client is probably cheaper than running a Bitcoin light client. Gotcha. So now I'm going to kick to James. We see this kind of rounding out, right? So Hyperledger began, as Brian said, with this minimum viable centralization nature, spread out over the spectrum to sort of see what works. With Eth2 we have the grand vision, but of course still a large ecosystem creating some subset of standards that work together regardless of where they're implemented. Can you speak a bit to the work that's been done, that you've worked on through both Summa's ideas and the development research that's taken place through tBTC and so on and so forth? So where Hyperledger kind of explores the trade-off space and hasn't decided where it fits (it's trying to serve as many purposes as possible and help other people build), and Eth2 is trying to build this homogeneous unified ecosystem, Summa, rather than trying to make every chain the same, rather than trying to explore as much of the trade-off space as possible, is trying to take all of the chains that exist, that have users today, and turn them into a unified heterogeneous ecosystem. So take Bitcoin and Ethereum and Cosmos and ensure that those can all interoperate together, and provide tools to private chains, like Hyperledger projects, to interoperate with public chains efficiently. So we're kind of at a slightly different point in the trade-off space than Eth2. We're trying to be everywhere, take everything that's different and bring it in so that it can all work together. I'm gonna follow up, sorry, go ahead, Justin, go ahead. I was just gonna follow up here. There's kind of a spectrum of friction and barriers. So there's like a single chain, where basically the friction is kind of asynchronous, you know, cross-contract communication; that's the lowest barrier to entry, but still there is some friction, and this is why we have standards like ERC20.
And then kind of the next layer of friction is homogeneous sharding. And then the next layer of friction is, if you think for example of Polkadot, where they have unification at the consensus layer but non-homogeneous parachains. And then the next step is things like Cosmos and Summa. I think that's fair to say. So this is actually a great transition here. The public projects have begun over time to emulate each other a little bit more, and I think that this is somewhat true as well for some of the Hyperledger projects. I don't believe it was intended; they just seem to have found the same design framework, as it has generally represented a correct answer in some sense. You mentioned one large coming project in the space, and Cosmos as well: Polkadot, Cosmos. Somebody spoke backstage to work that IOHK has done also in a similar area. And of course, Eth2 is now working within a framework of homogeneous rather than heterogeneous, but still the same approach. So I shall run back to Brian with this to sort of round it out. What are enterprises looking for? Obviously, looking back to the base question of why not use a server, there are certain things that blockchains offer, right? And a lot of that is the openness, the permissionlessness, just doing it with the same feature set, same capability, same throughput. What are enterprises looking for specifically that permissionless systems haven't offered, and do they see answers in these design frameworks? So Joe actually was here just before our session, talking about his view that the world will have lots of different ledgers, some of them very public-facing and some of them fairly private amongst collections of organizations, right? And that's for a number of reasons, some of them technical, some of them kind of business-oriented. The technical reasons may fade over time.
I mean, right now, I'd say there's an advantage to permissioned ledgers, which actually can be public-facing. For example, the Sovrin Foundation is using Hyperledger Indy to build a public-facing but permissioned blockchain ledger around digital identity, right? And there's all sorts of things in Indy that are highly tuned to the pairwise-identifiers approach that they take, everything from the consensus mechanism to the data structure on disk. So there are some performance characteristics where perhaps the Eth2 architecture, perhaps other architectures that arrive, close the gap, from performance and other kinds of points of view. But I think there are still gonna be some business constraints. So for example, being able to say that all the nodes on this network agree to be bound by GDPR, right? Or agree to be bound by saying, hey, we're going to put data on this network that, for the purpose of efficiency, we're going to encrypt minimally, and if you try to decrypt it, or you try to discover some data that you're not supposed to have access to, then there's a legal constraint that kind of constrains you, in that you signed a contract and agreed, I'm not going to divulge this data, right? Or other types of business arrangements that are actually better to express in human contract form between the parties, around some common kind of governance entity, than purely through smart contract mechanisms. Again, those might fade away as some of the ability to manage confidential data and things get better and better on the public ledgers. But there's a spectrum here, there are trade-offs. And I think we are going to see this heterogeneous world of unpermissioned and permissioned, public-facing and private-facing blockchains, as two independent dimensions, for the indefinite future. I don't know that that ever converges on one. So, a question for both James and Justin. To James first: interoperability with private chains or permissioned chains, is it functionally viable?
Is it a two-way road, or are there constraints there? Typically the way interoperability works is that we have one chain checking the consensus process of another. So when you want to have Ethereum inspect Bitcoin transactions, you have to actually import Bitcoin headers into a smart contract. So obviously we can have a private chain validate a public chain's consensus that way; that's trivially easy to do. The other way around, we can't really do with any sort of security. For a permissioned chain, the consensus mechanism is usually a federation of signers that can do whatever they want, and so validating that on a public chain has essentially the same security model as an oracle: we're importing information about this chain, but we're just trusting the signers to do it. So this may be useful in some cases, but I would suspect that we see it as a one-way road most of the time. Okay, the follow-up for Justin is sort of similar. I don't know if it's a solution in any sense, but I've heard the idea kicked around a little bit of using execution environments, and this is far off in the future, I understand, and that research is nowhere near done, but there was an idea, maybe you could speak a little bit about it, of maybe slotting those in for some of these larger permissioned pieces. I think Vitalik mentioned it in a panel not that long ago. The question is about the potential use of execution environments to help bridge that gap. Could you speak, I guess, first to execution environments themselves, for those who are less familiar with them, and second to their potential capability? Right, so execution environments are kind of a layer of abstraction which allows anyone to kind of graft their business logic on top of the consensus, but in a much more low-level way than a smart contract. So you can even think of it as programming your own VM, and you don't have the overhead of the VM as a starting point.
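To make James's earlier point about importing Bitcoin headers concrete: at bottom, a header relay checks that each 80-byte header double-SHA-256-hashes below the difficulty target encoded in its `bits` field (plus linking each header's previous-hash to the one before it, omitted here). A minimal off-chain sketch in Python, checked against Bitcoin's well-known genesis header; a real on-chain light client would do the same arithmetic inside a contract.

```python
import hashlib

def btc_header_hash(header: bytes) -> bytes:
    """Bitcoin block hash = double SHA-256 of the 80-byte header."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def bits_to_target(bits: int) -> int:
    """Expand the compact 'bits' difficulty encoding into a full 256-bit target."""
    exponent = bits >> 24
    mantissa = bits & 0xFFFFFF
    return mantissa * (1 << (8 * (exponent - 3)))

def header_meets_target(header: bytes) -> bool:
    """Proof-of-work check: interpret the hash as a little-endian integer
    and require it to be at or below the target claimed in the header."""
    bits = int.from_bytes(header[72:76], "little")
    h = int.from_bytes(btc_header_hash(header), "little")
    return h <= bits_to_target(bits)

# Bitcoin genesis block header: version, prev-hash (zeros), merkle root,
# timestamp, bits, nonce -- 80 bytes total.
genesis = bytes.fromhex(
    "01000000" + "00" * 32 +
    "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"
    "29ab5f49" "ffff001d" "1dac2b7c"
)
assert header_meets_target(genesis)
```

Doing this check per header is what makes an on-chain Bitcoin light client expensive in gas, which is the cost pressure the SNARK-based approaches discussed later try to relieve.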
And I guess this could be helpful in the context of private chains that have a roadmap to become more and more public over time. And as you mentioned, maybe these private chains don't want to have to deal with tokens and don't want to have to deal with security in the permissionless model, and instead they just want to reuse an existing, working, secure blockchain. And so one model here would be to take their existing consensus that they've tested in the private world and drop it into an execution environment, or you could also drop it in as a Polkadot parachain; that would be kind of a similar thing. But regardless, having some universal standard, or at least one that is used widely, seems to offer a great deal of benefits over time. You have ideas on things that might fit into a decent standard stack. We spoke about it briefly, and I hope I'm not preempting Justin's commentary in the future, but could you outline a little of what we spoke about not too long ago, about what fits into that stack, those pieces being worked on now, elements of standards? I mean, if you're going to slot into an execution environment or slot in as a Polkadot parachain, then you want to try and express your consensus in Wasm, because that's what these two systems will be doing. And it would also be helpful if you express the application layer itself in Wasm, because then it will all be one system. I mean, if you want to do something more ambitious and have your own blockchain, your own consensus, then it is helpful to try and converge on the blockchain standards that are emerging at a lower level. So at the networking level, at the cryptographic level: for example, at the cryptographic level you could use SHA-256 for hashing and BLS12-381 for signatures; that's a great starting point. For networking, you can consider libp2p as a great starting point, if that's part of your more ambitious roadmap. So, go ahead, James.
On the subject of using execution environments for permissioned systems, I think Brian made a really interesting distinction between public-facing and private-facing as an additional axis to public and private, or permissioned and unpermissioned. So using an execution environment to create a permissioned but public-facing system might be useful. It gives public verifiability without necessarily public participation. Right, and if you want to be public but also have privacy, which is yet another dimension, then you need to start getting fancy with zero-knowledge proofs, maybe. So I'm gonna come back to Brian for something a little more macro but related. So the innovation, and this might be something you disagree with to some extent, but a lot of the innovation is taking place in the public space. Especially, I mean, all of this was born out of the open source ethos. And it changes quickly, right? So we could speak to the research being done by EF researchers or around the Ethereum space, to IBC, and so on and so forth. I guess the question that I would ask is: what specifically are some of the member organizations interested in, and what could potentially be done to help connect them with those that may be building toward that future? A couple of quick examples: if they're interested in privacy-preserving elements like ZK rollups, so that they could protect their customer and consumer information while offering something on a public network with that level of security, would they be interested in funding research or lending developers? What are they interested in, and how can they drive toward making it a reality? So in technology movements, there's kind of a boom-bust cycle of ideas that get tried, and then eventually, as a space matures, things kind of whittle down, right? Into: here are the architectures, the concepts that actually seem to provide the most value for the least amount of lines of code, or the least amount of complexity, right?
And I would argue in 2016 we didn't know what those were. I'd argue in 2019 we still haven't figured out what that final set is, right? And so innovation can't just be about, hey, we've come up with a new idea. It should be about how we systematically try to map the solution space, give those who are passionate about these different ideas free rein to kind of build their own roadmaps, decide which parts of that problem they want to tackle first, which ones they're happy to say, here's a 1.0, right? Here's something where I'm now comfortable with you using my software completely without my involvement to track digital assets. That's like a big threshold, right? But let's try a number of these experiments. Let's allow Darwin to kind of play, allow those that succeed to keep iterating and going forward. But the real question is how we make sure that good ideas that are embedded inside of the projects that don't achieve that kind of critical mass don't just get lost, or don't get dismissed because they have the wrong brand on them, the wrong license, the wrong set of characters around them. How do we use that as mulch for the other projects that do continue to go forward and move on? Which was at least our rationale inside of Hyperledger for having this kind of greenhouse metaphor for these different projects, right? And some of them have turned out very well, like Hyperledger Indy, which again, hyper-focused on identity, has been able to go and not have to bother with upgrading the entire world every time they want to try to invent some new topic, or throw away an experiment that they did that didn't actually turn out well, right? And other projects, like Hyperledger Quilt, which has been our implementation of the ILP standard, just kind of failed to get anybody excited, right? I mean, I'm following closely stuff going on with IBC and with Cosmos and that sort of thing.
I think that's great. But that was the experiment we tried, and it's still alive. You can all go check it out still and use it, but you have to be willing to take these experiments at risk. And I see our role as being very much about providing that space, allowing people to evolve, but putting these projects kind of on a conveyor belt to a 1.0 release, and eventually to something for conservative organizations, who don't like to upgrade every month, who don't want to necessarily worry about a hard fork forcing them to have to upgrade systems tomorrow, right? Or even at a pre-planned time six months in advance; they want to be a little bit more in control of the rate of change on their networks, right? How do we get them comfortable with the idea of using this technology, when it's appropriate and when it's not appropriate, right? And kind of provide training wheels for these organizations so they can eventually get caught up to the more forward-leaning organizations like those of you here. Gotcha. I'll kick to Justin. Coming off of this, I've heard some talk of larger organizations in the public space that are replicating efforts, whether it be research on libp2p, Wasm research. So coming off of what Brian's spoken about, and then flipping the question on its head: the million developers aside, there's to some extent a limit on the mind share of those that are able to develop at the core of the core, right? What would you find beneficial from, let's say, both enterprises and other public blockchain ecosystems in regard to interoperability or collaborative efforts, to help both be able to produce what they're looking for and to avoid replication of efforts? Right, so I mean, after this panel I have a whole talk on collaboration. And there are many things we can do. We can try and focus on low-level components which are very modular, things that are designed to be reused.
We can try, at the legal level, to avoid things like bad licenses, even bad open source licenses. We can try and avoid patents. We can try and avoid things like trademarks and NDAs. We can also be open from the get-go, at the risk of being a bit more chaotic and messy; try and open up your doors so that we can all learn from the mistakes, because some teams, what they do is they spend two years working on something and then they ship something perfect, but it's a lose-lose, because they could have gotten feedback earlier on, and we could have learned from their failed experiments. It's an interesting thought. So, James, on that note, there are specific projects that enterprises have given to the public blockchain space that are open source, taking Nightfall as an example. Do you see them as beneficial (and we thank them for their work, I'm sure)? Are there elements of that where I guess we could work more closely together to make sure that they're beneficial to the space? Do they have upside, downside? I definitely think so. So, Nightfall is an interesting example, because it's a cryptographic system, and it's very difficult for non-cryptographers to interact with it or to understand how to reuse components of it. But I think, as far as IBM blockchain and all of these huge enterprise blockchain processes go, we haven't seen a lot of useful code fall out of them. We have people like the Enterprise Ethereum Alliance, Ethereum Enterprise Alliance, EEA, whatever it's called, working with enterprises, trying to serve their needs and ensure that they give back to the community. But I can't think of any enterprise-driven applications on chains that I've actually used. So, IBM has like 40 full-time engineers releasing every line of code that they write on Fabric as soon as they write it. Okay, so that's great.
And they are about 40% of the total IP going into Fabric, so there's another bundle of people outside, at other companies, Huawei, Deloitte, others, that goes into Fabric too. Perfect. So, I wanna catch up with you about Fabric; we could do a better job telling you all about that. Yeah. So, digging back into technical interoperability, the work that's being done on the public side, I guess we'll speak to first. Justin's outlined a few of them, and a few others have been brought up on stage, like the IBC work, privacy-preserving elements. So, I'd first like to kick to Justin to hear a little bit about what those leading areas may be, and I'm going to then come back to Brian and figure out which different standards, or similar standards, are being worked on in the enterprise space, and how they can begin to share that work, where even when it's being open-sourced, apparently it's not being communicated as well as possible; and of course, comms in blockchain, I know, it's a limited number of people. So, leading areas of potential technical interoperability focus: what are we looking at as of now? I mean, one trend that I'm particularly excited about, and which is leading the way forward in terms of light clients and interoperability, is Celo and Coda. So, they're going down the snark-based light client route, and they're bringing in very sophisticated machinery, you know, recursive SNARKs. One is fully recursive, in the case of Coda, and in the case of Celo it's one level of recursion. And I think this is great in terms of minimizing the cost of interoperability, because at the end of the day, you know, when you have a very flexible virtual machine like Wasm, then in theory you can be a light client for any other blockchain, but in practice the limiting factor is just cost. And by cost I mean the gas cost in transaction fees. And so there really is a big engineering effort to be done to squeeze that cost down to almost nothing, and these projects are leading that.
Another kind of interesting consequence, potentially, of the snarks approach is that it could be a kind of common language for all the various light clients to speak, because every blockchain has a different consensus, and so every time you build a bridge between two blockchains, that's like a custom, ad hoc bridge, which requires a lot of sophistication. But maybe if we can, you know, encode these algorithms in circuits and then abstract away all the complexity in a snark, that might be one way to move forward. Okay, and a quick, sorry, James, go ahead. I wanna push back a little bit on snarks as an interoperability mechanism. The main thing that they're useful for is making updates of expensive things cheap. Unfortunately, when we're running a light client on chain, the amount of updates is still linear in the frequency of the reads. So if there are a lot of people reading the system, your snark-based system will converge back to just updating once per header rather than skipping a bunch of blocks. Right, so one of the additional benefits of snarks is that you can combine some of the application-layer logic. And one of the interesting pieces of application logic, which actually sits kind of between consensus and application, is going to be the Merkle paths, the witnesses. So generally speaking, you have data on one chain. You have the root. The light client is responsible for moving the root over, and then the user is responsible for moving the data and the witness. And in practice, the data is a fraction of the total cost in terms of data; the Merkle paths are just much bigger. And so there's this idea of using snarks for witness compression. You can take all the witnesses for all the data and compress that down to 200 bytes, and then even include that in the root update.
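Justin's observation that Merkle witnesses dwarf the data they authenticate is easy to see in a sketch: in a tree of 1024 items, an 8-byte leaf needs a 320-byte path of sibling hashes, which is exactly the overhead SNARK witness compression targets. A minimal Python illustration (a generic binary SHA-256 tree, not any particular chain's tree format):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    """Build a binary Merkle tree; returns the list of levels, leaves first."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def prove(levels, index):
    """Collect the sibling hashes (the Merkle path, i.e. the witness) for a leaf."""
    path = []
    for lvl in levels[:-1]:
        path.append(lvl[index ^ 1])  # sibling at this level
        index //= 2
    return path

def verify(root, leaf, index, path):
    """Recompute the root from the leaf and its witness."""
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [i.to_bytes(8, "big") for i in range(1024)]  # 8-byte data items
levels = build_tree(leaves)
root = levels[-1][0]
path = prove(levels, 5)
assert verify(root, leaves[5], 5, path)
# Witness: 10 levels * 32 bytes = 320 bytes, for only 8 bytes of data.
assert len(path) * 32 == 320
```

A snark proving "these leaves are in the tree with this root" replaces all such paths with one constant-size proof, which is the witness-compression idea described above.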
But yes, I agree that if you're going to do very frequent reads with low granularity, the overhead of the light client starts disappearing, and then you have these other overheads, but they can also be subsumed in the snark. Great, but we have to plan for the worst case scenario, because combining a bunch of snarks into one, aggregating snarks, is an interactive off-chain process, right? Well, in the way that chains work right now, you have transactions in the mempool and then you have a block producer that will batch everything. And then that entity that's already there doing the batching can also go one step further and do a snarkification and compression. Yeah, can also, or must also? Can in theory, but in practice must, if the cost gets too high? I find it helpful, whether it's done in a hallway or on stage, for us to really press each other and break things down. So to Brian, a very similar question. Leading areas of research that you've been seeing in all the different areas of Hyperledger's focus: is some of this beneficial, something you might bring back, and how might what's being worked on in your camp be communicated back to the public side? To some degree. I mean, Project Avalon, which just launched, is a project where ConsenSys, Kaleido, Microsoft, Intel, iExec and a couple of other companies are trying to look at how you manage and distribute, let's see if I get the bullet point right, coming up with short descriptions of these projects is really hard, but it's how do you manage off-chain secure enclave types of resources as a way to optimize and simplify certain transactions that otherwise would be a lot more expensive to perform, around providing confidentiality to transactions and other types of things. So it's something that implements the Trusted Compute Framework definition that the Enterprise Ethereum Alliance has been working on.
There's a lot of interesting ideas there, and it's still early days and it's still more of a raw R&D kind of thing. But I also wanna say a lot of what enterprises need right now is a system that provides immediate settlement around a common system of record, with everybody being able to verify this is the right order of transactions, implementing prevention of double spend, being able to implement confirmation logic in the form of smart contracts or chaincode, whatever they wanna call it. And through that you can solve a huge number of use cases that otherwise require running a central server and having somebody operate that, and this is a way you can get rid of SWIFT; you could get rid of a lot of these counterparties that today require a lot of trust. Sometimes it's earned, sometimes it's not, right? Sometimes it's just mandated, and this is what's driving a lot of the enterprise interest in blockchain technology today. So I kinda feel like sometimes a lot of the really thorny work and hard work that is being funded by the EF and that is being worked on by this community is going to matter quite a bit down the road with these enterprise blockchain networks, when they have hundreds or thousands of participants operating exactly as peers, right? When they have a lot of sophistication, a lot of hostile actors on the networks that they have to try to algorithmically prevent. Today you can make up for a lot of that with forms of human governance that operate at smaller scale but still provide enough of that guarantee, and frankly enough of the right to fork, since everybody has a copy of the ledger; if they disagree they can go their separate ways, and that provides enough of a check on tyranny. I will say sometimes being too early is just as bad as being wrong. I've seen a lot of both internet startups as well as standards efforts that try to define a world and solve problems long before the world is ready for them.
In the early 2000s there was a bunch of standards work around OAuth and OpenID that tried to essentially say, if we're gonna build a decentralized social network, here's all the building blocks to do it, and they're all relatively finished. It just turned out that Facebook took off and everybody went to that, and there was both no funding as well as no momentum behind independent implementations of all these underlying technologies. And so I think we just have to be careful about going too far down a rabbit hole that doesn't have enough end-user validation and enough momentum, to make sure that we're not solving problems that don't actually end up being problems by the time the world is ready for that software. Does that make sense? We have just a couple of minutes left. We've spoken over the last almost 40 minutes about different efforts on the research end for bringing this whole ecosystem together, whether they are on the private end or on the public end. In the last couple of minutes I'm gonna ask a question that I'm personally curious about. We've seen these things and their design frameworks come much closer together over the last few years. If you'd asked in 2015 or so what this might look like in four years, you'd figure we'd probably be there a little bit sooner than we are, and so on and so forth. So where is Hyperledger going down the line? Hopefully, if you can, in about 50 seconds: what does it look like in four or five years when these systems are developed? Is it all that different from the public space, and what does the public space look like? My hope is that there's a handful of technologies that people can pick up and deploy without thinking twice, that there's an API they can use and not worry too much about what's going on under the covers, but if they need to, it's open source, they can drill in and figure out what's going on and fix things if they need.
I do see us pretty much sticking to being a place where software gets built and leaving standards work up to folks like the EEA or the EIP process or other groups kind of thinking about how do you get agreement between parties at kind of the protocol layer. But when you want to come together and simply get better leverage on the precious time that it takes to put software together, that's why working together in any open source community makes sense, and that's what we want to keep doing for the next four years. Thank you. So what does Summa look like in four years? I think that depends a lot on what the whole ecosystem looks like in four years. At Summa we try to work on the chains with the most impact. Today that means Bitcoin and Ethereum; Cosmos and Polkadot are up and coming, and there's a handful of other chains that are interesting. We've started working with private chains as well. We've worked a little bit with Quorum, like just playing around with it. I'm hopeful that within four years we'll have a better idea of what the space looks like and we'll be running multiple cross-chain relays in multiple ecosystems. Justin. Well, I'm hopeful that in four years we'll have figured out scalable interoperability within the context of Ethereum and the homogeneous chains, but I'm also hopeful that we'll be working very closely with components that fit, for example, in the Web3 framework. So decentralized storage at scale, for example, is a nice complement to decentralized computation and data availability. I'm also looking forward to interacting with the most vibrant pockets of activity. It could be Polkadot, it could be Bitcoin, for example, and then using, for example, Cosmos as bridges. It's very hard to predict the future, but I'm quite excited regardless. I look forward to seeing where the vision goes; maybe OpenSocial is not dead yet. Thank you all for participating, and I look forward to the rest of the day. Thanks so much.