So welcome, everyone. We have the following agenda items, and we'll go over them briefly, and then if there are any other items people think need to be added, let's discuss. First is going to be an update on the white paper working group from Dave. And Dave, you're on, right? Yes, I am. OK. And that'll be fairly brief, I think, right, Dave? Just a sort of call to action. Absolutely. Then I'd like to see if we can't close out on the code of conduct, Arnaud. We may not be able to, and maybe we have to wait until the face-to-face to finally get it resolved, but Arnaud will lead the discussion on the code of conduct and where we stand. Then I'll kick off, though I think Tamas and the DAH team may also have some content for a proposed discussion of some of the top-level stories that we would work on at the face-to-face. And again, if people have other ideas they'd like to pursue as experiments for next week, we can certainly discuss that. Then we probably should make sure we have a discussion on the full agenda for the face-to-face, although I kind of expect we'll have different tracks: there'll be a development track, probably a requirements and use cases working group track, a white paper track, and, if we need one, a code of conduct track. Then Steve Westmoreland is going to get on and talk about the support services the Linux Foundation can provide us, from continuous integration and so forth, from an IT perspective; I'm looking forward to hearing about that. And then, if we have time, we'll get an update from Patrick on the requirements working group, although he may not have been expecting this. Anything else? OK, thanks, Chris. So I just wanted to talk about this a little bit.
So yes, I volunteered to be the editor of the white paper, and of course the editor is not the author. When I first signed up for this, I thought, oh yeah, we've all reviewed the OBC white paper, and as an editor I could go in and do a global replace of Open Blockchain with Hyperledger. But that isn't exactly appropriate. Chris and I talked about this a little bit beforehand, and Chris laid out a nice outline for the white paper. So to the extent that we're all getting together next week and, as Chris just mentioned, we're going to be organized into tracks, it certainly makes sense to have a track on the white paper, to make sure it reflects, in a bit more detail, what we all agree our charter and mission are for this project. I'll be looking to lead that, and we're definitely going to need some people to participate. It's also important to recognize what this white paper covers — and I definitely encourage everyone to please read through the IBM one, because that's the one that has been submitted, and if anyone would like another version to be submitted as well, this would be a great time to get it out there. It covers things like industry use cases and feature requirements before it gets into the architecture. And to the extent that we're already organizing around a working group track on use cases and requirements, it's important that, at least at a high level, we're capturing the heart of what we're looking to do within the white paper. So this isn't a standalone stream by any means; it should almost be coming out of the output of the use case and requirements working group, just not at the same level of detail.
So again, I think the key thing is to make sure of what's reflected in here — I imagine I should probably be sitting in on the requirements and use cases discussions as well, and then having members of those work streams participate in the review of the white paper and some of the edits. Those are just a couple of things we'll be focusing on next week, and I think that's about it. It's a bit of a call for volunteers to help participate in the actual authoring. And again, I'd just ask that everyone read through this, because I think the Open Blockchain white paper is going to be our template. It has done a pretty good job of articulating how this project differs from some of the other projects out there and where it fills the gaps, specifically around scalability, confidentiality, and privacy, and then the fact that for enterprise use cases, many of them are going to be operating in regulated industries, so there are some real requirements around that. Those are, I think, the big three things we want to make sure are well captured in the white paper, so people can read through it and understand a little bit better. That's pretty much all I wanted to say at this stage. We will be forming a breakout group or a track around this, and we need to make sure that what comes out of the use cases and requirements discussions is, at least at a high level, captured within the white paper. Hey, Dave, this is Mike. How do you want feedback on that? We had sent around some feedback on the IBM white paper earlier. Do you want that incorporated as Google Docs comments, or what would be the best way to provide it? Yeah, I think Google Docs — since we're using that for pretty much everything else, I think that would work here, unless anyone has any other recommendations. Yeah, that would be my recommendation, to capture the comments.
Add comments on the Google Doc, and if you have suggested edits, I would say go for it. Dave, I mean, you're editing, right? So whatever tends to work best. I think if it becomes too crazy, you may want to keep it so that you're the one who actually does the editing, and people put in comments and you accept, reject, and tweak them and so forth. But it can also work, if there aren't too many people stepping on each other, to have people just suggest edits. Yeah, I think that would be a good way to start off: suggest the edits and I'll put them in there. If I think something needs a broader discussion around a particular suggestion, then we can table some of that to the face-to-face next week, where it will be easier to discuss in a forum. But getting started with the Google Doc and the comments in there, I think that would be the right way to go. There was a question Charu asked in the chat about how to get access to the white paper. It's in the IBM proposed contribution, so I will send a link to the mailing list. The actual white paper is currently a pull request; the link that Mike pasted in the chat is the outline. I'll take care of that. Thanks, Dave. Any questions or comments? Transitioning to Arnaud. Yes, hi. Hello, everyone. So, code of conduct: the SurveyMonkey poll has been running. Initially it was a really close race, and then things settled down a little bit. At this point, 64 votes have been cast, and they seem to indicate a preference for the W3C-based draft that I put together. It's not a landslide by any means, but there are two indicators that tell me we should go with the W3C one: a majority of people prefer the W3C draft, but maybe more importantly, more people do not want the Cloud Foundry-based one.
Five people said they disagree with using that, while only one person disagreed with using the W3C-based code of conduct. Based on that — and you probably saw, I sent a couple of emails asking for explanations of why people disagreed; I thought that could be useful and we might be able to address some of it, but I didn't really get any response, so I'm a bit in the dark as to why people feel so strongly against a particular option. But in any case, as has been said, the goal is not to make this a whole project in and of itself; it's just to have a code of conduct that we can use and move on. So I would suggest that the TSC decides to adopt the W3C-based draft, at least as a starting point. There are a couple of things that could be added to it. There is an item about staying focused — staying on topic — that is in the Cloud Foundry one, which we could leverage and put in the W3C-based one. And there is another one I pointed out last week, which is to step down considerately: it has to do with telling people, if you leave the project, don't just drop the ball and disappear; try to make sure things get transitioned properly and that somebody is able to take over. That's another aspect I've seen in the Cloud Foundry-based one that is not in the W3C one. So if we adopt the W3C-based draft, I think these are two points we could add to make it maybe the best merge that we can all live with. I don't know if there are any questions beyond that. Thanks, Arnaud. I think that analysis makes a lot of sense, certainly to me. I can't remember, Todd, but we have a quorum, right? So maybe we could quickly poll the TSC members, get their input, and at least reach an agreement that, OK, the W3C draft is our base.
And maybe suggest to Arnaud that he pursues the couple of thoughts people had in the comments. I was hoping we could get it settled by today, but if it just needs one more edit pass, maybe we could say, OK, let's focus on the W3C one — based on the fact that more people disagree with the Cloud Foundry base and more people prefer the W3C one — and consolidate our efforts around finalizing it over the course of the next week. So to try to expedite this, if I may, my suggestion would be that we put a proposal before the TSC today to adopt the W3C-based draft, amended with the two additions I just talked about: the stay-on-topic aspect and stepping down considerately. If people can agree to this, I can make the edits and we're done. That'd be cool. I think so too. So, Todd, do you want to do a roll call and get the TSC members to weigh in? Yep, sure thing. For the TSC members on the call, I'll just go in order. Stan from CME Group? I agree with the W3C. All right, sounds good. Stefan from Deutsche Börse? Stefan, you may be on mute. All right, we can come back to Stefan in a second. Hart from Fujitsu? We'll come back to Hart as well. Chris? Great. Mic from Intel? Yeah. All right, sounds good. Dave from JPMorgan? Yeah. All right, Richard from R3? I'm abstaining — I've got no strong view either way at all, so I'm happy not to cast a vote. All right, sounds good. And then, Stefan or Hart, if you're talking, you're on mute; you can also type in the chat window. Yeah, I'm not hearing anything either. And just checking, are any of the other four TSC members — Oshima-san, Parda, Tamas, or Emmanuel — on the call? So Parda isn't here, but this is Murali from DTCC. Does he have to cast the vote, or can we do it on his behalf? How does that work? Maybe you could unmute Hart — he's trying to speak, or trying to unmute.
Yeah, Parda would need to cast the vote, or would have needed to provide a proxy before the call. And it looks like Hart is on mute. Hart, let me see if I can unmute you. Hart, are you there? Hey, sorry about that. I'm fine with the W3C code of conduct. I thought the definitions were a little bit much, but it's just a code of conduct — we're not writing the constitution here. So I'm fine with going ahead with that. All right, thank you. Hi, Stefan here; I seem to be unmuted now. I'm fine as well. All right, sounds good. So, Chris, of the seven TSC members on the call, six are in favor and one is abstaining. Let's go with that. We can submit a pull request — I say, Arnaud, go for it. Thank you. So we have a code of conduct. Arnaud, if you could make those edits, then Todd, I guess we should figure out how to merge that in. Now, the technical face-to-face: I sent around a deck that hopefully captures it. Actually, Todd, you're going to present it? Yeah, it should be up now. I sent this out to the list — apparently the list does not archive its attachments, so I also posted the deck to Slack. We really need to figure out a consistent place where we can just post things; I don't know if we can use Box or not — maybe Steve can talk about that. Here's my proposal. We would start the morning with an introduction to the various code bases, DAH and IBM, that we're going to be working on as an experiment, based on the proposal. Then we'd have a little bit of a show and tell — I'll talk about that in a second, or maybe, Sheehan, if you're on, you could do that. And then we've outlined some potential projects. Now, I was hoping Tamas was going to join us this morning, because he also has maybe a little more specificity on some of these projects. Maybe you could just do a voiceover of what the DAH proposal has suggested.
And then, to do that, I outlined a potential projects session at the face-to-face itself, where we would get into a little more specificity and people could start thinking about what they wanted to work on — which of those sub-projects they thought they might want to do. Then I thought it would be good to have a brown bag to make sure everybody can get their development environment set up. We would go through the process of installing all the bits and pieces — VirtualBox and so forth — and actually get people's development environments up and running, so that when they make changes they can run the tests and so on. Then we would start at 1 PM and go through the rest of the day, and through Thursday to the end of the day, with the hackathon itself. Again, this is just the development track, if you will. We'll have separate tracks for the requirements and the white paper — or, if we want, maybe just one track for that, and we can split the time between the two different parts of the group. And then I thought what we could do on Friday morning would be to hold a retrospective about what people thought of the week — typical of any good agile process — and then start thinking about where we go from here. So that's the very high-level agenda I had for the development track. Any thoughts on this before I have Sheehan get into the specifics? Hey, Chris, this is Murali from DTCC. A quick question: in terms of the dev environment, just to make sure, because these are company-provided laptops — is Windows fine, or are we expecting Mac or Ubuntu? We actually talked about that. I think all of our instructions are oriented around a Mac. This is Sheehan, Chris. Yeah, Windows is fine — Windows or Linux. We use Vagrant.
So basically Windows, Linux, and Mac should all work, and we've tested them all, so you should be able to set up an environment easily on any of those systems. So we'll be working inside the VM itself, right — the Vagrant VM? Correct, yes, it's all in a VM. Is there a preference, though? It looks like Mac is preferred — is that right? We have developers using it on both Mac and Windows about equally, so either should be OK, or Linux for that matter. So what I would suggest — and actually, I think we need to send out a reminder of how to set up a development environment; maybe we could just send that to the list — is that people at least give it a go sometime between now and Tuesday, so that we can spend Tuesday noon at the brown bag just helping people through the last bit. And again, if you have a problem, share it on the mailing list or in Slack, and hopefully we can work through some of these things via email or Slack. But the noon session, again, would be to make sure everybody who needs to be set up for development is set up. OK, Sheehan, do you want to get into some of the specifics of the show and tell and the potential projects? Yeah, sure. Just a brief introduction, since this is the first time I'm on this call: I'm Sheehan Anderson, I'm an engineer at IBM, and I work with Ben — he had to drop off, so I'm going through this for him. So the first thing we'd like to do is a show and tell for some of the OBC components, and the goal is to get people started on their hackathon projects by answering some of the most common questions people are going to have. The first one: OBC has two APIs. There's a REST API, which is very straightforward, and there's a gRPC API — and while that's straightforward too, if you've never used Google protocol buffers before, it can be a bit complex to compile them and get set up in your environment.
So we're going to have a little Java sample application that leverages the gRPC API. We'll make it available somewhere on GitHub and show everyone how you can easily communicate with the API from a Java application. Second, we're going to show a Bitcoin example: the Java application, communicating via gRPC, reads the Bitcoin blockchain and passes transactions over the gRPC protocol to the chaincode; the chaincode processes the transactions using a C library and stores them in a UTXO format. The main goal of this is to show how a C library can be integrated into chaincode, because we know we'd like to integrate a lot of the work that DAH and Blockstream have done. They'll be able to help us get their libraries in, but the goal is to figure out how to do this with a generic C library that we chose. And the third show and tell is data warehousing via events. We know everyone has a lot of APIs they'd like to have available, and some of those APIs can be complex — scanning transactions, for example. Our proposal for implementing those more complex queries is an event mechanism that sends out blocks, which we store in a data warehouse. The example we'll show will be SQL, but obviously someone could build a different data warehouse if they wanted. And then we'll figure out how to run complex queries against those blocks and transactions and get a sense of what APIs everyone is looking for. Do you have anything to add to the show and tell, Chris? Or is that it? All right, next. I'm muted, sorry. I think the only thing I would add is that this is based on some of the work we've been experimenting with, to actually start implementing some of what the joint DAH and IBM proposal had laid out. So this is giving the context, as you described.
But it's also showing the path to actually fulfilling the objectives of the proposal, right? Yeah. OK, next slide. So these are some of the potential community projects we'd like to consider working on. Obviously we're not limited to these; it's just an idea for getting started on things we believe we need. The first one is plugging in the DAH and Blockstream interpreters. The C library we used was just libconsensus from Bitcoin; obviously it would be nice to get experts from DAH and Blockstream to figure out how to get their transaction interpreters into chaincode. The second — and this goes with number three, the data warehouse APIs — is making sure that our gRPC APIs provide all the APIs that are needed. DAH has a pretty extensive list of APIs they require for their applications, so we'd like to ensure that we either support all of those or build out anything that may be missing. The fourth one: we have pretty extensive unit tests and behave tests. If you've never used behave before, it's a Python tool where you can write feature tests in natural English. It's a great way to get started and learn what invoking and querying a chaincode looks like — even if you don't know Go or Java, you can see the basics through these test cases. Number five is sample chaincodes; asset management and provenance are just a couple of examples. We'd like to help people get started writing chaincodes, see what everyone is looking for, and see if some new requirements come out of that. And the sixth one, I was thinking, may be SDKs around the REST or gRPC APIs — if anyone's interested in supporting SDKs for additional languages, that could be an interesting project. Next slide. So this just goes into each of these in more detail, some of which I already covered, so I'll go through them quickly. This is the script interpreter.
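As a quick aside on the behave tests mentioned above, here is what a behave-style feature can look like. The scenario and step names below are invented for illustration only — they are not taken from the actual OBC test suite:

```python
# A hypothetical behave-style feature, illustrating how the tests read
# as natural English. The scenario and step wording below are invented
# for illustration and are not from the actual OBC test suite.

FEATURE = """\
Feature: Deploying and invoking a chaincode
  Scenario: Transfer an asset between two parties
    Given a network of 4 validating peers
    When I deploy the chaincode "asset_mgmt"
    And I invoke "transfer" with args "alice", "bob", "10"
    Then querying "owner" of the asset should return "bob"
"""

# In behave, each English step is bound to a Python function, roughly:
#
#   from behave import when
#
#   @when('I invoke "{fn}" with args "{a}", "{b}", "{amt}"')
#   def step_invoke(context, fn, a, b, amt):
#       context.result = context.peer.invoke(fn, [a, b, amt])

for line in FEATURE.splitlines():
    print(line)
```

Running `behave` against a directory of such `.feature` files executes each bound step in order, so even readers who don't know Go can follow what a deploy, invoke, and query sequence does.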
So, like I said, we're starting with the Bitcoin interpreter — that's what we've been working with the previous couple of weeks — and DAH and Blockstream have much more advanced interpreters with additional features that we'd like to use. Hopefully we can then use this work to plug in further interpreters people may be interested in; someone mentioned Solidity for Ethereum here. Next slide. The gRPC API: today our main exercise of it is the CLI, the command-line interface for the OBC peer, and we'd like to explore applications written against the gRPC API to make sure it has all the features that are needed. The next slide is the data warehouse. Again, this is for the more complex queries that we'll have. We'd like the ledger within OBC to be mainly an OLTP database, with all the very complex queries going out to the data warehouse. I think this is going to be an exercise both in building the requirements for the API we need and in figuring out how to store the data so that we can perform the right queries against it. Next slide. And this is finally the sample chaincode. We'll have some basic ones that people can look at to get started, and some basic tutorials that show you how to write a very simple asset transfer. I think the most interesting piece here will be the new ideas people bring to the table about what kinds of chaincodes they need, and making sure we can build those without issue. I think that's the last slide — are there any questions about this? OK, and I know DAH also published a GitHub wiki with some goals for the hackathon. I think Tamas is on the call now; maybe he could discuss those. Yes — excuse me for joining late; I failed to notice that the time zone changed over the weekend.
So yes, in our project repo candidate you'll find a new file called hackathon.md, which is basically a description of concrete development goals we would want to achieve during the hackathon. They are pretty much in sync with what Sheehan enumerated. Basically, we want to see a working network that people can play with — a network that is able to process blocks, store them into the blockchain in their individual storages, and build consensus, probably with the default algorithm that is in OBC. Then we would like to see a structured block API, a blockchain API. This API is quite simplistic: even though in the document it might look like a lot of functions, they are very simple functions — get a block, get a transaction, and similar. They are the very basic functions that all of our higher-level stack, including our complete use cases, builds on. So if Hyperledger supported this API, we could move a huge amount of further work on top of that stack. I think this API could be of value to anybody else looking into this project, since it has proved sufficient to satisfy at least the use cases of the financial industry. A further task would be to create a block validator plug-in in OBC, because our inheritance from Bitcoin requires not only transaction-level validation but also block-level validation. When I say validation, think of chaincode execution: in our UTXO model, we use chaincode to validate a transaction. We do not modify the world state; it is immutable. For us, the blockchain is just an append-only log that is immutable, has no side effects, and doesn't accumulate a world state. And the next one would be to plug in the interpreters — the blockchain interpreter for a Bitcoin-like transaction processor. There are a few more tasks enumerated in this document.
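The ideas above — a minimal block API (get a block, get a transaction) over an append-only log, with UTXO-style validation and no mutable world state — can be sketched in a few dozen lines. This is a hypothetical illustration only; the names, transaction shapes, and hashing scheme are invented and are not DAH's or OBC's actual API:

```python
# Hypothetical sketch of the two ideas described above: a simple
# blockchain API over an append-only log, plus UTXO-style validation.
# Not the actual OBC or DAH API; all names and shapes are invented.
import hashlib
import json


def valid_utxo_tx(utxo, tx):
    """Chaincode-style validation: every input must be an unspent output."""
    return all(ref in utxo for ref in tx["inputs"])


class Chain:
    def __init__(self):
        self.blocks = []      # append-only log; never modified in place
        self.tx_index = {}    # txid -> (block number, tx)
        self.utxo = set()     # derived view, used only for validation

    def append_block(self, transactions):
        # Block-level validation: reject the block if any tx is invalid.
        for tx in transactions:
            if not valid_utxo_tx(self.utxo, tx):
                raise ValueError(f"invalid transaction {tx['id']}")
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"number": len(self.blocks), "prev": prev,
                 "transactions": transactions}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.blocks.append(block)
        for tx in transactions:
            self.tx_index[tx["id"]] = (block["number"], tx)
            self.utxo -= set(tx["inputs"])
            self.utxo |= {(tx["id"], i) for i in range(tx["outputs"])}
        return block

    # The "very simple functions" of the block API:
    def get_block(self, number):
        return self.blocks[number]

    def get_transaction(self, txid):
        return self.tx_index[txid]


chain = Chain()
chain.append_block([{"id": "tx0", "inputs": [], "outputs": 1}])
chain.append_block([{"id": "tx1", "inputs": [("tx0", 0)], "outputs": 2}])
print(chain.get_transaction("tx1")[0])  # → 1 (found in block number 1)
```

Note that nothing here accumulates a world state: the chain is just the log plus indexes derived from it, which matches the append-only model described above.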
I think it would be really fantastic if we would start this work by introducing the stack of our candidate, introducing you to experts on parts of this code base, and thereafter building teams and actually doing the work — doing a sprint during the hackathon addressing these tasks. I imagine people will want to pursue other interests during this face-to-face meeting, so I think we should have side tracks for discussions of requirements and use cases, and eventually administrative issues of the project. But the primary focus of this gathering should be to get some work done, because nothing will make this project more successful than producing useful code. I think that's what I wanted to say about it. Thank you. So, any questions, any comments? Any additional thoughts about what Tamas was suggesting? Are there other experiments or projects people would like to get started on next week? Sounds like there's maybe a good set there. Is there going to be any merging of the code next week at all, or are we just going to work on the code independently? So I think the intention here is to drive this experiment and hopefully come up with: OK, we're confident this is something we can go forward with. Unless anybody disagrees with that, I'd like to see if we can actually merge the two. This is Sheehan — yes, merging of the code, essentially. Some of the major merging, for example, will be merging OBC with the interpreters from DAH and Blockstream. So at the end of the week we'll have a Hyperledger repository and a code base against which we can all be doing pull requests and suggesting new updates and projects. That will be the goal at the end of the week: OK, here it is, it's there, we've established it, and that's what we're going to build on moving forward. That's correct, right? That's the ideal outcome, I think. Any last questions?
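Before moving on, one more concrete illustration of the data-warehouse-via-events show and tell discussed earlier: a minimal sketch using Python's built-in sqlite3 in place of a real warehouse. The block and transaction shapes here are invented for illustration and do not reflect the actual OBC event format:

```python
# Minimal sketch of "data warehousing via events": an event listener
# receives committed blocks and flattens their transactions into SQL,
# so complex queries run outside the ledger. The block/transaction
# shapes are invented; they are not the actual OBC event format.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE transactions (
    block_num INTEGER, txid TEXT, sender TEXT, receiver TEXT, amount INTEGER)""")


def on_block_event(block):
    """Store every transaction from a newly committed block."""
    for tx in block["transactions"]:
        db.execute("INSERT INTO transactions VALUES (?, ?, ?, ?, ?)",
                   (block["number"], tx["id"], tx["from"], tx["to"], tx["amount"]))
    db.commit()


on_block_event({"number": 1, "transactions": [
    {"id": "tx1", "from": "alice", "to": "bob", "amount": 10},
    {"id": "tx2", "from": "bob", "to": "carol", "amount": 4},
]})

# A "complex query" the ledger itself need not answer:
total = db.execute(
    "SELECT SUM(amount) FROM transactions WHERE sender = 'bob'").fetchone()[0]
print(total)  # → 4
```

The design point is the one made in the show and tell: the ledger stays a simple OLTP store, while anything like transaction scanning is served from a derived store that can be rebuilt from the event stream at any time.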
Sounds like we have a good solid agenda for next week. What I'll do then is put this out, incorporate the tracks for the requirements and the white paper, and get it circulated or posted someplace. Next up is a discussion about the services the Linux Foundation can provide us from an IT perspective, which would include continuous integration and so forth. So, Steve, I think you're on. Well, folks, thanks for having me. The Linux Foundation IT department basically provides a host of services to projects. All of you are familiar with Linux and kernel.org: we provide the infrastructure and security surrounding that and its distribution worldwide. We also run several large projects associated with the Linux Foundation, some of which you may work with now — OPNFV, OpenDaylight, and the more recently announced Open Container and Open Data Platform initiatives. At its core, our service is basically web services: mailing lists, wikis, those types of functions. We have some flexibility as to which products get used for that, but we do typically have recommendations for those that scale up. On the CI side, we support some quite large CI infrastructures. It is typically a GitHub-Gerrit arrangement for the code and the code review, and we support both Travis CI and Jenkins CI. We support a small number of bug management and bug tracking tools; JIRA is probably the most prevalent. Some of the configurations are very complex, in that they need multiple advanced networking functions or high-compute, high-backplane processing; in those configurations, we dynamically spin up test systems using our tooling and some of the other CI infrastructure pieces. How we engage: as your project gets off the ground, we engage with the TSC to define the technology stack and make sure we understand exactly what's needed and when.
We do have a process where we try to go the most cost-effective way. That doesn't mean we don't establish service levels; it just means that in cases where we need to spin up services dynamically, we make use of the best resource we can get, whether it's GCE or AWS or that sort of thing. Typically, the core portions of the CI are on hosted, managed infrastructure, for stability and security control purposes. Key to that is that in projects this large there's usually a dedicated release engineer. I call it release engineer instead of release manager because this person's job is to make sure that each and every hour of each and every day the CI infrastructure works. They're often in channel, working with the developers themselves if something hangs up; the goal is to smooth that out as quickly as possible. They're always tuning the CI infrastructure to make sure it performs as effectively as it can — whether the CI infrastructure is going to require 30, 40, 50 VMs to be functional, or whether it's going to need 100 to 500 VMs to accomplish its goals in the right period of time; those are all things we factor into the sizing. Another component is that we're very familiar with running multi-time-zone configurations. It's not uncommon for us to have a presence in North America and another presence in Hong Kong or Asia Pacific. In cases where things need to be mirrored and maintained in China, we have access to services out of Beijing. Working in Europe is easily done; we typically do services out of either Frankfurt or London. So that's, I guess, the short list of the types of services we provide. We've done some very preliminary estimates based on projects we think are similar in size to yours, just for budgeting purposes, but I think we've reached a point where we need to start talking specifics: what we think the volume of developers accessing the system will be, and specifically what tools need to be used.
A question: we're actually starting to integrate Travis CI. You said Travis and Jenkins — do you manage your own Jenkins, or do you use any other services? It's typical for us to manage our own instance and any per-project configuration of those. I'm assuming — and correct me if I'm wrong — that there'll be a fairly high degree of security associated with this project, where we don't typically recommend, or have situations where, developers have root access to the CI infrastructure. That's usually done by our administrators, using very secure two-factor authentication and credentialed, secure linkages into it. For projects like this — projects where there may be some concern that everything is on the up and up, so to speak — there's usually a firewall, if you will, between the administrators and the developers in the infrastructure pieces. Given that aspect, and given that one of the things most of us would hope and expect out of this is something very robust and very secure, would that then suggest we should be looking at something completely managed by the LF, as opposed to something like Travis? And again, I'm just trying to understand, from your perspective and your experience, what the recommendation might be from the LF in terms of the tool chain we choose to use. I would like to learn a little more about what you expect from your development community. What we've seen in the past is that development communities typically coalesce around certain tools, so if it turns out that 80% of your developers, your committers, are familiar with those tool sets, we'll certainly lean that way. We're confident in securing either a Jenkins configuration or a Travis configuration — that's something we're very familiar with. So we certainly will have recommendations around it, but at this stage, if you choose either of those two, I think we'd be completely fine.
If there's a question, we'll lock things down as needed. My default settings are pretty... I don't want to say draconian, that's probably a step too far, but by default we tend to lock them down pretty hard and then loosen them up as directed by the TSC, instead of starting off in a more open configuration. Do you folks expect a need for geographical diversity on the project, or do you think hosting exclusively out of North America would be sufficient? I think it's safe to say that we're pretty global. My impression from just the attendees of this session is that there are members from Europe and Asia and the States and Canada, and I don't expect that to change. There's an awful lot of interest around the globe in this and a lot of interest in participating. So at least initially we would look for a North America, EU, Asia presence in the mirrored or slave repositories. I think the other expectation would be what most open source projects would have: somebody does a pull request, and that triggers a build. So the expectation that it would fire off a build and a set of smoke tests, and then maybe a full set of integration tests, before people do reviews, would be helpful. I heard you mention Gerrit. Gerrit is one of those things that people either love or they hate. We haven't had that discussion yet; I think it will be interesting to see how this plays out. I'm not sure that I necessarily have a preference, and so it will be interesting to get people's feedback, not only from the members of the TSC but anybody in the community, on whether or not Gerrit is something they are familiar with and comfortable with and so forth. The nice thing about it is it does provide an effective way of managing the need for reviews.
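The expected flow, a pull request firing off a build, then smoke tests, then possibly a full integration pass before review, could be sketched as a trivial event-to-stages mapping. The event names and stage names here are made up for illustration and don't correspond to any specific CI system's API:

```python
# Hypothetical mapping from a repository event to the CI stages it should
# trigger, mirroring the flow described: a pull request gets a build plus
# smoke tests, with a full integration pass before review when enabled.
def stages_for_event(event, run_integration=True):
    if event == "pull_request":
        stages = ["build", "smoke"]
        if run_integration:
            stages.append("integration")
        return stages
    if event == "merge_to_master":
        return ["build", "smoke", "integration"]
    return []  # other events (tags, comments, ...) trigger nothing here
```

In practice this decision lives in the CI server's job configuration rather than in code like this; the sketch just makes the trigger-to-stage relationship explicit.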
I think that GitHub does very well and is great about showing who is doing pull requests and who is doing the commits, but it doesn't really show us who is doing the reviews or track all those things, and it's not as effective when you submit a pull request and then somebody says you need to change it. Gerrit can do a little bit better job there, but then again, like I said, it's also something that some people just plain hate. I'll admit to a personal bias, since I've been following this space for a long time. I think some type of code review is needed, whether you choose Gerrit or another tool; Gerrit is probably one of the most common, and it certainly provides you a somewhat bulletproof way of tracking and tracing back code reviews and comments. I think to some extent this project will have some of the same perceived security requirements as maybe the Let's Encrypt project has, and they found that that's been very helpful, God forbid you had an audit or anybody wanted to go through some type of audit trail. I probably didn't mention it before, but I will mention it: for all of our code bases we do replicate those into a different site. That's part of the full feature of the service, so that functionally all of the code is replicated out. We're talking about already having regional distributions, but if it's not already replicated, it is archived off and backed up automatically; it's just part of the service functionality. Hopefully I've given you guys enough to kind of think through and chew on. What we can do is provide some side-by-side reviews of the options for the TSC to use in the discussions; maybe you guys want to do a poll and get some feedback.
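The visibility gap being described, plain pull requests showing who authored commits but not who reviewed them, is essentially a counting problem once review records exist. A minimal sketch, assuming a made-up record shape (Gerrit's real REST API exposes far richer data than this):

```python
from collections import Counter

# Count reviews per person from a list of change records. The
# {"author": ..., "reviewers": [...]} shape is invented for illustration;
# it is not Gerrit's actual data model.
def review_counts(changes):
    counts = Counter()
    for change in changes:
        for reviewer in change.get("reviewers", []):
            counts[reviewer] += 1
    return dict(counts)
```

This is the kind of "who is actually doing the reviews" tally that, per the discussion, is hard to reconstruct from pull-request history alone.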
The only other question I would ask you at this stage of the game is: are there any requirements for testing, and I'm going to use the word certification, but I don't mean it as in you get a physical certificate from somebody, but testing on bare metal? There are other projects we have, particularly the networking projects and the high-security-requirement projects, that almost always have a situation where they need to test on bare metal to make sure that the throughputs and the computational values are all still the same. So I mention that just in case it is an issue; it's something we certainly do now with many of our networking projects, but I have no idea one way or the other whether it will be something that you guys would require or need. I can say that, certainly from an IBM perspective, and I suspect from other vendors as well, there will be interest in running on different platforms. So whether it's Power or Z or x86 or GPU, I think there may be interest in ensuring that it builds and compiles and runs on all those various platforms. So again, from a consideration perspective, I think you probably have the same thing with Linux, where you build it out and test it on multiple different platforms. There's two main thoughts about it. One thought is: build it, do very aggressive CI testing, unit tests, make sure it all hangs together, and then push out the release, and let the folks that are the early receivers of the release do that kind of platform testing. You see that a lot in kernel or kernel-derivative kinds of products, where a lot of specific adjustment goes on outside. In networking it's a bit different; it ends up happening both ways, where in some cases the functionality is so critical that either a vendor or manufacturer contributes equipment into the base, or the TSC directs the purchase of certain equipment that's tested on an ongoing basis. That allows the frequency of the testing to go up
significantly, and allows the establishment of a baseline of how this is running against, if you will, a gold copy of the equipment. I don't know enough about the profile of the Hyperledger project and whether that's of value or not, but I mention it to you just for the sake of knowing that that's a capability that we possess, and it's a capability we've executed effectively to date. Nobody ever wants to have physical equipment; nobody ever wants to be the guy that ends up having to support physical equipment. We do do it now, so if it's advantageous for your project and it will help the developers in the community, then it's just another tool that we'll make available to the TSC to help your success. I'm neither promoting nor not promoting; I'm just saying it's a feature, and it's something that's available if you need it. Any questions for Steve? I'm curious if you have a sense, from those on the TSC, and really anybody should feel free to speak up or put it in the chat, on use of Gerrit or not. Steve, actually, maybe people aren't completely clear; maybe you could just briefly describe Gerrit and what it does. Sure. Gerrit is a tool that integrates with Git repositories, and it basically allows a controlled process around code review. So when you use Gerrit, you end up in a situation where the code gets checked in, and you're free to configure requirements that it will go through some type of review; it literally enforces that, for example, it requires two committers, and you can configure it in a variety of different ways. The code does get reviewed, and it gets scored and passed, or it gets failed, in which case it goes back to the drawing board; it gets punted, if you will, and goes back for review. It's part of the broader ecosystem around Git, so it's very common. The point that was made, of folks either loving it or hating it, is absolutely valid, in that I think by nature developers either like code review or don't like code review, and that almost
always has a factor in whether they think it's a good idea or a bad idea. But I'd say it's a very flexible tool for getting the type of reviews you need, to be confident that you have more than one set of eyes on the code before it actually gets committed for the CI infrastructure to pick up and get things going. But not only that, it also eliminates the potential for random code to be merged into the base. To the extent that it can, it does eliminate, let's not assume malicious intent, but just mistakes getting introduced into the code base. This is Dave. I think in a project of our nature we should at least start off with something like this, and if we run into lots of complaints or issues we could talk about them and whether there are alternatives, but we need to have something in place that shows evidence of code reviews and manages those types of potential issues, in my view. My opinion is, I think it would certainly contribute to the perceived integrity of the project and the quality of the code coming through; that's typically the highest value. Obviously code review has value in itself, but as far as perceptions by other people contributing or participating in the project, it certainly gains that. The other thing that I've seen over the history of several projects: it also bubbles to the top the folks that are really skilled at understanding the code base from a holistic viewpoint. Because what you'll find is, in any system you have false positives: somebody does a code review, they think it's good, and it turns out to be bad; then you've got a problem with the reviewer. Or a reviewer looks at it, they fail it, and it turns out that it's really good; then you've also got a problem with the reviewer. It bubbles those kinds of things up to the surface really quickly and allows you to adjust who's doing the reviews, who's available for review, and, more importantly, who needs to be reviewed on a regular basis. So all of those are tweak knobs that
can be adjusted as part of the system. And I think, Steve, that's exactly right. Because you're formalizing the review process and making it an integral part of what we do here, it does help identify who is actually participating, and it gives another way of participating: you may not have enough cycles to write code, but you may have the cycles to review it. That's true, and there are certainly projects where some of the folks that are really good at architecture, and may have at least two other full-time jobs, contribute with review time on a regular basis to make sure that the design parameters for the project are followed. So that's not unusual to see in a project at all. Any other thoughts? We've heard from Dave, and I think it was Mick: well, I think that Gerrit would be a good idea. I'm sorry, could you repeat this? I'm basically sort of putting on the table: do people have strong opinions one way or the other about Gerrit, and would they agree that introducing Gerrit into our process would be a good idea? So let's just put it that way: do people agree that introducing Gerrit would be a good thing? Well, I'm flattered you'd ask for my expertise, but I really don't know enough to decide. Maybe folks should have a chance to look at it; from our view, it's just a tool, so if you guys want to take that away as something to discuss and review, that works for me. Steve, are you going to be at the face-to-face next week? I'm kind of tentatively scheduled to be there; either myself or one of my IT managers, you know, we've made arrangements to be in if we need it. Maybe we could think about another lunch-and-learn where we could demo Gerrit, because again, I think some people may not be familiar with it, so maybe that's another topic that we could entertain for Wednesday or Thursday. Any other thoughts?
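Steve's description of changes getting "scored and passed or failed" roughly corresponds to Gerrit's default label rules. A sketch, with the caveat that real Gerrit label functions are fully configurable; this just mirrors the common default of a Code-Review +2 with no -2 veto, plus a Verified +1 with no -1:

```python
# Rough sketch of Gerrit's default submit rule: a change is submittable when
# it carries a Code-Review +2 and no -2 veto, and a Verified +1 and no -1.
# Real Gerrit label functions are configurable per project; illustrative only.
def submittable(votes):
    cr = votes.get("Code-Review", [])  # e.g. [2, 1, -1]
    vr = votes.get("Verified", [])     # e.g. [1]
    return 2 in cr and -2 not in cr and 1 in vr and -1 not in vr
```

A single -2 vetoes the change no matter how many +2s it has, which is exactly the "it gets punted and goes back for review" behaviour described above.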
I'm kind of surprised that we're not hearing others weigh in; it usually elicits stronger reactions one way or the other, so I was kind of expecting more. I mean, I gathered that Gerrit isn't used for the kernel? No, we have other beasts that come into play there; strong personalities are involved, so that's a different animal entirely. It is used very effectively, from my experience, with OpenStack, and again, OpenStack is huge. I mean, they deal with, I think, something like 40,000 patches merged for the Liberty release, which is just insane. Each one of them has to be reviewed, and two people that are core reviewers have to +2 it, and so forth. You couldn't manage a project like OpenStack without it. Once a project meets a certain threshold, the amount of chaos that gets introduced without coverage really just becomes unworkable. This is Stan. I just wanted to kind of say that I'm ready to have this for the project, especially with this many potential contributors, but I think there's a risk in setting it up this early, with it being seen as a barrier to contributions. It is a great thing to have something like this, but maybe it's not needed yet. Maybe after the face-to-face, after the hackathon, once we have the more or less final form for the project, then have it set up. But initially, before we have a single line of code for the final product, so to say, it could be too early; I think it could be seen as a barrier to contributions. I agree with Stan. I personally favor code review, but my only concern is I feel sometimes it slows things down, when reviewers aren't very active in quickly reviewing stuff and getting back to people. So I just want to make sure we're making an effort to review everything that comes in as quickly as possible. I think, to your point, Chan, that is obviously a concern, but it also then points to the need to have more reviewers, and again, a tool like Gerrit is going to help us understand when there's an increased need, whereas
if we just use GitHub pull requests and so forth, it can be a little bit harder to discern whether or not there's a lack of reviewers, or what the reason is for things languishing. So anyway, I suggest people think about it; maybe we can schedule something for next week to go over Gerrit, so people can see it in action and see what it does, and we can have a longer discussion about the merits. So what we have left is Patrick. I don't know if you're still on; can you give us an update on where we are? Can you hear me? Great. The link to the status went out in the minutes; I'll post it again here. I've just posted it. So we have 15 workgroup members, and we're developing a template for the use cases. The thought was we would start with use cases and put them in a standardized format; that format is mostly borrowed from OpenStack. There's a pointer there to the template. We haven't finalized it, but we're working on it. In the template there's an area called characteristics; we're saying, explain the user story relative to each of these characteristics, and we tried to list several interesting characteristics of blockchains, tradeoffs and choices that need to be made in blockchains, we think. So you might be interested in that list, and if you see anything that looks wrong or should be added, please let me know. We'll build up a collection of use cases. We're not going to formalize it by organizing them by industry until we have those use cases; then maybe we'll do that. We're looking for use cases. We've got some from IBM's repo, so we'll use those, and we've got one from Songtrust for the music publishing industry. I've heard there were several discussed in the first meeting of the Hyperledger face-to-face, but I have yet to be able to find those, so if anyone has the minutes for those meetings, or the use cases that were discussed, I'd like to get them. And then we do plan on doing a breakout session, multiple breakout sessions, next week at the face-to-face. I realize not everybody will be there,
but for the people that will be there, we'll do a breakout. I'll talk to Todd and see if I can get some telecon support; we could do some kind of telecon meeting. We will begin formal regular meetings following the face-to-face, though I would like to participate in the kickoff next week, and actually I'd like to participate in a lot of the programming as well, and I see other people on the list that I think may want to as well. So I will send an email to the group about how we're going to schedule work in the parallel track and who will be able to participate. Any questions or comments? Thank you, Patrick. We can get people engaged in this as well as in the white paper. Patrick is absolutely right: there were a number of non-financial use cases that were discussed. I don't recall, Todd, that there were minutes that captured the specifics of those; I think we were at a little bit of a higher level. But I know that Hitachi had expressed interest in some non-financial use cases, for instance. That's great, and I would also like to get the financial ones, commercial paper. I'm not suggesting that financial ones are not important, they are, but I also know that there were a number of use cases that aren't necessarily completely represented in the TSC itself. This is Ram from Cisco. Do we have a tentative plan for the other tracks, the requirements track and the white paper and the architecture tracks?
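Patrick's template, a standardized format with a required "characteristics" area, lends itself to a mechanical completeness check once use cases start coming in. The section names below are illustrative guesses; only "characteristics" is actually named in the discussion, and the working group's real template (borrowed from OpenStack) is the authority:

```python
# Hypothetical required sections for a submitted use case. Only
# "characteristics" is mentioned in the discussion; "title" and
# "description" are illustrative guesses at what such a template contains.
REQUIRED_SECTIONS = ("title", "description", "characteristics")

def missing_sections(use_case):
    """Return which required sections are absent or empty in a submission
    (modeled here as a plain dict, purely for illustration)."""
    return [s for s in REQUIRED_SECTIONS if not use_case.get(s)]
```

A check like this could gate submissions before they join the collection, so that every use case explains its user story against the listed blockchain characteristics.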
I was thinking, and again, if people think this is mistaken thinking I'd like to hear about it, but I was thinking that it's likely that those interested in doing use cases and requirements, or participating in the development of the white paper, are a different set of individuals than those that would be actually hacking. I could be completely wrong, and maybe there's a lot of overlap there, but I was kind of expecting that there would probably be at least two tracks: one a developer track, where people are going to be actually hacking on some of the problems that we kicked around a little bit earlier, collaborating together, coming up with solutions, running tests and so forth, and actually committing to the code base; and then there would be a separate track, or potentially two tracks, one, based on what Dave was suggesting a little bit earlier, where requirements and use cases can be discussed, maybe in small breakouts focusing on one industry or another, and then bringing that together, refining and building consensus and so forth, as well as potential work on the white paper. And then we would bring everything back together on Friday. So I would expect that after the first morning, when everybody is in the preliminary part of the agenda that I discussed, but starting Tuesday afternoon and working through Thursday, there would be these separate tracks. That makes sense, and I agree with the approach; just try to see whether we need the entire three days for the parallel tracks. If we have a plan for the requirements, white paper, and architecture tracks, then that would be good, to kind of plan our time there. Are all the use cases going to be sorted out and completed before Thursday or Friday? I think there's tons of work to do, and even once you start identifying what they are, then obviously you can start drilling down and making some progress on refining them. So I would expect there's a whole slate of work to be
done there, especially from the requirements and use cases. We're at the end of the call, so thanks, everyone. Thank you, Steve, again, for coming and enlightening us on the infrastructure, and thanks, Todd and everyone, for coordinating. We'll talk to you, well, hopefully we'll see many of you next week, and certainly talk to you on the call next week. All right, thanks, everyone. Thanks, Chris; thanks, Todd.