Hi. For community training, we're going to follow a slightly modified schedule, covering essentially the same topics. We're going to talk about three different styles of training that have gone on in the community since the last cycle, during this last release. We'll talk about the Tokyo training first. Then we'll talk about upstream training. And I can talk about San Francisco, where we went through a series of, what was it, it ended up being six sessions in different user groups. Stefano runs upstream training the weekend before the summit, which has been really fun; I participated in part of it this last round. So, do you want to start?

I'm Kato. I'll talk about upstream training in Japan. The Japan OpenStack User Group has held upstream training twice, plus four short sessions. Our objective is contribution to the community, not just using OpenStack, and active development from Japan, with more familiar support for attendees in Japan. Our motivation, why local? Japan is very far from the USA and Paris, so holding it locally keeps travel costs low. In Japan the language barrier is a very important problem, including for me. In Japan, the community helps each other across companies. And the most important thing: teaching is the most effective way to study.

How did we start? We decided on a venue, date and time, mentors, students, and materials. We hold the training on weekdays and the Lego training on holidays, full days, with mentors who are active developers and students who are OpenStack engineers from many companies. This is our timeline. This picture is our Lego training rehearsal. We bought the Lego ourselves, worked out scenarios, and consulted with agile trainers about the Lego training.

In Japan we wrote our own guidance for selecting bugs. The low-hanging-fruit tag is good for newcomers, but there are not so many such bugs. So we select bugs by status, New, Confirmed, or Triaged, and by importance, Low or Wishlist, and decide from there. We learned three tips. First, we use public cloud services.
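The bug-selection criteria just described can be written down as a simple filter. This is an illustrative sketch, not part of the training material; the bug dictionaries below are made-up stand-ins for what Launchpad reports:

```python
# Sketch of the Tokyo bug-selection criteria: newcomer-friendly bugs
# have status New/Confirmed/Triaged and importance Low or Wishlist.
NEWCOMER_STATUSES = {"New", "Confirmed", "Triaged"}
NEWCOMER_IMPORTANCE = {"Low", "Wishlist"}

def newcomer_bugs(bugs):
    """Return the bugs that match the newcomer criteria."""
    return [
        bug for bug in bugs
        if bug["status"] in NEWCOMER_STATUSES
        and bug["importance"] in NEWCOMER_IMPORTANCE
    ]

# Made-up examples in the shape Launchpad reports bugs.
sample = [
    {"id": 101, "status": "New", "importance": "Low"},
    {"id": 102, "status": "Fix Committed", "importance": "Low"},
    {"id": 103, "status": "Triaged", "importance": "Wishlist"},
    {"id": 104, "status": "Confirmed", "importance": "Critical"},
]
print([b["id"] for b in newcomer_bugs(sample)])  # -> [101, 103]
```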
A public cloud service provides the DevStack environment for the students. This avoids trouble with network traffic, machine specs, configuration, and so on. The second tip is a social gathering: after the classroom session, we go out drinking. The third tip is troubleshooting of registration, Launchpad and OpenStack Foundation ID account mismatches, contact information registration, and so on. We want to share this experience and enlarge the developer community across companies, and we will support and advise you from our experience in Japan. And you are welcome in Tokyo next time. Thank you.

So can I ask you a couple of questions? The materials you use, obviously, include Legos. There are probably not too many people here familiar with what upstream training is, but obviously there are Legos involved, which is part of the agile training. Yeah, and Stefano's going to talk about that. Were you able to use laptops in part of what you were doing, where people would show up with laptops, or did you exclusively use cloud instances for the students?

Yes, HP donated instances for each student.

And how did that work? Was it easier, do you think, for them to just use a browser, without having to worry about any configuration, so everyone got started faster? Were there any problems with it being remote?

Before the training, we tested the network environment, connecting to the HP cloud from the classroom and checking network latency and performance. We also prepared the DevStack configuration file, because the configuration may depend on the memory size or the number of CPU cores. So before the training we tested that the configuration file fitted these instances.
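For reference, a DevStack `local.conf` trimmed for a small cloud instance might look something like this. This is an illustrative fragment, not the exact file used in Tokyo; the values shown are assumptions about tuning for limited memory:

```ini
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# Add swap on a RAM-constrained instance
SWAP_SIZE=2048
# Trim services the training exercise may not need
disable_service horizon
disable_service tempest
```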
Sorry, because of the prerequisites for DevStack, it was better to have it set up in advance, so you didn't have to worry about a student showing up with a machine that was not capable of running DevStack or would be difficult to support. Is that pretty much what you're saying?

I can also speak to that experience in upstream training. But maybe we should go back and describe a little what upstream training is and why we set it up. We noticed, first of all, that contributing to OpenStack can be very, very complicated at first. The project is extremely complex. There are many different projects, almost 40 at this point, 38 projects, with hundreds of Git repositories that move very rapidly. The usual approach in open source projects is to find small bugs, have them assigned to you, fix them, and buy goodwill from the other developers by fixing those small bugs. In OpenStack, even those small bugs, those small steps, may be too hard. And there is one other issue: when larger or smaller corporations join OpenStack and want to contribute, they may not have the experience to deal with the large and complex social structures and processes that the OpenStack community has put in place over five years of existence.

With those problems in mind, others inside the community and I started thinking about a training program explicitly designed for new contributors joining OpenStack: developers, engineers, not managers, just people whose job is to land a patch inside OpenStack, in any of the projects. And that simple task, landing a patch inside OpenStack, has two major aspects. One is technical: how do I get all of my accounts configured? Where do I find the development environment, and how do I configure it? How do I get myself to the point where I can replicate the bug that I want to fix, or add the feature that I need to add?
Where do I find documentation? Where do I describe a new feature, and where do I find the bugs? Those are the technical aspects. On the other side of things, there is a second problem, which is all the social norms, all the processes that are in place. For example, in OpenStack a bug fix or a new feature needs to be approved by the project's technical leadership. And every commit, every code contribution, every patch needs to be discussed in the open, publicly, on a system called Gerrit. On Gerrit, at review.openstack.org, we have public discussions about those patches. For some cultures and some societies, it's very easy to write a patch, put it out in the public space, and have someone comment on or vote on it, a vote that is most likely going to be negative the first time you do it. For other societies, for other cultures, getting a negative vote immediately is a big disappointment; it can create cultural issues. With that in mind, we designed this training to cover these two aspects, the technical and the social.

We do it over two days. The first day is the Saturday before the summit starts, and it's mostly the technical aspect. On the second day, the Sunday, we have a simulation of how OpenStack is developed and released: first the design side, then the development cycle with milestones, release candidates, and the final release. And we simulate that with Legos; you saw the pictures of the Legos on stage. We did it in Atlanta the first time and trained about 20 people. We did it again in Paris with about 60, and about the same number of people this time in Vancouver, yesterday and Saturday. It was pretty good. It was fun. I got to join for just part of it, but I got to join for the Lego session, and that was a blast.
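As a concrete footnote on the technical side: each OpenStack repository carries a `.gitreview` file that points the `git-review` tool at the Gerrit server, so a new contributor's patch goes to the right place. A typical one looks roughly like this (illustrative; the project name is just an example):

```ini
[gerrit]
host=review.openstack.org
port=29418
project=openstack/nova.git
```

With that file in place, running `git review` pushes the current branch to Gerrit for public review.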
I actually had really high expectations, and it exceeded them. It almost immediately emulated the development community. People's personalities came out, people assumed certain roles, and it actually worked really well. It was really cool, an amazing experience. So I'm looking forward to replicating it in other places as well.

So can I ask a couple of questions? I was going to be an instigator and ask some probing questions. And I can talk a little bit about what we've done in San Francisco, which is slightly different but aims to be similar in the future. You talked a little bit about why community; that was one of my questions. The other, as a follow-up: how is upstream training different from the available commercial training?

Upstream training is explicitly targeted at this community. It's designed to expose the dynamics that happen inside a collaboration when it's done across multiple cultures and multiple companies. One way we simulate that with the Lego is that we have the room split into three major groups, three major teams. One group simulates the upstream community: the PTL and the developers who have already been contributing to OpenStack. Another team simulates a company, with a CEO, a project manager, and a scrum master having their own objectives. And the third group is random developers, random contributors, the cows and monkeys, if you want. We give them the same objective: we have a piece of Lego that's like a corner of a city block, and we give them the task of completing the block. They all need to come up with their own plans. So in the first iteration, the three groups go off by themselves and set their own objectives; for example, one group wants to expand and create a park, another wants to create a new building.
But soon, once they've done the planning phase and start the execution phase, at the very first iteration they realize that their plans need to be coordinated first. What happens is very interesting: for example, the CEOs immediately try to remove the cows and hire the random contributors. It's very funny. Then they start talking to the PTLs and say, hey, wait a second, how about you leave me this slot and I build on the other side. Yesterday, for example, the company decided to build a gas station. And as soon as they revealed their plans, the PTLs and the existing community immediately said: no gas station here, we can't have it, it's polluting and noisy. So the company had to change their plans; luckily, their business plan allowed them to build an electric-car charging station instead. So all these dynamics get exposed. And it's different because companies usually want training on something that generates revenue. In this case, to go to your question, what we really favor is exposing the value of collaboration instead: how, by joining forces across companies, you end up building a much larger piece than you would if you could only use the funding and resources of one corporation.

Yeah, one side note: one of the groups that was formed as a company decided that they were going to just contribute resources. They gathered up all the pink Legos they could find, showed up at our table, and dumped a whole bunch of pink Legos, saying: here you go. The whole thing was very fun.

So another good question: where does the training usually happen? I guess we have two examples here. Well, I'll let you answer the question.
Well, for us, I try to set it up before the summit, because a lot of people are already being sent by their corporations to the summits, and I try to capture some of them by extending their stays by only a couple of days. That's one way to do it, and I'm going to keep doing it.

The first time, we used a meeting room volunteered by one company, and about 20 people attended our training. The second time, in Japan, we held OpenStack Day in Tokyo this February, at a hotel, the same place as this October's summit. We co-located with that event and ran the upstream training there. Our upstream training is very community-based. The trainers, mentors, operators, all the members are volunteers from the companies. It's a very rare case of members from different companies collaborating to create one training. And the students also collaborate during the Lego training, asking each other questions about how to set up DevStack or how to recover from trouble. From my experience that is a very rare case, and very community-driven; this is a community, I think.

How do people get involved, Stefano? That's a very good point. We have regular calls for mentors, because mentors and teachers are all volunteers for upstream training as well. The only person for whom it's sort of a job to organize it is me. And the Foundation provides the venue, the food, and the cost of the Lego. But everything else is provided by the mentors, teachers, and students who sign up to help. And we publish calls during the year to get more people involved.

So if I wanted to set one of these up, what are the minimum requirements? Do I need, like, two consecutive days? I think that's what you typically do for the summit. And is it four-hour days or eight-hour days, that kind of thing? I understand the venue; all that part makes sense.
But what's the actual time commitment? For us, it's two days, eight hours each. But I'm really starting to think that it's a lot of work, not only for the teachers and the mentors, but also for the logistics and all that. And for the companies that are sending people away, it's two extra nights; sometimes that's a lot. So the very minimum, I think, is one day of six hours. But that would require preparation beforehand. The technical aspects are not really that complicated, and they can be at least partially done by people themselves before coming to the live sessions. It requires a little more preparation, but you can do it. And the Lego part can last two hours, two and a half hours, by allowing more time for the preparation before and the retrospective afterwards. And a lot depends on the practice of the facilitators in your room; the role of those facilitators is quite important, because they can help trigger some of the behaviors, the social aspects, the interactions.

I did enjoy the frenetic energy of having it in 45 minutes. I mean, everyone was just running around with their hair on fire. It didn't give anybody a chance to catch their breath. That's true. I think that actually was a good thing: nobody really sat and thought about what they were doing in the broader sense. They just had a mission, and they went off and did it. Yeah. So 45 to 50 minutes covers the three iterations, but you need at least 20 minutes before and 20 minutes after for the setup and the retrospectives. That's where your two hours go. Of course.

I'd like to know if the courseware is registered by some company. Who owns the IP associated with the method you mentioned? For upstream training, all the material is under Creative Commons. It's open source. Open source, Creative Commons. So actually, that's an excellent segue to my next question, thank you for that. I was going to ask: where is the content?
So it is licensed under Creative Commons, in a project called Training Guides, out under OpenStack. If you go to GitHub, it's openstack/training-guides. It's co-located with some other training material that some people working with me have been developing for about a year as a part-time project. Actually, Stefano has done such an amazing job with upstream training that we completely changed our format and we're merging what we had into his format, so it's going to be a lot more similar. Why don't you talk a little bit about the training guides?

Yeah, I wanted to do that. So I can talk specifically about what we did most recently. When did we do it? We started in September. We actually run it like a regular team; it's an open source project like any other. We have weekly meetings on IRC like any other project, and we use the mailing list. It's a regular project; the output just isn't Python, it's RST and some shell scripts to run a training cluster. So without getting into too many details of how it works, it essentially trains people up to a couple of levels of experience using material that we put together. And we're teaching OpenStack: essentially, teaching people to be an OpenStack operator and/or administrator. We have two versions of the material out there right now.

What we did most recently, starting in September, is run six different sessions in San Francisco out of HPE's Santa Clara campus. We had some volunteers who acted as teachers, and we went through six sessions covering the material. We summarized it into slides; we used to have it in more of a book format.
But after some trials and tribulations getting people to use that, and actually making it consumable in a user-group setting, because that was really always our intent, I've been working with the user groups for the last couple of years, organizing the San Francisco one, and we wanted to teach people OpenStack rather than just give them talks. We wanted to make it more participatory. So we turned it into slides, which are more digestible in that kind of format. Six different sessions, attended by anywhere from about 80 to 150 people; it was really great participation. One thing we didn't get to do this time, it was participatory in that we had back and forth with the crowd, but we didn't actually set up any resources. That's something we aim to fix. So I'm working with the team, and we actually have a meeting later this week to figure out how we can incorporate some of the cool stuff Stefano has done into the user groups. We could just reproduce what Stefano has done with upstream training, or we could try to morph in some of the different pieces as a longer-running version of what upstream training has done, which is very, very tight over two days. People who show up to the user groups have different expectations, and going into detail over multiple sessions on how to actually operate a cloud is what a lot of the people, at least in my area, are most interested in. So being able to provide both is what we're looking to do. So I rambled a little bit.

Not at this moment. There was a revamp of the site recently, and it looks really cool, but as part of it we kind of dropped out, because all of us have full-time jobs. For upstream training, we are publishing the slides on the site now, as a draft. How do you get access to it right now? docs.openstack.org/draft/upstream-training. You can find it.
And in the next couple of days, I will send a patch to link it directly from docs.openstack.org. Yeah, the one thing that I wanted to add, that I forgot to mention, is that upstream training actually continues after the two-day training. I mentioned that we have mentors in the room. The mentors work directly with the participants during the session to help them pick low-hanging-fruit bugs, or select other bugs that are easy enough, within reach of the attendees, and set a date for a next meeting online to continue working on that bug. We consider the training completed after all the people who attended have either dropped out or completed a bug, completed the submission, and become an active technical contributor to OpenStack.

Yes? Yes, we've created two other training guides: one for administrators, one for operators. They're available in Icehouse and Juno versions; we haven't updated them yet for Kilo. For the people I work with, this has actually been a kind of side project. We don't do training; in fact, I think there's really only one person on our team who actually teaches, at a university. The rest of us were just doing this part-time because we were working with user groups. I just changed jobs, so I'm actually looking to incorporate our training for our product, Metaconda, with the training that we do and with upstream training. So I'm going to try to morph that in, so that I can get some of my devs to contribute, give it a little more oomph, and see if I can get maybe some other companies to copy us; make it a bit of a part-time gig.

Yes, the question is: who is the target student? As I understand it, it's more a potential OpenStack developer, not really an OpenStack user. Is that right? For upstream training, we teach contributors, yes, developers of OpenStack pieces.
But for the training guides, there are two other targets: operators and cloud administrators. Yeah, so the intention of the guides was always to teach four different, escalating levels of experience. And what Stefano has done is teach developers, which actually ties in relatively well with the other two guides that we created. Right now, they're not organized that efficiently, because the formatting is slightly different and the voice is slightly different. So we'll be working on that over this next release cycle, so that there's more flow and hopefully it's more consumable by other user groups. That's always been the intention, that the user groups would be able to use this stuff; that's why we were publishing it. And anybody can download it anytime they want. If they want to use it for commercial purposes, go for it. And that's exactly what I aim to do with my company. I'm going to morph some of it in: building DevStack, teaching some basics, explaining how OpenStack works, and then building our L3 orchestrator for Neutron on top of that, morphing the two together. So hopefully more companies follow our lead.

Hi, my name is Tony. I'd like to get involved in what you guys are doing. I actually have experience training OpenStack: I've done about six classes, 20 to 50 students each, over the last 18 months. I used to work for Red Hat in their partner enablement group and did various internal training. I developed the content using a lot of the material that I found in the OpenStack docs, so it's great content. I did change it quite a bit, and I have all that stuff ready on my laptop. I live in the Bay Area as well. I've just been traveling around doing stuff, and I'd like to start doing something at home so I don't have to travel too much. If there's a chance for me to come out and do something with you guys, I'd love to do it.
I also have a data center that I put up with my own cash, which I wanted to use to give back to the community a little bit. So if there's some opportunity there... I know I showed up late, sorry about that, but I'd like to do something.

No apology necessary. Yeah, we'd love to have you participate. There are Monday morning training guides meetings; if you search in a browser for OpenStack meetings and training, it'll pop right up first. We have weekly meetings; we're not having one this week, obviously. But just show up on IRC, and that's the easiest way to get started. I'm in the Bay Area. The other people on the training team, well, Stefano is in the Bay Area part of the time as well, are outside the country, so we coordinate. Long story short, we'd love to have you. Great, thank you.

Any more questions? Okay. Well, pub crawl; there's beer to be had. Thank you.