Ladies and gentlemen, thank you for waiting. We are delighted to have you here. We are now ready to resume the program for this NEC breakout speaking track. We'd like to invite Mr. Yuji Azamu, Mr. Ghanshyam Mann, Ms. Yuiko Takada, Mr. Hidekazu Nakamura, Mr. Motohiro Otsuka, and Mr. Masayuki Igawa to talk about OpenStack traps of development and use cases.

Hi everyone. Thank you for coming to our session. This session is "OpenStack Traps of Development and Use Cases." This session is lightning talks: developers and users from NEC will talk about traps, not only in OpenStack development, but also from our experience of cloud construction and operation. The individual talks are limited to five minutes each. If a talk goes over five minutes, I will ring the bell like this. That's the signal that the talk is over. The speakers are Yuji Azamu, Ghanshyam Mann, Yuiko Takada, Hidekazu Nakamura, and Motohiro Otsuka. The first speaker is Yuji Azamu. Please go ahead.

Okay. Hello, everyone. My name is Yuji Azamu. Today, I'm going to talk about trial and error of SFC on OpenStack. This is what I will be presenting today. I work for NEC Solution Innovators and belong to the Okinawa Open Laboratory team in NEC. The Okinawa Open Laboratory is a research and development organization for SDN and cloud. We have been researching SFC on our test bed since 2014. I'd like to deploy service function chaining (SFC) on OpenStack. However, it hasn't been established yet, so I tried several approaches. I'll introduce the outlines of two of them. At first, we tried to use the current Neutron network model. All service functions are connected to a patch VM. The patch VM is a Nova VM that has an Open vSwitch inside, and the SFC is controlled through the patch VM. The patch VM and the service functions connect to a C-plane (control) network. Our operator configures the SFC via an SFC manager using Ansible. However, this approach had some complications.
We need many steps for deployment and configuration. Moreover, this approach consumes many resources: networks and ports are consumed in proportion to the number of service functions. These problems were more serious than I thought. Then we thought we could make it simpler. Trial two solved the problems of trial one. We added a new API that we called the forwarding rule. An operator can control the SFC through this API, and each service function is connected to just one network. The forwarding rule consists of a forwarding rule resource and a classifier resource. This is the implementation of the forwarding rule extension. br-int is the integration bridge of Neutron. We insert a patch bridge between br-int and the service function. An operator can control the SFC through the Neutron API. I thought trial two was a good idea, so I proposed the forwarding rule to the community. This was my first contribution, and I was very excited. However, it was abandoned, because the community already had some similar specs, for instance an API for service chaining, traffic steering, and so on. I should have paid attention to the community trends. Currently, the concept of the forwarding rule has been separated into a forwarding rule spec and a common classifier spec. However, the forwarding rule spec has already been abandoned because the networking-sfc subproject has launched. The common classifier will be used by various services, for instance security groups, QoS, networking-sfc, and so on. If you're interested in SFC or the common classifier, please review my spec. Thank you for your time.

The next speaker is Ghanshyam Mann.

Hello, everyone. Thanks for coming here. I'll be talking about my OpenStack activity and how it differs from proprietary software development. Let's start. This is the agenda for my talk. I'll be talking about the OpenStack work I'm doing in Tempest and Nova, and the differences between open source and proprietary software development: first open-source project, new to the cloud domain, time zones, et cetera. I'm Ghanshyam Mann, a software developer from NEC. I've been in the community since 2012.
I'm a core developer of Tempest and an active contributor in Nova. Let's start with my OpenStack activity. This is the Tempest part. I started with improving the test coverage of the compute APIs, then implementation of response validation schemas, then service client improvements, and normal activities like reviews and bug triage. These are the Nova activities. I contributed to the Nova v2.1 APIs, which are current now; I participated in the Nova microversion design, helped with functional test improvements, and I'm improving the Nova APIs using microversions and the API docs, which is a high priority in this Mitaka release. Let's start with the differences. When I joined OpenStack development, this was my first open-source project. Previously, I worked on proprietary projects. Before OpenStack, I had heard about open source, but I had never worked on or explored what open source actually is and how a community gets involved in development. I was unfamiliar with how an open-source community works. That was the big challenge for me in OpenStack. There are a lot of people from different time zones and different organizations, and they work closely together. How do they coordinate with each other? How do they reach conclusions about design, implementation, and planning? Because if you have a lot of people with different thoughts, you have a lot of opinions, and it's very difficult to reach a conclusion about which way you want to go. Those were the questions in my mind. What helped here? I worked with experienced OpenStack developers from NEC, who are all here, and from other organizations as well. There I got the answers to those questions: how they coordinate with each other, and how people in different time zones work together. Next, I found that the PTL and the Summit play a very important role in a project for reaching the conclusions of each development cycle. For example, for the Mitaka release, every project's PTL and their team will get together.
They'll have a design session for each feature, they will discuss it, they will reach a conclusion, and they will do the planning: what to develop, who will develop it, and in which release it has to be fixed. So these two things play a very important role. Next is project team coordination, through the weekly meetings, IRC chat, and the mailing list. Through those, they coordinate with each other even across different time zones; for example, we have alternating meetings at an APAC-friendly time or a US-friendly time. And last but not least is OpenStack.org: they have a great wiki and great documents there, so if you are very new, you can get all the basic answers from there. After all that, I love working in open source, and it might be difficult for me to work on a proprietary project again, because open source is very easy, you could say, and very challenging and interesting to work in. The next challenge was that I was new to the cloud domain. Before OpenStack, I worked in the storage and avionics domains, which are different from virtualization and cloud, so this was one of the challenges. What helped here? The best learning source is working closely with other developers on code design and reviews; reviews are the best way, I would say, to learn about any domain, how things are going in that technology, and what the future planning will be. And reading the source code is the best way, anyway. As my experience grew, I learned about the cloud domain. I still have to learn a lot; I'd say it's just the start for me. The next big challenge is time zones, which I think is the biggest issue while working in open source, because people from different time zones find it difficult to coordinate with each other. In Japan we work in GMT+9, and US people work while we are sleeping, so it's difficult if we want to talk or discuss something with them.
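To make that nine-hour gap concrete, here is a small Python sketch of converting a meeting time between UTC (which OpenStack IRC meetings are scheduled in) and Japan time; the 21:00 UTC slot is a hypothetical example, not a real meeting from the talk:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# A hypothetical weekly IRC meeting scheduled at 21:00 UTC.
meeting_utc = datetime(2015, 10, 28, 21, 0, tzinfo=timezone.utc)

# The same instant as seen from Tokyo (GMT+9, no daylight saving).
meeting_jst = meeting_utc.astimezone(ZoneInfo("Asia/Tokyo"))

print(meeting_jst.strftime("%Y-%m-%d %H:%M %Z"))  # 2015-10-29 06:00 JST
```

A meeting that is late evening for US contributors lands at 6 a.m. the next day in Japan, which is why projects often alternate between two meeting slots.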
So that is the biggest challenge. Okay, I'll just finish. These are the time-management things: we worked around it with IRC and with habits like doing reviews in the morning. This is the summary; that's all from my side. Let's call Nakamura-san up here.

Thank you very much. I'm Hidekazu Nakamura. I have been working on projects that construct clouds using OpenStack since the Essex release. That was when I became aware of the open-source community for the first time. The OpenStack Summit has finally been held in Japan, so let me go back to my first OpenStack Summit. Thankfully, I attended an OpenStack Summit before. I was overwhelmed by beautiful San Diego and the number of developers. Have you ever been to San Diego? No? Many developers were talking with each other in English, and I realized that developers are the same human beings as me. Of course, as you are: the community is composed of human beings like us. I tried to post a patch to the community. My patch only removed a single slash from a default input value, but it was merged. The status "merged" encouraged me very much. Since then, I have contributed some patches. Many Japanese developers are coming to this summit. My prediction: the number of committers in Japan will double, because they are going to commit to OpenStack after the summit. Next, let me talk about an issue. We ran a performance test: boot 15 VMs almost at the same time. Several companies had explained tuning parameters at events, but I couldn't understand which parameter combinations were valid for our environment. So at first we ran with all default values, but at most 15 VMs could be booted. The solution: for three weeks, we repeatedly tuned by referring to parameter values that had been explained at OpenStack events. And not only tuning: we also added a Neutron API node, because RPC workers were not implemented in the Grizzly release. Finally, we got it working. The parameters are listed here for reference. These values are valid for Grizzly only.
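For illustration, the kind of worker-process knobs involved look like this on modern releases; the values below are hypothetical placeholders, not the Grizzly-era numbers from the slide:

```ini
; Hypothetical illustration only.

; /etc/neutron/neutron.conf
[DEFAULT]
api_workers = 4     ; HTTP API worker processes
rpc_workers = 4     ; RPC worker processes (not available in Grizzly)

; /etc/nova/nova.conf
[DEFAULT]
osapi_compute_workers = 4
metadata_workers = 4
```

Reasonable values depend on CPU count and expected concurrency, which is why the tuning took repeated measurement rather than copying someone else's numbers.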
I mentioned the Grizzly release, but Nova and Neutron have since improved. Neutron has implemented RPC worker processes and is much faster, so I haven't re-tested, but I believe the solution is upgrading OpenStack. Finally, the lessons learned: the OpenStack community is open. The community is composed of developers like us. Oh, and I missed integrators; I missed integrators. The contribution process is well documented, and there is Upstream Training for new contributors. I attended Upstream Training before the summit; it's good. There are reliable resources about OpenStack, and many events related to OpenStack are held, where we can get real examples of not only development but also operation. That's all, thank you. The next speaker is Yuiko Takada.

Good afternoon, everyone. I'd like to start my presentation; the title is "Diversity of the OSS Community." First, I will introduce myself. My name is Yuiko Takada, and I'm a software developer at NEC working on OpenStack. I'm contributing to the Ironic and ironic-inspector projects, and I'm a core developer of ironic-inspector. Now, have you ever posted a patch to the community? If you have, as you know, there are a huge number of patches waiting to be merged, and sometimes it is very difficult to get your patch merged, right? I'd like to share an experience where the importance of my patch was not understood correctly. One day, I tried to use Ironic on a Japanese-locale OS, but it failed. The error log looked like this: the cause, as you can see, was an ASCII codec error. Some message catalogs on Linux are translated into many languages, and this error occurred because Ironic doesn't force the standard locale. So I posted a patch to fix this bug. My solution was to use the standard locale for all Linux commands.
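That fix can be sketched in Python; the helper name is mine, but forcing `LC_ALL=C` in the child's environment before running external commands is the general technique:

```python
import os
import subprocess
import sys

def run_with_c_locale(cmd):
    """Run an external command under the standard C locale so its output
    is always in English, regardless of the host's locale settings."""
    env = dict(os.environ, LC_ALL="C")  # override LANG/LC_* for the child only
    result = subprocess.run(cmd, env=env, capture_output=True, text=True)
    return result.stdout.strip()

# The child process sees LC_ALL=C, so tools emit untranslated English
# messages that ASCII-only parsing can handle.
out = run_with_c_locale(
    [sys.executable, "-c", "import os; print(os.environ['LC_ALL'])"]
)
print(out)  # C
```

The parent's environment is untouched; only the spawned command sees the overridden locale.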
The reason is that, currently, the messages of just one command are translated into Japanese, but many more messages will be translated, no doubt. In that case, my run fails every time because of this error, and we would have to fix it every time, which is ridiculous, right? But my idea was not approved, because people in the community were afraid that changing the locale might break something. I think that if there were people from many countries in the community, they would have understood my idea correctly. Issues similar to my experience keep occurring. It's the same as what Ghanshyam-san said: for example, the time zone issue; most meetings are centered on US or European time zones, right? And we also face a language issue. In the OpenStack community, meetings are held on IRC, and the Design Summit is held as face-to-face conversation. That is very difficult for non-native English speakers. The issues I introduced will not be noticed or resolved unless the people who have trouble speak up. In conclusion, diversity will grow as people from many countries join the community, so let's all join the OpenStack community and make OpenStack better. That's all, thank you. The next speaker is Motohiro Otsuka.

Hi all, and welcome to our LT session. It's very difficult to come all the way here, so thanks for your effort. Today, I'll talk about developing Magnum, with its traps and difficulties, from NEC. The agenda is here. First, the introduction: who am I? I'm not sure who I am, but I do know this much: my name is Motohiro Otsuka, known as yuanying on the OpenStack IRC channels, Twitter, and GitHub, and I'm a developer from NEC Solution Innovators. I have also been a core developer on the Magnum project since last year. Do you know Magnum? Maybe not, right? The Magnum project is known as Container as a Service, but actually, Magnum is an installation service among the OpenStack projects, like Trove.
Magnum is responsible for installing Kubernetes and managing the lifecycle of Kubernetes and Docker Swarm. Just recently, we released the Liberty version of Magnum, but during development we came across a lot of difficulties and traps. At the previous summit in Vancouver, the PTL of the Magnum project said that Magnum would become production-ready, but at that time Magnum was in a very, very immature state. So I was confused: really? In the first place, a Kubernetes instance installed by Magnum didn't have authentication. That means not only the tenant user but anyone could use a Kubernetes instance installed by Magnum. So I ask: can you use Magnum in your production environment? Maybe not. Kubernetes uses TLS authentication to authenticate users, so the work that was given to us was implementing TLS support in the Magnum project. Initially, we thought it would be very easy to implement. Magnum uses Heat templates to set up the Kubernetes instances, so we believed that fixing the Heat templates would complete the work, but we forgot about the client side. TLS client authentication requires client certificates signed by the same certificate authority, and Heat can't provide a client certificate to the user. Given that, we decided that the client certificate would be generated on the Kubernetes node, that is, on the server side. (I lost my place in my script.) So the Kubernetes node generates the client certificate, and the user can fetch it from the Kubernetes node over SSH. Okay. But we had a lot of troubles. The biggest trouble was that my colleague, who was mainly implementing this feature, moved to another company. It was very difficult to take over and finish the implementation, but finally we implemented it. So please use Magnum and judge for yourselves whether you can use Magnum in your production environment. Thanks.

That's the end of this session. If you have any questions, please talk to us after the session. Thank you very much.