So I'm going to be talking about localization. Localization has been a big headache ever since I started working for the Linux Foundation 11 years ago, and my interest is in how we can get more people working on localization with a lot of joy and a lot of productivity. That's the goal of my activity around localization. These are the key topics of this talk. One thing I'd like to touch upon is how technology can actually solve the issues with localization. Localization has largely been manual work, but by taking advantage of technology we can make it easier and probably more fun. The second thing is: if we do take advantage of technology, what are the tools we can use? And the third, probably the most important thing, is how we can make localization work more fun and productive. Those are the three things I'd like to cover in this presentation.

As I mentioned, localization is a big headache for local open source communities. The impact of the language barrier on the industry in non-English-speaking countries is huge. This is particularly true, unfortunately, in Japan, because Japanese people's English proficiency is one of the lowest in Asia, ranked 53rd in the world. Without localization, new technologies originating overseas will not spread in non-English-speaking countries like Japan. So we need localization for local communities to catch up with the latest technology trends from overseas. It is impossible to make everyone able to read and communicate in English, but it is possible to localize more documents, and to localize them a lot faster, by taking advantage of technology.
So localization is a big headache, but I would like to solve this problem with technology and community. To tackle this issue, first of all, let's understand what localization work actually is. Localization work is made up of two elements: translation and interpretation. Translation is relatively lower-value-added work, because it requires only language skill, whereas interpretation requires several things, including specific technical knowledge. For example, if a new technical term is coined in English, the person doing the interpretation work has to understand what that technology is and what the best local terminology would be, for example in Japanese. So interpretation is really high-value-added work. Today, both interpretation and translation are largely manual work done by humans. But in the future, the lower-value-added part, translation, can be replaced by machines and technology, while interpretation will remain human work, because it requires very specific technology or industry knowledge.

So to make localization work more productive and fun, we need an open and common platform for open source communities. To be more specific, we need a common platform so that the translation part, not the interpretation part, can be done by machines and tools. My hope is that we join forces to build a common platform to automate the translation part. What do we need to make that happen? I'm thinking these are the three key requirements. One is that everyone uses the tool to translate and shares the translation memory.
This is the important part: not just to do the translation, but to share the translation memory. That's the first thing. The second is a trained machine translation engine that is optimized for the open source industry. There are machine translation engines out there, though probably not many people actually use them; for us to use an engine efficiently, it has to be optimized for open source industry usage. And third, and again this is the most important part, build the mindset and process so that translation is done by the community, never alone, and the results are shared with the community. Those are the three things that have to happen for localization work to become more fun and more productive.

I said that sharing the translation memory is quite important. So what is a translation memory, and why does it matter? A translation memory is basically a database of translated segments. A segment can be a sentence, a paragraph, or a sentence-like unit, such as a heading, a title, or an element in a list, that has previously been translated. One piece of good news is that there is an industry-standard file format for translation memory exchange: the TMX file format. So people are able to generate translation memories in a standard file format, share them, and improve them by collaborating. And regardless of which tool people use, they can generate and share the same translation memory. That's good news.
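To make this concrete, here is a rough sketch of what a TMX memory looks like and how it can be read with standard tooling. The document below is a minimal, hypothetical example I made up for illustration, not an excerpt from any real project:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical TMX 1.4 document: one translation unit ("tu")
# pairing an English segment with its Japanese translation.
TMX_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<tmx version="1.4">
  <header srclang="en" datatype="plaintext" segtype="sentence"
          creationtool="demo" creationtoolversion="0.1"
          o-tmf="demo" adminlang="en"/>
  <body>
    <tu>
      <tuv xml:lang="en"><seg>Open the container.</seg></tuv>
      <tuv xml:lang="ja"><seg>コンテナを開きます。</seg></tuv>
    </tu>
  </body>
</tmx>
"""

def load_memory(tmx_text, src="en", tgt="ja"):
    """Build a {source segment: target segment} dict from TMX text."""
    memory = {}
    root = ET.fromstring(tmx_text)
    for tu in root.iter("tu"):
        segs = {}
        for tuv in tu.iter("tuv"):
            # xml:lang lives in the reserved XML namespace.
            lang = tuv.get("{http://www.w3.org/XML/1998/namespace}lang")
            segs[lang] = tuv.find("seg").text
        if src in segs and tgt in segs:
            memory[segs[src]] = segs[tgt]
    return memory

memory = load_memory(TMX_SAMPLE)
print(memory["Open the container."])  # → コンテナを開きます。
```

Because the format is plain XML with this simple structure, any tool in the community can produce and consume the same memory file.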
There are several benefits to using translation memories. One is that the translation tool will translate the same sentences, words, and sentence-like units automatically, so we avoid translating them over and over. That is a big benefit. And by doing that, specific terminology gets translated consistently into the local language. This is important. For example, the English word "container" might be translated one way by person A and another way by person B. But if the community gets together and uses the same translation memory, this issue is solved: a container is a container, with one single translation that everyone in the community uses. Also, and this is a similar point, the community can set translation rules through the translation memory. A community might want to keep one term in English, while another term should be translated into Japanese; the community can set rules like "this term stays in English, this term gets translated." These are the big benefits of using translation memories, so they matter a lot.

So what do we do with translation memories? This is what I think is the ideal process. First, get the translation memories from the community repository. Then, using those memories, work on the translation, which creates new translation memory. Share that new translation memory with the community, merge it with the existing translation memory, update the community translation memory in the repository, and keep this cycle going.
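The merge step of that cycle can be sketched in a few lines. This is only an illustration of the policy, with made-up segments; the point is that existing community entries win over conflicting retranslations, which is exactly what keeps terminology consistent:

```python
def merge_memories(community, new_entries):
    """Merge newly created translations into the community memory.

    Existing community entries take precedence, so terminology stays
    consistent; genuinely new segments are added.
    """
    merged = dict(new_entries)
    merged.update(community)  # community translations win on conflicts
    return merged

community = {"Open the container.": "コンテナを開きます。"}
new_entries = {
    "Open the container.": "入れ物を開く。",        # conflicting retranslation: dropped
    "Close the container.": "コンテナを閉じます。",  # new segment: added
}
updated = merge_memories(community, new_entries)
print(len(updated))  # → 2
```

A real merge also has to handle deletions and corrections (replacing a bad community entry), which is where the manual process described later gets painful.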
By repeating this process over and over, the community gets more and more translation memory and less and less duplicated translation. That's the ideal process.

Let me touch upon some of the available translation tools. One tool that is widely used is OmegaT. This is an open source translation tool that anyone can download and use. I mentioned that there is a standard translation memory file format, TMX, but there are actually several other file formats, and OmegaT is compatible with a wide variety of them, such as TMX and TTX. OmegaT can also translate over 30 document formats, such as Microsoft Word, Excel, and PowerPoint, and of course the OpenDocument formats. One of the good things about OmegaT is that it has interfaces to popular machine translation engines such as Google and DeepL. So when you use OmegaT, you can use the community translation memory, and you can also take advantage of machine translation engines like Google and DeepL. Another benefit of OmegaT is that the tool works well for group work. If you want a tool for community translation, OmegaT is one good option that suits the collaborative nature of the work. One downside is that it is not an easy tool to learn and use. But it is really a great tool, and at the Linux Foundation in Japan we have been using OmegaT a lot over the past several years.

The other available tool is the TexTra translation editor. This is an online tool provided by the National Institute of Information and Communications Technology, or NICT, a Japanese government-funded research institute.
The TexTra translation editor is not an open source tool, but anyone can use it for free, and the license terms are very clear for open source usage. Let me explain what that means. TexTra is the editor you can see in the graphic on the right, but behind it there is a machine translation engine, the TexTra translation engine. If you use other machine translation tools, you might run into a question: who owns the outcome of the translation? With machine translation, you are not the one who actually did the translation; the machine did. So who owns it? Is it free? Is it okay to share the outcome of the translation with anyone? This is actually not clear with many translation engines. But TexTra is really clear that we can use it for open source projects and share the outcome of the machine translation for free. That's one thing. And once again, TexTra takes advantage of a machine translation engine. Another good thing is that there is no need to install software on your PC, so it is quite easy to get started. Just like OmegaT, it is capable of translating many different file formats. TexTra also offers a WYSIWYG mode, which is not available in OmegaT. And what I personally feel is one of the biggest benefits of TexTra: the translation memory is automatically merged into the engine, and the engine gets trained if everyone uses it. It is also capable of generating a glossary from the translation memory.
But, you know, however, one downside, one issue is, you know, in terms of group translation work, this is not the best solution at this moment. So that's the one downside. So I brought one of the cases study where actually a community got together and use the tools and work on, try to build a process of the community translation, which is the Hyperlaser Japanese Documentation Working Group. So they, you know, they are the group of the translator and who work on the Hyperlaser fabric documentation localization. And the group, you know, the group is using the GitHub to collaboratively translate the documentation. And also the group also partly use the translation memory, which is maintained on the GitHub. However, this is the issue. However, they are still facing difficulty to update the translation memory by aggregating feedbacks from the community translator and merge them to update the existing translation memory. And also, you know, deleting the word, which are not appropriate. So, and the reason why this happened is because today it is simply not easy to share a translation memory and maintain at one location. So, although this Hyperlaser Japanese documentation working group is trying to do the way, the way I described in the earlier slide, but they are facing actually running into the issue of actually doing it because it is actually hard to actually, you know, it's actually share the translator, generate the translation and share it and maintain it. So we are going to make one step forward to remove that needle of localization community. So we identify that this is the issue, right? So it's difficult to, you know, although it's important to share the translation memory, but it is not easy. So this is a needle, right? Of the localization community. So I would like to do something about it. We are going to make one step forward to do that. So today I'd like to make one announcement about NICT. 
I mentioned earlier that NICT is the organization that developed the TexTra translation engine. The Linux Foundation will collaborate with NICT on TexTra. By using the tool for community translation, we should obviously be able to get much better productivity, and with a machine translation engine optimized for open source usage, productivity can become much higher still. So this is what NICT and the Linux Foundation in Japan would like to achieve: create a translation engine that is optimized for open source usage. To do that, NICT and the Linux Foundation have reached an agreement to work together to train the TexTra machine translation engine and optimize it for open source community use. That is the agreement, and this is today's announcement. Moving forward, the Linux Foundation will host TexTra on a Linux Foundation server, which will be opened up for the local open source user community to use. NICT and the Linux Foundation will also collaborate to add a group work feature to TexTra. As I mentioned, TexTra is weak at group work functionality, so we are going to try to add that feature so communities can easily share translation memories. We also want to build an ecosystem around the machine translation engine, such as commercial and open source tools for CMSes (content management systems, like WordPress-related tools), Git repositories, instant messaging, social media, and so on. So together, NICT and the Linux Foundation will try to solve the pain point: the difficulty of sharing translation memories within the community. The value of using an optimized machine translation engine is as follows.
As I mentioned, it is difficult to generate a translation memory, share it, and merge it with the existing one. In the community translation process, these five circles on the slide are largely manual work. But with machine translation, these steps can be automated, because the translation engine can basically take care of sharing the translation memory, merging it with the existing one, and updating it, all automatically. This way, we can solve the issue the Hyperledger Japanese Documentation Working Group faced: they had difficulty sharing, but that will be automated with the machine translation engine.

And finally, my last point: after all, people matter most. We work on the tools, we work on the translation engine, but in the end it is people who make localization work fun and productive. One of the things we have to do is build the mindset and the process to share the translation results and memory. The mindset is basically that localization is fun, collaborative work; it is not something people do alone by themselves. We also need a common repository, not only for the localization outcome but for the translation memory as well. And we need a work process: it would be good to have maintainers, committers, reviewers, and a sign-off process, just like we have for developing open source software. That is one thing we have to do. Also, community localization can become more fun because of learning through localization: we are able to learn new things, since many technologies are created in the English language.
The people who do the localization are actually the ones learning earlier than anyone else. That is one benefit translators have. Also, by taking advantage of the open source way of translating, translation becomes collaboration with others, not a standalone project. We get feedback on our work, we build a higher reputation, and we get appreciation from the community. That is another benefit translators can get if we do it the community way. And if we take advantage of technology, we can eventually focus on the high-value-added work, the interpretation part rather than the translation part, because the translation part will be taken care of by the machine. People can focus on interpreting new technology: understanding the context of a new technology and creating the appropriate words in the local language. That is the fun part. These are the things we have to promote, and we have to help people understand them, for localization work to become more fun and productive.

If you're interested, please join us: we have a collaborative translation meetup, held in Japanese. This is the link to the meetup.com group we host. Due to COVID-19 we have not held meetings for nearly a year or so, but I certainly hope to re-energize this community. And if you would like to know more about the status of this activity, please reach out to me on Twitter at @Nori_Fukuyasu. One final word I would like to leave you with: community translators of the world, unite and share the translation memory! Thank you. If you have any questions, please let me know; you can type them in the chat window and I will answer.
In the chat window, Hiroshi Miura-san has been adding some comments. Miura-san has actually been working to connect OmegaT with the TexTra translation engine, so my hope is that we can keep using OmegaT while also using the TexTra translation engine. He mentioned that in the chat window. Any questions? I guess I can conclude my talk now. Once again, thank you for joining my session. If you have any questions, please reach out to me on Twitter or participate in our collaborative translation meetups. There's one question: is the TexTra agreement only for Japanese and English, or does it cover other languages? Yes, it covers other languages; it is not specific to Japanese. Any other questions? Okay, thank you so much for joining. I am looking forward to hearing your feedback, and I hope you will join our community. I also really hope that you will enjoy the rest of the event. Thank you so much. All right.