The microphone is working. Thank you all for coming. I know it's right after the lunch break, so happy digestion. Sorry for the delay; we encountered some technical difficulties. It's ironic that we are talking about new technologies and edge computing during this panel while we are having trouble getting even a PDF file onto the screen. But we will do our best to catch up with the little delay, and we may run into the break for two or three minutes, depending on how the panel is going. So we will talk about the OSF Edge Computing Group during this panel. We have quite a few panelists, and I will let them introduce themselves. We will start with that and then dive into the details. Myself, I'm Ildikó Váncsa. I work for the OpenStack Foundation. I do a lot of things, and among those, I'm one of the co-leaders of the Edge Computing Group, focusing mainly on edge and NFV topics within the foundation. And we have slides. So I'll let the panelists introduce themselves, and then we will deep dive into what the Edge Computing Group is, what we are doing, and what you're here to learn about.

Hi, I'm David Patterson. I work with Dell EMC. I have been working with OpenStack for about five years now, and for the last eight months or so concentrating on edge and implementing edge technology through OpenStack.

Hi, my name is Gergely Csatári. I work in the open source program office of Nokia. I've been working on OpenStack for about three years, and lately I'm active mostly in the Edge Computing Group.

Okay, hi, everyone. My name is Qihui Zhao, and I'm from China Mobile. I've been working with OpenStack for two years. It's not very long, but my work is all about OpenStack, and currently I'm also working in the OpenStack community and the OPNFV community. Thanks.

Hi, my name is Shuquan Huang. I work for 99Cloud. I have actually worked a long time in the OpenStack community.
My first job, I joined the community in 2011, and I then moved to 99Cloud. Right now I'm also a TSC member of StarlingX, and I focus on the edge computing area.

Hi, my name is Shane Wang. I have worked for Intel for almost 15 years. I used to work on virtualization and also OpenStack. Right now I'm working on networking and storage, including Ceph, ONAP, and edge computing projects like StarlingX and Akraino. Thank you.

Thank you. With that, let's deep dive into some details about the working group itself. Under the OpenStack Foundation umbrella, we started to work on edge-computing-related topics two and a half years ago now, or maybe even a bit longer than that. We have a foundation top-level working group that's exploring the area of edge computing. The focus of the working group is really to gather use cases, understand the requirements of these different edge computing use cases, work on reference architectures, and also test them. We are working together with both OpenStack Foundation projects and adjacent communities within the ecosystem. The working group's focus is a bit higher level: we are not doing the coding ourselves, but we are trying to make sure that all the projects that are looking into the artifacts the working group is generating have a good idea of what edge computing is and what direction they should be moving in when they are working on coding, testing, and integration type of work. The working group is not focusing on any industry segment specifically. In our experience, edge computing, at least at the starting point, was more telecom and operator heavy, but our use cases and our work are not focused only on that; we are really trying to look into all industry segments and learn from all the use cases out there. The working group is also working on white papers. The slide has the link to the first white paper that the group has produced.
The white paper is also available in several languages besides English, so you can access it in Chinese; the link is on the slide. We are currently working on the second white paper, which will talk about the reference architectures and the testing work. Just a quick reminder that this is open source, so we would like all of you to participate. During the panel we will talk about the working group activities, both the global working group and the group we also happen to have locally in China focusing on edge computing; we will talk about that here as well. You can find information on the slides about how we are accessible and where all our resources are. So with that, I will give the word to Gergely and David first to talk about the reference architecture and testing work that the global group is working on.

So we have two different test efforts going on right now. One model is the distributed control plane, which has been primarily implemented by the StarlingX folks; I've helped them a little bit with deploying a workload there. The other is the centralized control plane, which basically spans the large and medium edge: a central control plane with compute nodes at the edge is the model I've been working with. In both cases, we've been working with Packet (packet.com), who were kind enough to provide us the testbeds. One of the blockers with the piece I'm working on is that it runs on TripleO and requires out-of-band access, so that's got me a little hung up. But the other model we actually have working, and you can touch it and try it out for yourself if you want to see it. Gergely, do you want to add anything?

And it's important to mention that these architectures are so-called MVP architectures, so these are, let's say, the first steps for edge computing, and we are working on figuring out how to evolve these architectures and how to add more features to them. That is an activity that we are having now.
Actually, as part of this, we'd like to get some questions and dialogue going, so if anybody has questions, feel free to speak up.

Could you just give a brief synopsis of what specifically is different from centralized to distributed, just so that everyone's on the same page?

Excellent question. Okay, thanks. So in the distributed control plane model, you would have a central Keystone, and at the edge you would have an additional Keystone, and those two databases would be synchronized. In the other model, you have a central data center with a central control plane, but it doesn't necessarily sync to the edge node, right? So in my case, it's just the compute service that would be running at the edge. Anything to add?

Yeah, the main difference is how these architectures tolerate network partitioning; that's an important factor when we are talking about edge computing.

Do we have any more questions for the panel at this point?

Sorry, the small edge, how do you define that? What is the memory and the CPU, or how do you define the small edge?

We have a table somewhere in the wiki, I think, about these sizes, but I would define the small edge as a one-compute-node thing.

Okay, just one compute node, but is the memory the same as the large or medium edge?

I think it can be smaller.

Okay, but what's the difference between the large edge and the small edge?

So the large edge is really a data center, so it's tens or even hundreds of nodes. Medium, I would say, is around one rack, meaning five to ten nodes, and the small is one.

Okay, got it. Thank you.

Anyone on the panel disagreeing at this point? If you look at the wiki for the working group, you can see it's kind of t-shirt sizing: small, medium, and large, and what those things actually mean as far as number of nodes and what kind of hardware is there for RAM, CPU, that kind of thing. Any further questions from the audience?
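The partition-tolerance difference between the two MVP architectures discussed above can be sketched with a small toy model. This is purely illustrative: the class and site names are mine, not OpenStack APIs, and it only captures the one behavior the panelists describe, namely that an edge site with its own synchronized Keystone keeps authenticating when the WAN link to the central site goes down, while a site that depends on the central Keystone does not.

```python
# Toy model (not OpenStack code): how the two MVP architectures behave
# when the WAN link between the central site and an edge site is cut.

class Site:
    def __init__(self, name, has_local_keystone):
        self.name = name
        self.has_local_keystone = has_local_keystone
        self.wan_up = True  # link to the central control plane

    def can_authenticate(self):
        # Distributed control plane: the edge runs its own Keystone with a
        # synchronized database, so auth survives a network partition.
        # Centralized control plane: auth needs the central Keystone,
        # so it fails while the WAN is down.
        return self.has_local_keystone or self.wan_up

distributed_edge = Site("edge-1 (distributed)", has_local_keystone=True)
centralized_edge = Site("edge-2 (centralized)", has_local_keystone=False)

for site in (distributed_edge, centralized_edge):
    site.wan_up = False  # simulate the partition
    print(site.name, "auth during partition:", site.can_authenticate())
```

The same skeleton could be extended with the t-shirt sizes from the wiki (small = one node, medium = roughly a rack, large = a full data center), but the partition behavior is the part that actually distinguishes the two models.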
Okay, so at this point, I mentioned that we have a group here in China as well, which we formed around half a year ago. I remember talking to Shane and Shuquan at a meetup in China in April and hearing about a lot of activities in China, and we talked about having a local group here to make sure that the language barrier and time zone difference are not in the way of progress. So I would like to ask Shane and also Shuquan and Qihui to talk about the local group here.

Okay, I'll give a brief introduction to the local edge group. Early this year, we discussed forming a local edge group during the hackathon, and then we got to work. Shane, myself, and Qihui rotate hosting the local edge group. Right now we have a WeChat group with maybe hundreds of people in it. Every Thursday afternoon at 3 p.m. we have a weekly meeting to share the latest use cases or technology about the edge. From the user's perspective, China Mobile will share their requirements about the edge, and Intel will share open source technology such as OpenNESS and StarlingX. We also invite other companies into this group to share what they did at the edge. As it happens, today is also the Shanghai Import Expo, and Shane has brought us a video about an edge computing use case with intelligent glasses. We will show the video to see how edge computing is actually happening around us. Yeah, so we realized that edge computing is coming. This video I collected from the local TV station, and it's happening today.

You mentioned the architecture of the control plane. Are you discussing the architecture of the data plane? How should the data flow between the edges and from the edge to the center? Is it within the scope of the working group?
Or is there any architecture design for that?

We are discussing only data that relates to the infrastructure, like images, volumes, and stuff like that. Nothing about the application-level handling of it. Are you talking about workloads?

Yeah, I'm interested especially in the networks, how the networks should be connected between the edges. Or is it just a group of small OpenStack clusters connected with IP VPN or something? What's the architecture design for the network side?

That is something we still need to figure out, so it's on our list for the future, the exact detailed design for that. But if you have a solution already, then...

So I know on the small edge, it's just the Neutron agent running on the node and the message bus; that's basically the only thing talking to the control plane. But depending on the edge model you're going with, you may have a full network overlay. I'm not quite sure how StarlingX does theirs, but that would be something to look at: how StarlingX is implementing their networking. Does that help at all?

Okay, thank you.

Okay, we can go back to the... Yeah, that is to say, the security person in a subway station wears the glasses to monitor the population in the station and provide some prediction or warning. Even the camera in the station can help to detect fire, things like that. It integrates 5G and edge computing, and this technology is right now deployed for the Shanghai Import Expo. Yeah. So we realize that edge computing is coming. And as the saying goes, there is no one size fits all, so we are not focused only on StarlingX; StarlingX is also part of it. People want to use different technologies or different solutions to solve the edge computing problem. So we have StarlingX, we have OpenStack, we have ONAP, we have Akraino. We also have proprietary software or proprietary solutions; we also have China Mobile's Sigma solution. Because we know the time zone issue and the language issue.
So we want to gather the people in Asia-Pacific, especially in mainland China, to speak Chinese and to share the use cases in edge computing, and also to share the technologies, share the pain points, and share how to fix the problems in the meeting. So we formed the local edge computing working group under the OpenStack Foundation to discuss all of the open source solutions. Yeah, that's our purpose. We also encourage people to join us; even just listening is no problem, but if you can share more, that's even better. If you are interested in joining this group, you can scan the QR code and I can invite you guys to join. And we just mentioned Qihui from China Mobile, so next, Qihui will introduce China Mobile's thinking on edge computing.

Okay, okay. So thanks for mentioning Sigma. It's China Mobile's PaaS platform, which connects the network and also the cloud platform. But I don't know very much about that platform, so if you are interested in it, you can contact me and I will make the introduction. And also about the edge working group, Asian part, or whatever the name of our group is. Sorry. Okay, okay, whatever. So we actually talked more about the vertical industry solutions, like the glasses, and also StarlingX is doing some project with a shrimp farm. Yeah, that's very popular. So we saw that a lot of vertical industries are using edge computing, but it's only for on-prem solutions; it's case by case. So we think that maybe for the future, or maybe for next year, we could move forward from the specific use cases to some general edge cloud. For example, we can focus more on the collaboration between the clouds: the central cloud and edge cloud, edge cloud and edge cloud, and the network and the cloud, things like that.
And the second thing we think our group may want to focus more on is the unified management of different clouds, because we think that no matter whether you are the provider of the cloud or the network, or the user of the network and the cloud, as long as you are using multiple clouds, you have to have unified management. So maybe from the user side, we want to focus more on those two parts. And also we are trying to dig more into capabilities like the PaaS capabilities and SaaS capabilities on the edge side. So maybe that's our future plan, or something like that.

Okay, thank you. As you heard, Qihui shared China Mobile's thinking about edge computing. By the way, I mentioned we also have another panel, inviting China Unicom and China Telecom; they will sit together to talk about their edge computing efforts and what they have accomplished in their work. So welcome, you guys can join us. The panel should be tomorrow at 4 p.m., right? Yeah, you can check the schedule; sorry, I forget the exact time. Right now, I think I can pass the microphone to the audience to see if you have any questions for the China-specific edge computing group. Anyone have questions?

It's not specifically for the China group, it's more in general. The picture on the slide shows sort of how you would distribute and federate a control plane using OpenStack for edge architectures. Something like StarlingX takes a different approach, in that it will just use OpenStack wherever it likes and orchestrate applications on top of them. I just wanted to understand what the viewpoint is from the panelists on trying to move the complexity of orchestration down into a federation layer, as depicted in the picture, versus simply deploying lots of OpenStacks in different places and then using something to orchestrate on top of them. I don't know if anyone wants to offer opinions.

Yeah, I can answer you from the StarlingX point of view.
As you know, in StarlingX there is a feature called distributed cloud. It can federate different sub-clouds, and each sub-cloud can report monitoring data or alarms to the central cloud, so the central cloud gets all the data and it's convenient for the operator to monitor all the edge nodes together with the central nodes. So that's the mechanism of how StarlingX implements it.

So the prototype I've been working on uses a distributed compute node, which is a Red Hat kind of thing, and there's only one cloud, right? All the compute nodes are essentially part of the same cloud, just in different availability zones. So you could have one orchestrator sitting at the central cloud, using Mistral or whatever it is, for marshaling workloads to different availability zones; that's how that would work. Does that help answer your question?

Any further viewpoints from the panel or any further questions from the audience? Who's looking to do edge computing, or is making forward progress into doing edge? What kind of workloads are you guys working on, or use cases rather?

So we are focusing at the moment mainly on the telco side. Lots of the work going in at the moment, especially on the Glance side, is getting multi-store stably deployed across sites, so that we can handle image management centrally and make sure that the same image doesn't need a million different image IDs because every single site needs its own. And that's one of the benefits of having that one single deployment instead of having multiple stacks around: lots of these resources you can manage through one single pane of glass instead of needing to handle that outside of OpenStack.
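The multi-store benefit just described can be illustrated with a toy registry. To be clear, this is not the Glance API: the class, method names, and store names below are all hypothetical, and the sketch only shows the idea that one centrally managed image ID can be backed by copies in several per-site stores, rather than every site minting its own ID for the same image.

```python
# Toy sketch (not the Glance API): one image ID, many backing stores,
# mirroring the idea behind Glance multi-store described above.

class ImageRegistry:
    def __init__(self):
        # image_id -> set of store names holding a copy of the image
        self.images = {}

    def register(self, image_id):
        # The image is created once, centrally, with a single ID.
        self.images.setdefault(image_id, set())

    def copy_to_store(self, image_id, store):
        # Roughly what importing the image into an additional store does:
        # the data is replicated, but the ID stays the same.
        self.images[image_id].add(store)

    def stores_for(self, image_id):
        return sorted(self.images[image_id])

registry = ImageRegistry()
registry.register("ubuntu-20.04")  # one ID, managed centrally
registry.copy_to_store("ubuntu-20.04", "central-ceph")
registry.copy_to_store("ubuntu-20.04", "edge-site-1")
registry.copy_to_store("ubuntu-20.04", "edge-site-2")
print(registry.stores_for("ubuntu-20.04"))
# ['central-ceph', 'edge-site-1', 'edge-site-2']
```

Without multi-store, each edge site would hold an independent image record, and keeping them consistent becomes exactly the kind of out-of-band bookkeeping the panelist says they want to avoid.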
Obviously we have lots of work to get there, and lots of the image management and store management side is still unimplemented and a work in progress, but we at Red Hat are working a lot at the moment to build our own reference architectures, and we are working together with our partners to get that moving onwards, get those actual production deployments going out, get the feedback loop back, and see what we actually need and what the important bits are to move onward.

Which brings me to a quick question: is your reference architecture taking into account that Glance Registry has been deprecated for almost two years now and would have been removed in Train if we had just had the time to do it? I'm wondering what your take is on that, because it seems to still be part of your MVP.

I actually missed what you said. You said it was deprecated? Glance Registry was deprecated. The distributed Glance Registry? Glance Registry was deprecated. That'd be a question for the StarlingX guys. Well, I think we just kind of missed the memo that we should remove those from the diagrams; we'll work on that. So come to the PTG session on Friday and make sure that we cross it off, and let us know what the new registry situation in Glance is.

Any further comments or questions from the audience? If not, I would just ask probably a last question to the panel; I'll check the time. What do you think the use cases and activities are that the working group should focus on, everywhere, globally, locally?

So I think we've got a good start on implementing some of the MVPs. I think we should carry on with that. Once we have functional MVPs, we should start trying to implement the actual use cases. I started doing that with the caching one, but I'd like to see, you know, some IoT and interesting things like the video.
It would be great if we could get those kinds of workloads running on top of the MVP; that would be pretty compelling.

I would say three things. One is getting the verification of our MVPs, what you are doing; that's very cool. Another thing is stabilizing what we currently have, so adding more test cases around, for example, Keystone federation and the features that are new and that we are using. And the third one is to add new features to what we currently have, like looking into how the networking should be figured out, how we do the provisioning of the edge clouds, and so on.

Yeah, regarding the China local ecosystem, I think the working group should focus on encouraging more users to talk about their thoughts and their use cases, and we can gather this material and maybe write a white paper to properly summarize all these use cases, because with input from the end users, we will know what software we're going to implement based on those true requirements. And in the short term, we are doing the second white paper; I think that is a really good way to summarize our ideas. I also encourage people to join these efforts, and if you want to contribute code, you can just go to the OpenStack projects, but if you don't know how to write code, please come and contribute to the documentation. Thank you.

So far we have shared Kubernetes, KubeEdge, and also Akraino, OpenNESS, and also Sigma and StarlingX. So I think there are two focus areas we are working on. One is tech sharing, to get more technologies shared on our forum; it's a so-called forum. And the other is that we want to hear more use cases from the industry, so we can collect the requirements for different solutions to solve the problem. But in the end, I hate to say this is a China user group. I want to say we are just a part of the OpenStack Foundation working group. I hate to say this is China, China only.
We just want to encourage more Chinese people to join us. I still remember yesterday, as one company joined OpenStack as a gold member, their presentation was in Chinese, but at the beginning the presenter said, my English is poor, but I will try to learn more English. So I think the language barrier should not be a problem, and I encourage you all the more. Thank you.

And with that, unfortunately, we are out of time. I would just like to spend 30 more seconds reminding people that we have a forum session around use cases and reference architectures, so we would like to learn about your use case. We would like to get feedback from you on the reference architecture work and what your view is on edge computing architectures out there for different use cases. That is at 4 p.m. this afternoon. The StarlingX PTG sessions are Wednesday and Thursday morning, and the Edge Computing Group also has a full day at the PTG on Friday. So if you're interested in what we are doing and would like to participate in the activities, please come and find us at these sessions, or just around the conference this week. And with that, I would like to thank our panelists for the panel, and apologies for all the technical difficulties that we encountered on this Monday. Thank you. Thank you.