All right, give me a second. I think Alexis is currently wrapped up on another call, so he may be running a little bit late, and Ben can't make it today due to a staff meeting. I see Brian Grant is here. Is Bryan Cantrill on the line? Camille, I don't see her yet. Jonathan Boulle is here. Hello, Jonathan. And Quinton?

Yep, I'm here.

Oh, there you are. I couldn't find you. Thanks, Quinton. All right, so I've got Brian Grant, Jonathan Boulle, Ken Owens, Sam Lambert, and Quinton Hoole online. Alexis should hopefully be joining us shortly, but in the interest of time, we'll get started.

Moving to slide five, the agenda. Quick agenda today: we'll be welcoming a new project to the sandbox, a little discussion around the SAFE working group, and we have two community presentations today from the Alibaba folks, one on Dragonfly and the other on OpenMessaging. We'll discuss the community backlog of presentations that we have to review, and then open it up to any questions from the community.

Slide six. This is a very simple welcome of the TiKV community to CNCF. They entered the sandbox on August 28th. This is also our second project born in China to join CNCF, which is exciting for me on a personal level and shows how global our community is. Welcome, TiKV; we're looking forward to seeing you grow within the foundation.

Moving on to slide seven, which is our backlog of presentation proposals that we're asking the TOC, contributors, and the wider community to review. Two big things: Cortex and Buildpacks have both secured TOC sponsors and are working through the project proposal process to enter the sandbox, and Keycloak is coming up for a presentation in the future. Next slide. This is something I'd like the TOC to discuss.
I don't know if we'll have enough time to go over this, and we may move this discussion to email, but we have a handful of projects that have asked to present themselves to the TOC and the wider community, and we have to decide whether we would like to invite them, if they're good fits. I don't know if any TOC members have taken a look at these presentations, or if there are opinions on which ones we should at least include so we can start from there. Brian, or any other folks, any comments on these particular ones, whether we should include or exclude them from presentations? It would be good to make a decision; some of these folks have been waiting for a while.

I haven't had time to take a look at them. I know they've been waiting for a while, but July and August were just pretty brutal, and there were a lot of things going on.

No worries. Anyone else? Chris?

Yeah. Is it just a scheduling issue, that we don't have enough slots for them, or a question of whether they will present at all?

It's the latter. I mean, we have slots as long as we push them out.

It's both, I think. In addition to taking a look at these, I would also like to come up with an agenda to take care of some non-project-related issues, so we need to figure out how much space to give them.

Yep. Any other comments here?

Just to repeat what I mentioned to you before: I think we need a blanket inclusion and exclusion principle, especially for the sandbox, because at the moment our exclusion criteria are essentially non-existent. If we follow the current criteria, then all of these should get to present and potentially get into the sandbox, and if we're not going to do that, then we need to come up with a clear distinction.

Yeah, I definitely agree.
I have some thoughts: basically, at least one TOC person needs to approve a project being presented, as a bit of a gate. But we can move this discussion onto the mailing list to firm it up, since interest has definitely spiked in the last couple of months. All right, so we'll move that discussion to the mailing list.

Moving on to the next slide: working group updates. We have four official working groups that meet. More importantly, we have one new working group being voted upon, related to the security space. I kicked off that vote this morning, so I'd appreciate any feedback on the SAFE (security) working group; you can see their full proposal linked off that mailing list thread.

Moving on. Now we've got the community presentations, and we have two of them. First, Allen Sun from the Alibaba team will present Dragonfly. Are you there, Allen?

Hi, hi, Chris.

All right, good to hear from you. It's now your turn to drive, and I appreciate you joining us at a late hour over there. Take it away.

Okay, thank you, thank all of you. I'm here with the Dragonfly team to present Dragonfly. I'm Allen from Alibaba Group, and I'm responsible for the container ecosystem at Alibaba. So, what is Dragonfly? Dragonfly is an open-source, intelligent, P2P-based container image and file distribution system. It provides a native image distribution solution for cloud-native applications, and currently Dragonfly integrates with Kubernetes very natively.

Here we list some features of Dragonfly. In cloud native, Dragonfly focuses on the image distribution part, and there we can list three kinds of distinguishing features. The first is efficiency: we provide P2P-based image and file distribution.
We also provide a passive CDN to avoid repeated downloads and reduce cost. The second is flow control: we provide task-level and host-level network speed limits. With these limits, Dragonfly can protect your host's disk and protect you from heavy I/O load. The third is security: Dragonfly can encrypt images during transmission.

Last but not least, Dragonfly is very simple to use. Dragonfly is non-invasive to all of the existing container technologies: the most popular container engine, Docker, and another container engine open-sourced by Alibaba, named PouchContainer. With all of these container engines, you can pull container images through Dragonfly as usual.

So Dragonfly can cover these kinds of use cases: if you want to increase image download speed, if you want to reduce your bandwidth cost by at least 50%, or if you want to distribute very large container images, maybe larger than 10 gigabytes. When lots of traditional applications move to cloud native, their images are very large at the beginning, and Dragonfly can improve the distribution efficiency. Also, if your cluster is at a very large scale, I think Dragonfly is your best choice. Dragonfly also ensures the stability of business services is not affected by the download tasks; in production, we have hit a lot of issues in these kinds of scenarios. We can also prevent data from leaking in transmission; security is another matter we should take care of.

Next slide. We are very happy that Dragonfly currently has lots of production users, like China Mobile, one of the most famous telecom operators in China; in their data center, Dragonfly serves more than 1,000 nodes. Another is DiDi.
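The task-level and host-level speed limits described above are, in spirit, classic bandwidth throttling. As a rough illustration (not Dragonfly's actual code; all names here are hypothetical), a download client can pass every chunk through two token-bucket limiters, one per task and one shared by the whole host:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills at `rate` bytes/sec, bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Block until `nbytes` tokens are available, then spend them."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# A per-task limit nested inside a host-wide limit: every chunk must pass both,
# so no single task can starve the host and the host disk stays protected.
host_limit = TokenBucket(rate=8 * 1024 * 1024, capacity=1024 * 1024)  # 8 MiB/s per host
task_limit = TokenBucket(rate=1 * 1024 * 1024, capacity=256 * 1024)   # 1 MiB/s per task

def download_chunk(chunk):
    task_limit.consume(len(chunk))  # task-level cap
    host_limit.consume(len(chunk))  # host-level cap shared by all tasks
    return chunk                    # a real client would write this to disk
```

The double `consume` means the effective rate of any task is the minimum of its own limit and its share of the host limit, which matches the "protect the host from download tasks" behavior described in the talk.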
Also iFlytek, one of the most famous speech-intelligence companies in China. We are also very happy to have global users, such as Lazada in Southeast Asia, as well as Kerma, UC Browser, and Map.com. Alibaba Cloud is also using Dragonfly to speed up distribution and secure transmission. On the Ant Financial side, we are very happy to see Ant Financial adopt Dragonfly on tens of thousands of nodes in their data centers to provide financial services globally. In addition, Tmall and Taobao are also adopters of Dragonfly. That's all about production usage.

Now a little bit about the architecture. At the first level, we have the Dragonfly controller, which uses Dragonfly's API to manage what is happening inside Dragonfly. At the second level, we have the supernode, a node that manages the distribution within a data center or zone. At the third level, we have the hosts. On each host node, we have the container engine, dfget acting as a proxy, and dfdaemon managing how images are placed into the P2P network.

Next slide. Inside the supernode, we provide the API, and in the decoupled scheduler module, the P2P scheduler provides the scheduling algorithms and their management. The CDN manager caches images locally; during transmission, it can limit the rate and ensure security. We also provide pre-heating functionality for the deployment of cloud-native applications. The file manager provides disk garbage collection and an interface for all kinds of file systems to cache the data.

Next slide. We'll briefly introduce the procedure of pulling images with Dragonfly.
As usual, the container engine sends an image pull request to the node-local proxy, and the proxy forwards the pull request to the supernode. If the image does not exist on the supernode, it downloads it from the registry and caches it locally. The supernode then replies to the nodes with details about which peers already have which image blocks, and in the fifth step, the proxy transfers the blocks among the peer network. In the last step, the pull finishes once all blocks are downloaded. Here's the diagram.

Next: the project history. Around June 2015, the project started inside Alibaba. Maybe ten months later, Dragonfly had already become fundamental infrastructure technology at Alibaba, covering the whole Alibaba Group. Last November, we decided to make it open source. Since open-sourcing, we've hit a lot of milestones: in June this year we exceeded 1,000 stars on GitHub, and one month later we hit 2,000. Currently, we are very happy to have more than 20 adopters, not only in China but also globally, like Lazada. On the ecosystem side, we integrate with PouchContainer and Docker. On the CNCF side, we already cooperate with the Harbor project: Harbor is responsible for image storage and management, while Dragonfly can take over the distribution part. We are also working to make Dragonfly deployable via Helm, among others.

Here is the community. Currently we have more than 2,000 stars, and we run a user discussion group and a developer discussion group. The following are the maintainer details: we have one internal team at Alibaba, and we are very happy to already have one outside maintainer from eBay China, who are also adopters of Dragonfly. On the contributor side, we have one team working on Dragonfly; some engineers are part-time, and we also have full-time engineers focusing on the open source work.
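The pull flow just described, where the supernode tracks which peers hold which blocks and steers later pulls peer-to-peer, can be sketched as a small simulation. This is an illustrative model only, under assumed names (`Supernode`, `Peer`, `schedule`), not Dragonfly's real protocol:

```python
class Supernode:
    """Tracks, per image block, which peers already hold it (hypothetical sketch)."""
    def __init__(self, registry):
        self.registry = registry  # block_id -> bytes; stands in for the origin registry
        self.cache = {}           # local CDN cache on the supernode
        self.holders = {}         # block_id -> set of peer ids that hold it

    def schedule(self, peer_id, block_id):
        """Return (source, data). Prefer serving from an existing peer."""
        if self.holders.get(block_id):
            return next(iter(self.holders[block_id])), None  # fetch P2P from a peer
        if block_id not in self.cache:                       # cache miss: hit the registry
            self.cache[block_id] = self.registry[block_id]
        return "supernode", self.cache[block_id]

    def report(self, peer_id, block_id):
        """A peer announces it now holds a block (so later pulls go P2P)."""
        self.holders.setdefault(block_id, set()).add(peer_id)

class Peer:
    def __init__(self, pid, supernode, peers):
        self.pid, self.supernode, self.peers = pid, supernode, peers
        self.blocks = {}
        peers[pid] = self

    def pull(self, block_id):
        src, data = self.supernode.schedule(self.pid, block_id)
        if data is None:                           # transfer the block peer-to-peer
            data = self.peers[src].blocks[block_id]
        self.blocks[block_id] = data
        self.supernode.report(self.pid, block_id)  # advertise for future peers
        return src

registry = {"layer-0": b"base", "layer-1": b"app"}
sn, peers = Supernode(registry), {}
a, b = Peer("a", sn, peers), Peer("b", sn, peers)
first = a.pull("layer-0")   # cold pull: the supernode fetches from the registry
second = b.pull("layer-0")  # warm pull: served peer-to-peer from "a"
```

The key property, matching the talk, is that the registry is contacted at most once per block; every subsequent pull of that block is satisfied within the cluster.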
Beyond that, engineers from eBay and several other companies are very active contributors to the Dragonfly project, and we already have one outside collaborator. Also, we are very happy that TOC members are willing to sponsor Dragonfly into the sandbox.

Okay, next. Here we'll share a little bit about Dragonfly's roadmap. By November this year, we will make the Dragonfly supernode deployable via Helm, ship dfget and dfdaemon with daemon stats, and, with the Harbor integration, release a general version. Before the end of 2018, we will make a lot of security improvements: supporting private container images and authentication in the supernode API, and supporting different encryption algorithms for data transmission. On the efficiency side, we will improve dynamic rate limiting during downloads and provide more intelligent scheduling algorithms to improve distribution efficiency; we will also investigate the possibility of integrating with IPFS. On the openness side, we will stabilize the design of Dragonfly, make it production-ready for cloud native with scalable components, and make Dragonfly's components more user-customizable. We will also refactor the supernode in Golang to attract more developers. For scalability, we will simplify the complexity and cluster the supernodes to decrease the possibility of failure.

What do we want from the CNCF? First, we want help improving Dragonfly. Second, we want the CNCF to bring more people and more adopters to Dragonfly, so we can attract more active contributors and Dragonfly can grow quickly. That's all for my presentation. Any questions? Here is the link to Dragonfly on GitHub. We sincerely hope TOC members will sponsor Dragonfly to enter at the sandbox level. Thank you.

Thank you, Allen. Does anyone have any questions for him?
I can confirm that Jonathan has already agreed to sponsor, so we have one TOC sponsor. Any questions?

Hi, yeah, I have a question. What requirements do you have of the operating system and/or file systems that Dragonfly operates on?

Okay, so the question is what kinds of file systems we support for caching the data, right?

The question is: what are the limitations on which file systems, Linux distributions, and potentially container runtimes this will interoperate with today?

Okay. Currently, we support all kinds of container runtimes; we can integrate Dragonfly with containerd, Docker, and PouchContainer. For caching images, currently only local storage is supported, and in the future we will support more file systems for caching images on the supernode.

Okay, so it just uses the local file system, and it doesn't matter what the specific file system implementation is?

Yeah. More file system support is on our roadmap.

Sorry, go ahead; I already asked one question. I was going to say, some people do things like this with BitTorrent. What are some of the advantages of this over just using a BitTorrent-type model?

We are going to make our P2P scheduling more pluggable. Currently, we only support the protocol implemented by Alibaba; I think it is more native to the scenarios we've met, but in the future we will try to support BitTorrent.

Is it correct that Dragonfly has integrated with caching as well? Does it keep track of which image layers are present and which ones are needed?

I beg your pardon?

Does Dragonfly keep track of which image layers are present and which ones are needed by the running containers, or is that a separate concern?

I'm wondering if I understand your question. You are asking whether the container runtime can become aware of which layers are on the node, right?

On slide 19, step 3 is "cache the image."
Is that done by Dragonfly, or by the existing image management performed by the container runtime?

Yes, the image layer blocks are managed by the runtime.

It's not that critical; never mind, I'll take a look. Thanks.

Allen, one other question here. The notion that Dragonfly is intelligent is fantastic. I'm curious if you can elaborate a little on whether Dragonfly has any intelligence about restricting distribution on a per-node basis. If certain images may never be scheduled to a certain node, based on scheduling restrictions or things you would find in a manifest file, does Dragonfly account for that, or does it just distribute images as widely as it can, without necessarily understanding when and where a given image might need to be pulled?

Okay. Frankly speaking, the current Dragonfly only supports distributing the image layers, not the manifest of the image, so if you want to download the manifest, you still need to communicate with the registry.

Any other questions? We will continue the discussion on the mailing list, but thank you, Allen, for your time. If there are any questions, please follow up on the mailing list, and Allen and the Dragonfly team will do their best to answer them. Thank you.

Next up we have OpenMessaging. Is someone from the OpenMessaging team there?

Hello.

We hear you. You're a little bit quiet on our end, but try again.

Can you hear me now?

Yeah, we hear you now. Do you want to drive the presentation, or shall I advance the slides? Next slide? Okay.

Thank you. Good morning, ladies and gentlemen. In the next 10 to 15 minutes, I will give a presentation on the OpenMessaging specification. I'm Jerry, one of the TSC members of OpenMessaging, and I'm sorry to tell you that Von Gosling does not feel well, so I will present in his place today. He's sitting next to me anyway. Next slide.
It's quite late in Beijing now and we can barely keep our eyes open, so please bear with us if anything isn't clear.

No worries. We'll follow up on the mailing list.

In the next 10 minutes, I will introduce our group and the motivation behind OpenMessaging, then describe the OpenMessaging open standard and community development in detail; the last two parts are our roadmap and future plans. Next slide, please.

We are from Apache RocketMQ at Alibaba Group, the very first Apache top-level project from China outside the Hadoop ecosystem. RocketMQ is a distributed messaging and streaming platform with low latency, high performance and reliability, trillion-level capacity, and flexible scalability. RocketMQ robustly provides a stable infrastructure, handling a throughput of one trillion messages during Alibaba's November 11th shopping festival, and it has also been widely used in thousands of companies outside Alibaba Group. RocketMQ currently has more than 5,000 stars and more than 2,400 forks on GitHub, which we believe is solid proof of our capability to run a thriving community.

In the past 10 years, we have focused on providing messaging services to traditional users; however, there are increasing demands from cloud users for cloud-native messaging services. In this changing world, we have faced many technical issues that make it difficult for cloud users to access multiple messaging platforms without barriers, because there is no cloud-native messaging standard. That is why we present the OpenMessaging standard. OpenMessaging is a vendor-neutral, language-independent standard that provides industry guidelines for areas such as finance, e-commerce, IoT, and big data, and aims to enable messaging and streaming applications across heterogeneous systems and platforms.
Compared to other protocols such as XMPP, AMQP, MQTT, or JMS, OpenMessaging is not limited to the Java environment or to a wire-level protocol, and unlike other standards, we have specified guidelines for load balancing, fault tolerance, administration, security, and streaming features, which support the needs of modern cloud-native messaging and streaming applications. Next, please.

In order to create a cloud-native messaging standard and reduce developer access costs, OpenMessaging has several highlight features, as shown on this slide. First, OpenMessaging not only supports a large range of domains, including finance, e-commerce, and IoT, but is also programming-language independent. Second, for the big data ecosystem, OpenMessaging also provides streaming and connector capabilities to exchange data with other systems. Third, OpenMessaging is a standard for cloud-native applications: we define a specified URL scheme for accessing cloud vendors and a vendor-specific messaging driver. And last, and very importantly, OpenMessaging does not limit vendors' implementations, but provides a standard benchmark for developers to evaluate each vendor's implementation fairly. Next, please.

We have been included in the CNCF landscape, thank you very much for that, and we sincerely hope we can work together to build a richer ecosystem with CNCF-hosted projects. Next, please.

As a cloud-native messaging standard, OpenMessaging is not only a very useful complement to the CNCF ecosystem, but can also easily integrate with other projects already in the CNCF. OpenMessaging can be integrated with gRPC to provide asynchronous support. Second, OpenMessaging can be bound to CloudEvents, standardizing event contents as well as transmission processes. Third, OpenMessaging can be integrated with Prometheus via a connector.
Finally, OpenMessaging can also be integrated with the Operator pattern to make stateful messaging platforms easier to manage. Next, please.

OpenMessaging's domain model is based on a queue model, where the queue is the carrier of messages: a logical destination that receives messages from producers and then transfers them to consumers. Notably, a queue can be divided into partitions, and a message is routed to a specific partition by the message key in its header. The domain model also supports multiple operations: a message can be routed from one queue to another, or be filtered, and users can combine various operations to meet various scenarios, such as group subscription, batch sending, and so on. Moreover, OpenMessaging supports both pull and push models for consumers, so it can be easily integrated into streaming solutions. Next, please.

This slide describes our core contribution, the OpenMessaging specification, which is also the essential value of this project. The specification is derived from the abstraction of the domain model on the previous slide, in which a schema is used to describe the main model of a message: it contains not just the data but also the common metadata of a message. At the same time, it is not a wire-level protocol, so it places no limitations at all on vendors' implementations. OpenMessaging also provides an optional runtime interface for binding the schema to implementations. I think it's the first and a very important step towards portability across messaging platforms. Next, please.

On this slide I will introduce our community. This page shows the operational data for our community: so far we have released the 1.0.0-preview version and already have more than 500 stars and 160 forks on GitHub. Our group is currently composed of TSC members from four organizations, nine maintainers from six organizations, and 15 community enthusiasts who are constantly contributing.
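The queue model described above, where a message's header key routes it to a stable partition and consumers can pull in order, can be illustrated with a toy sketch. This is not the OpenMessaging specification's actual API; the class and method names here are hypothetical:

```python
import hashlib

class Queue:
    """Toy queue in the spirit of the OpenMessaging domain model:
    messages carry header metadata, and the message key routes each
    message to a partition. (Illustrative only, not the spec's API.)"""
    def __init__(self, name, partitions):
        self.name = name
        self.partitions = [[] for _ in range(partitions)]

    def route(self, key):
        # Stable hash, so the same key always lands on the same partition
        # and per-key ordering is preserved.
        digest = hashlib.sha256(key.encode()).digest()
        return int.from_bytes(digest[:4], "big") % len(self.partitions)

    def send(self, key, body, **headers):
        message = {"header": {"key": key, "destination": self.name, **headers},
                   "body": body}
        p = self.route(key)
        self.partitions[p].append(message)
        return p

    def pull(self, partition):
        """Pull-model consumer: drain one partition in arrival order."""
        msgs, self.partitions[partition] = self.partitions[partition], []
        return msgs

q = Queue("orders", partitions=4)
p1 = q.send("user-42", b"created")
p2 = q.send("user-42", b"paid")  # same key -> same partition, order preserved
```

A push-model consumer would instead register a callback invoked on `send`; the routing logic stays the same, which is why the spec can support both delivery models over one domain model.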
Up to now, six versions have been released and more than 600 commits have been made. Next, please.

Our founding members include companies and organizations from a variety of industries. Alibaba is a Chinese company that provides C2C, B2C, and B2B e-commerce services as well as cloud services. Yahoo! is globally known for its search engine and related services. DiDi is one of the largest ride-sharing companies in the world, providing transportation services to millions of users in China. WeBank is China's first private commercial bank; one of its major shareholders is Tencent. The last two, Streamlio and DataPipeline, are startup companies that concentrate on big data and streaming solutions. Next, please.

We have also received broad support from commercial and open-source vendors: Alibaba Cloud, Tencent Cloud, QingCloud, RocketMQ, RabbitMQ, and Pulsar have already confirmed their willingness to contribute, and AWS, Azure, Google, Kafka, and NATS have also shown great interest. Our messaging solution covered by OpenMessaging has already been commercialized on Alibaba Cloud, with yearly revenue of more than 15 million US dollars, so we are confident that our project has a prosperous future in commercialization. Next, please.

We have also been recognized by many industry experts. Next, please. Thank you.

Our roadmap is divided into four phases, and currently we have released the 1.0.0 version. We really hope to get support from CNCF and the major popular messaging platforms, and then we will try to integrate with CloudEvents. In phase four, we will focus on ecosystem building, providing an open connector and a streaming specification. Next, please.

In the last part, I will introduce our vision for the future. We want to make OpenMessaging a unified messaging bridge connecting different commercial applications and various big data and streaming computing platforms.
At the same time, we hope to get support from well-known open-source vendors to create a complete messaging ecosystem. Next, please.

In the future, we also hope to fully commercialize OpenMessaging on major cloud vendors, such as Alibaba Cloud, AWS, and Azure, to make OpenMessaging a common messaging standard, ensuring that users have a friendly, consistent connection specification across cloud service providers. Next, please.

That's all of my report on OpenMessaging. In the right corner is our Twitter QR code, and our official website is openmessaging.cloud. We have been working very hard to participate in the work of the CNCF and the Linux Foundation in China: we became a platinum member of the CNCF last year and the top-level sponsor of LC3 and KubeCon in Beijing and Shanghai respectively, the two flagship events the Linux Foundation and CNCF held in China this year. We had a discussion with Alexis during the Copenhagen summit this year, and he gave us a lot of great suggestions; many thanks to Alexis. We sincerely look forward to your kind attention and endorsement, and thank you for listening.

Thank you very much. Any questions from the TOC or community?

Yeah, I had one; it's Quinton here. Could you just clarify at what level the standard sits? You mentioned this is not a wire-protocol standard, and it's also multi-language, so it's not entirely clear to me, for someone wanting to implement this, what exactly the standard is that they have access to. Where does it fit in the stack?

Yeah, I'm one of the TSC members of OpenMessaging, and I'll answer this question. OpenMessaging is an application-level standard schema, and it's a specification for users.
It's about the data. Users can connect with OpenMessaging at the transport level, and we also provide some interfaces, as an option, so users can do bindings and implementations, but we place no limitations on how users or vendors implement it.

I have a related question. I might guess that it's similar to CloudEvents, in that the common metadata properties could be represented in multiple data encoding formats. How similar or dissimilar is this from CloudEvents? Does it support CloudEvents, or vice versa, or something else?

Actually, we have noticed CloudEvents, and just like CloudEvents and OpenMetrics, we want to make a pull request to bind with CloudEvents. But we focus on the whole messaging and transport field, while CloudEvents focuses more on events, functions, and other serverless computing. We can provide a standard transmission for CloudEvents, and we will work on combining and integrating with it. I think it's a complement to that project.

Okay, thanks. And the relationship to RocketMQ wasn't clear to me: is this backward compatible with RocketMQ, or just inspired by RocketMQ? Can you clarify the relationship between the two?

I'm a PMC member of RocketMQ. RocketMQ is an implementation of OpenMessaging. In the next version, RocketMQ 5, we will fully support OpenMessaging; the current version of RocketMQ implements version 0.3, and you can find some implementations on our GitHub.

Okay, so it's similar to OpenMetrics being inspired by the format: it's not 100% compatible, but future RocketMQ versions will support whatever changes have been made in the OpenMessaging specification. I see Chris nodding. Thank you. And just one question, out of curiosity: why not Apache?
OpenMessaging is already under the Linux Foundation's auspices.

I see, it's already neutrally owned. Okay, thanks.

Any other questions? And who from Google is involved with this, since they were mentioned? So I can follow up with them.

Sorry?

It was mentioned that Google Cloud expressed interest. I was wondering whether you are aware of the specific people involved.

We are still communicating with a Google Cloud lead to make progress on integrating with Google Cloud Platform, but Google Cloud Platform is still evaluating our project, and I think if we join the CNCF, we can build a good collaboration in the future.

Okay, thanks. I'll track it down. Any other questions?

Hi, this is Colin from the NATS team. It's kind of unclear to me what specifics in the specification make OpenMessaging cloud-friendly or cloud-native.

Yeah, our OpenMessaging standard is cloud-native oriented. Protocols such as XMPP, AMQP, and MQTT, and implementations such as Kafka, RabbitMQ, and RocketMQ, don't fully support the modern cloud-native era; they cannot let users access them without barriers. OpenMessaging defines a specific URL scheme to access every cloud vendor, so users of OpenMessaging can easily access every cloud platform. OpenMessaging is a cloud-oriented standard.

Okay, thank you.

Cool, any other questions? Otherwise, I've kicked off two threads on the mailing list, so we can move further discussion there, which may work a little better. Thank you for your time, and hopefully you feel better soon, too; sorry about that.
Moving on: just the standard links to our project review backlog. Moving to the next slide, big events coming up: hopefully we'll see many of your shiny faces in Shanghai on November 14th through 15th for KubeCon China. It's our first event there, so we're super stoked about that. And of course, we have our flagship event in North America this December, and then Europe next year, May 21st through 23rd.

Moving on to the next slide: this needs to be updated, but our next meeting will be on the 18th, and we'll be hearing from the Netdata project. Next slide: thank you very much, and hopefully everyone has a good Tuesday. Thank you again to the folks from China, who definitely stayed up late; I appreciate the time, and we'll try to do better on time zones in the future. Thank you, Allen and Jerry. Take care, everyone.

Chris, did Brian mention that he wanted to discuss some other procedural stuff? We have time now if you want to do it; we have seven minutes.

Otherwise, we also don't have very many TOC members here, so maybe we should start that on the mailing list.

I agree. Cool, sounds good to me; we'll have more time next week. Take care.