Started. Alexis, are you there? Let's start at slide five and go from there. Slide five, okay: agenda, blah blah. Harbor: Harbor is in the Sandbox now, thank you very much to the Harbor team for that.

Should we be seeing the presentation, Alexis? I'm not seeing it.

Sure, we could share. Let's see, Taylor, do you want to share and drive?

Yeah, that would be great. Sorry, didn't mean to interrupt.

No worries.

If I could interrupt as well: we normally skip past the agenda line, but could I just call out, while everyone's here on slide five, that there's a deadline this Sunday, August 12th, for KubeCon + CloudNativeCon Seattle. We're expecting six to seven thousand people at this, so this is the week to submit, and to get your colleagues in your company or organization to submit.

Okay, all right. Someone needs to be muted. Cool. All right, go for it, Alexis.

All right, thanks. So we've got Harbor coming in; welcome, Harbor, and thank you to everybody who works on that. Next slide, please.

Sorry, just to interrupt on Harbor: I didn't see a vote for that. Did I miss something?

It's Sandbox.

Yep, Sandbox; it got two sponsors.

Oh, so we don't vote on them. Okay, cool, no worries.

The PR was there. But this one does have a vote: congratulations to Prometheus. That is, I believe, our second graduated project, after a long journey in incubation and quite a bit of effort by everybody to get this thing into good shape, making sure that the graduation process is very meaningful indeed. This was something that we voted on, and there was quite a lot of consideration taken around the project. One thing that you'll be hearing a bit more about is OpenMetrics, which came out of Prometheus originally, but it's not a Prometheus-specific project: it's a way of sharing the format for the metrics more widely, so that other teams and protocols can use it; the project is being used by OpenCensus, for example.

Okay, right. We've got more things coming along. Projects with news: Bryan Cantrill, have you been
introduced to the Cortex people?

Yeah, I'm sorry, I haven't. I've been on vacation for the last two weeks, and I'm going to be out for the rest of this week, but then I will be back, with a clearer schedule. So I'm sorry; this has just been a brutal couple of weeks, with me being out for most of it.

Great, thank you. The next one is TiKV, which I think has its sponsors. And Rook: what is the TOC feedback, Chris, that we need?

So the Rook folks are asking to move from Sandbox to incubation, and they'll do a small presentation at the next meeting. You can take a look at the pull request where they detail how they fulfil the criteria; they're requesting feedback there.

Okay. And then, more importantly, there are three projects that have requested to present. This is something Brian Grant requested: that we cover these on TOC calls before we decide whether to allow them to present or not. So we have Strimzi, Habitus, and Weave Net. We don't necessarily have to make that decision today, but I'm pointing them out here for you to consider.

Okay. I'm surprised to see that the Weave Net folks put that in for the Sandbox; that's probably a mistake, that should be incubation.

It could be my fault, or my reading of it, but I'll hold it.

Okay. Did you see anything submitted on Weave Scope, Chris, at all?

Nope.

Okay, just what's in the backlog then. And we've got a couple of presentations today as well.

So are there dates assigned to those backlog items?

Everything in the backlog is scheduled to present; everything in "requests to present" is not scheduled and is requesting TOC feedback to either allow them or pass.

Okay, so the dates are in the spreadsheet, or...?

The dates are on the GitHub repo: if you go to github.com/cncf/toc you'll see them scheduled. I could add the dates for the next TOC meeting too; they're just in the README.

Yep, okay, thanks.

Okay, right, next slide please. So this is a quick read-
out from me on the governing board meeting. I'll try to keep this fairly short because we've got some presentations today; we can follow up on the public TOC list if people want to discuss any of this. Can you go to the next slide please, Taylor?

So one of the things that came through strongly in the request for feedback from the TOC was that we want to build stronger bridges with the end-user community, now that it's becoming bigger. These are some of the things that we asked for. There was a good response from the GB on this request; it was read very much as a set of requirements for how the GB and the end-user group could go away and come up with ways of interacting with the TOC. So what we're presenting here is not a solution, more a question to the GB and to the end-user group: how can you help us to find out more about the projects?

Okay, next slide. We had some talk, especially in the water-cooler moments, around the future growth of the CNCF and the importance of retaining a high level of clarity around what we're doing, why we're doing it, what projects are for, and how they fit together. I have personally expressed the concern that the landscape is still sometimes being presented in all of its true glory, consisting of everything that might have something to do with cloud native; here's a link to some commentary on Twitter from the notorious Simon talking about that. And then Dan pointed out that we have the trail map, which is getting good traction as a more opinionated guide that actually refers to the projects in the CNCF. I think it's very important that we continue to give people opinion around what we're doing and don't get too sprawly. So we talked about some of the potential threats there, including that, as Kubernetes gets more and more popular, there is the potential of having "X for Kubernetes" for really any value
of X. For example, I mentioned the project Cortex, which I'm close to because of Weave; you know, that sits in between Prometheus and Kubernetes. So how do you deal with all of those things?

On the next slide, please, here is a proposal. This is not a formal proposal at this point, but it's something that I thought we could discuss as the year goes by. The idea here is to form categories within the CNCF of clumps of related activity, e.g. security or observability. Brian Grant also pointed out that we could encourage some of the projects to think of themselves more as a platform: something which has a constellation of smaller projects around it. There's also the potential to have verticalization in the future, and there are also different classes of project, like etcd; we've previously talked in the TOC about the importance of etcd coming in as a stability-first component rather than trying to express great velocity. Some of the things that might be provided by different categories would be: white papers; a category-specific landscape, for example the security landscape or the serverless landscape that we've already seen from the serverless working group; more focused patterns; reference architectures around the category; and obviously working groups. This would also give us a mechanism for migrating the working-group model to something that I think has more long-term value. So I'll open the floor for a few minutes on this. Does anyone else, apart from those who've commented on it already (thanks, Michael), want to comment? It's just an idea. What do you think?

Seems very reasonable to me. We need some form of hierarchy; otherwise a flat structure of everything is just not reasonably navigable by human beings. What ends up happening is you end up missing parts, and then things aren't categorized in the right way, and I think the end user doesn't
understand how to use some of these projects either. So this would provide that clarity.

Yeah. I think it's worth working on this further to see if we can come up with a more concrete proposal. In some areas there will be a high degree of affinity between the solar-system model and the categories, like monitoring and the Prometheus-related things, for example. In other cases there may not be that much alignment between those approaches, like Kubernetes, as you said, and pretty much anything. So we might want to play with that a little bit and see if one or the other helps more, or if we need some flavor of both approaches. But I definitely think something, or multiple somethings, in this area is going to be needed as the foundation grows.

Yeah. I think something like this, especially if we have a proliferation of new functionality on top of Kubernetes; I think that's going to present all kinds of challenges, which we've managed so far to shy away from, which I'm quite glad about, but I'm not sure that's something we can forestall indefinitely. There was quite a bit of discussion about this at the GB. I should also add that I think this category proposal, in its very raw form, was well received by the GB, but I do agree we should flesh it out more. What is a good way to flesh this out? Should I do the classic thing and build a document, like we did with the Sandbox?

I actually think taking the existing projects, some of the prospective ones from the upcoming presentations, and also projects from the landscape or just other projects that we know about (not that they would necessarily come into the CNCF), and saying: well, we took this set of 50 projects and we applied this approach, and this is what it would look like. Having some concrete examples would help a lot.

Yeah. It sounds like something a working group might want to tackle.

Yeah, I'm not a fan of making a big working group or whatever. I would just say we
should get three volunteers or something to actually go mock up some proposals. Chris, what do you think?

Yeah, let's create a mailing list and get a few volunteers to just hammer away at this.

Hey Chris, I like this; it's Ken. We've been kind of looking at this in the reference-architecture working group, so I'm definitely happy to take that into that discussion, and others can join and help lead it if they want to drive it.

Yeah, that could be easiest: come back with a list of potential categories, which current projects we have that would map into those and which ones are in the pipeline, start from there, and then maybe identify one or two using the model that Brian described.

Yeah, let's do that.

And the objective here, Alexis, is to offer clarity, correct? I mean, that's the overall objective: to allow people to make sense of the landscape.

I think, even if you consider the landscape to consist only of the projects that are actually in the CNCF, the number will be growing, and we're really at a point now where, if it gets much bigger, it will become very confusing.

I understand that, but I just want to make sure that we keep that objective in mind, because what we don't want to do is end up in endless adjudication over taxonomizing a project in one spot or another, because there's a perception that to be taxonomized one way means one thing and another way means another thing. I think we want to make sure that the emphasis is on clarity; this is not a value judgment. Ideally you want a taxonomy that allows projects to effectively taxonomize themselves in a way that is accurate, because what we don't want to do, and I've already seen us do it a lot, is end up in a lot of adjudication over these fine distinctions that don't necessarily make a difference.

Indeed, I
agree with you, Brian; I was going to make a similar comment. I would actually suggest that before we try to taxonomize anything, we should write a very brief, I'm thinking half-page, document making very clear what the goals are. There may well be more than one proposed taxonomy; I can imagine one by area and one by anchor project, or something, and maybe a combination of those. Let's just write those down and be very clear what they are and what the goals are before we start arguing about whether this taxonomy is better than that one.

Yeah, and I think we want to stray away from attributes like momentum and maturity, and really stick to those attributes which are going to allow the greatest clarity for the people who are trying to use these projects and understand them.

Right, okay. So Chris is suggesting that we continue the discussion on the cncf-reference-architecture working list. I take it everyone is able to access that; you should sign up for it if you haven't already. So Ken, will you come back for the next TOC meeting with a list of potential categories from the architecture, and maybe even present the current working architecture v2 at the same time? Ken, you might be on mute.

Yeah, I think that's perfect, perfectly fine to come back with that in two weeks.

Thank you. Perfect, yeah, no problem. Let's move on to the next slide please. This is a recurring theme; I brought this up with the GB as well. It's really, really important for the CNCF to add value to projects. I know there are many other things that we care about, but this for me is very important: if the projects are happy, then other things will follow. We had the discussion about the trademark issue that came up around O'Reilly, and the solution, I hope, is in progress. And yeah, there's not much else to say about this slide,
so let's move on. This is a slide that I showed in Copenhagen. Maybe another thing we could do off the back of categories is have some kind of desired roadmap. This particular table was completely of my own creation, and I emphasised that point when I presented it, but it would be nice maybe to have something with a more collective view in the future. But categories first. All right, next slide please.

Okay, now it's time for the etcd folks to present etcd, which I'm sure we've all heard of. Who's there to talk about it?

Yeah, here; I'm Gyuho from AWS. Can you guys hear me?

Yes.

Okay, let's start. I'm going to talk about etcd: how it's built internally, the architecture. This is going to be very high level, only about 15 minutes, and then please ask any questions you might have at the end of the presentation. Next slide, please.

etcd is a consistent, distributed key-value store, mainly used as a separate coordination service in distributed systems. It's designed to hold small amounts of data that can fit entirely in memory, although we still write to disk for durability; so you don't want to store all of your application's data in etcd. It's quite popular: Kubernetes relies on it, and we also have a lot of non-Kubernetes use cases. NTT from Japan uses etcd to manage their network infrastructure; Uber is using etcd to manage their M3 time-series database; and recently Braintree replaced Redis with etcd to power their caching systems.

Okay, next slide. This is how etcd works with Kubernetes. The Kubernetes control plane interacts with etcd: for instance, the Kubernetes API server persists cluster metadata in etcd, and Kubernetes node agents can subscribe to this information through etcd. Whenever changes happen in etcd, etcd notifies the client, which is the Kubernetes API server, so that it can keep the data up to date.
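[Editor's note] The subscribe-and-notify pattern just described can be sketched as a toy in-process store that pushes changes to watchers so clients keep their caches current. This is an illustration of the pattern only, not etcd's actual API; the key names are hypothetical.

```python
# Toy sketch of the watch pattern: a store notifies subscribers whenever
# a key changes, so a client (like a node agent watching state the API
# server persisted in etcd) can keep a local cache up to date.

class WatchableStore:
    def __init__(self):
        self.data = {}
        self.watchers = {}  # key -> list of callbacks

    def watch(self, key, callback):
        self.watchers.setdefault(key, []).append(callback)

    def put(self, key, value):
        self.data[key] = value
        # push the change to every subscriber of this key
        for cb in self.watchers.get(key, []):
            cb(key, value)

# A client keeps a local cache current via its watch callback.
cache = {}
store = WatchableStore()
store.watch("/registry/pods/web-1", lambda k, v: cache.__setitem__(k, v))
store.put("/registry/pods/web-1", {"phase": "Running"})
```

The point of the pattern is that the client never polls: the store pushes each change as it happens.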
Next slide, please. etcd is distributed for high availability, while prioritizing consistency and partition tolerance. What that means is that etcd provides one logical cluster view over many physical servers: so long as a quorum is up, etcd continues to work even under machine failures, so this redundancy provides fault tolerance. Next slide.

This is our API. etcd has a flat binary keyspace with no directory hierarchy; etcd uses ranges to search for keys in an interval, and this interval model supports querying keys by prefix as if from a directory. An etcd lease is not tied to any session or connection; you can create as many leases as you want. Instead of a key having a TTL, a lease with a TTL is attached to the key, and when the lease expires, all associated keys in etcd storage are deleted. This model also reduces keepalive traffic: say multiple keys are associated with the same lease object, and keepalive requests are multiplexed over a single gRPC stream; then we can make the multiplexing and broadcasting more efficient. In addition, keepalives are processed by the leader without going through the Raft layer, so there is no consensus overhead for an idle lease keepalive request.

etcd can also serialize multiple operations into a single conditional mini-transaction. Each transaction includes a conjunction of conditional guards, so we can do checks on a key's version, modified revision, or value; then there is a list of operations to apply if all conditions evaluate to true, and a list of operations to apply if any condition evaluates to false. These transactions make our distributed lock safe, because access can be conditional on whether the client still holds the lock: this means the etcd server rejects requests made under the lock when the client has lost its claim on it.
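[Editor's note] The conditional mini-transaction just described can be sketched as a guard list evaluated against the store, applying one of two operation lists depending on whether every guard holds. This is a hypothetical toy API, not etcd's real transaction interface.

```python
# Sketch of a conditional mini-transaction: a conjunction of guards is
# checked, then either the "then" operations or the "else" operations
# are applied, all as one step.

def txn(store, guards, then_ops, else_ops):
    ok = all(g(store) for g in guards)  # conjunction of guards
    for key, value in (then_ops if ok else else_ops):
        store[key] = value
    return ok

store = {"lock": "client-a", "counter": 0}

# Only the current lock holder may update the counter: the guard checks
# the lock key's value, mirroring the lock-safety argument in the talk.
ok = txn(
    store,
    guards=[lambda s: s.get("lock") == "client-a"],  # value check
    then_ops=[("counter", 1)],
    else_ops=[("error", "lost lock")],
)
```

A client that has lost the lock fails the guard, so its writes are rejected atomically rather than corrupting shared state.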
That can happen due to a client error or a missed lease-expiration event. Next slide.

We have streaming RPCs for watch and lease keepalive. ZooKeeper, Consul, and etcd v2 can only return one event per watch request, and they require long polling over HTTP, forcing those systems to briefly hold open a TCP connection per watch request. But say you have thousands of watch clients; then you can quickly use up all the server's sockets and memory. In etcd v3, instead of opening a new connection per watch request, we register each watcher on a shared bi-directional gRPC stream, and this stream delivers events tagged with a watcher ID, so multiple watch streams can share the same TCP connection. This streaming multiplexing reduces etcd's memory footprint by orders of magnitude. Next slide.

etcd is distributed, so we need a replication protocol, which is Raft. The etcd server implements the Raft consensus algorithm. It is leader-based: the leader is chosen by the followers, and followers forward proposals to the leader. The leader controls everything, including what to commit or not, and the leader must receive acknowledgements from a quorum of the cluster to make any progress. These safety guarantees from Raft provide consistency and partition tolerance, and the client doesn't need to reason about cluster membership: client requests are just automatically forwarded to the leader node. etcd has the most widely used Raft implementation; CockroachDB, TiKV, and many other projects rely on it, and they also contribute back to etcd, which is great. It is very stable and reliable. Next slide, please.

etcd writes a distributed, consistent log via Raft for durability.
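[Editor's note] The quorum rule just mentioned is simple majority arithmetic, and it determines fault tolerance: a cluster of n members tolerates floor((n - 1) / 2) failures. A minimal sketch:

```python
# Raft's quorum rule: the leader needs acknowledgements from a majority
# of the cluster before committing, so an n-member cluster tolerates
# n - quorum(n) failed members.

def quorum(n: int) -> int:
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    return n - quorum(n)

# Typical odd cluster sizes: {members: (quorum, tolerated failures)}
sizes = {n: (quorum(n), tolerated_failures(n)) for n in (1, 3, 5)}
```

This is also why even-sized clusters add little: four members still tolerate only one failure, same as three.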
The underlying storage layer is a write-ahead log (WAL). Say a client sends a write request to the etcd server: the proposal first goes to the leader, and when the proposal has been agreed by a quorum of the cluster, the leader commits that entry. When we commit, we append the entry to the WAL file, and the committed log entry is persistent; when we say persistent, it means fsynced down to the disk, which gives durability. If the machine crashes, we can just restart the server, and the server replays the logs back from disk. To avoid running out of disk space, we break the WAL into small files, periodically purging the old ones. For performance, each segment file is preallocated at 64 megabytes, so there is no latency for metadata updates or allocating blocks. Buffering is also special, in that the writer flushes only on a full sector write or when explicitly asked; etcd flushes WAL logs to disk every four kilobytes. For consistency we keep a rolling CRC, and we are also safe against torn writes. The smallest write unit is a single WAL record, which is a Raft entry or a Raft hard state, and each record follows 8-byte data alignment. Say one disk sector is 512 bytes and a WAL record is 1022 bytes: the WAL encoder adds two padding bytes at the end to make it fully sector-aligned. Assuming sector disk writes are all-or-nothing, the writer will never straddle one record across disk sectors, and this is how we prevent write tears and partial writes. Next slide, please.

etcd has a separate backend database, because the WAL is only for appending Raft entries in binary format, so we need a nicer layer on top in order to represent actual key-value data. etcd v2 only keeps the most recent key-value mapping, discarding older versions.
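[Editor's note] The padding arithmetic in the WAL discussion above can be checked directly: records are padded to 8-byte alignment, and the numbers below mirror the talk's example (512-byte sectors, a 1022-byte record getting 2 padding bytes). This is an illustration of the arithmetic, not etcd's encoder.

```python
# Pad each WAL record to 8-byte alignment so that, with all-or-nothing
# sector writes, a record never straddles a partially written region.

ALIGN = 8          # bytes of record alignment
SECTOR = 512       # assumed disk sector size, as in the talk's example

def padding(record_len: int) -> int:
    # bytes of padding needed to reach the next 8-byte boundary
    return (ALIGN - record_len % ALIGN) % ALIGN

def padded_len(record_len: int) -> int:
    return record_len + padding(record_len)
```

For the talk's example: a 1022-byte record gets 2 padding bytes, so the 1024-byte result fills exactly two 512-byte sectors.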
However, this is not good, because a watch client may miss the discarded events during brief network disconnections. To avoid this unpredictable window, the etcd v3 API retains historical key revisions through a multi-version concurrency control (MVCC) model. The retention policy for this history can be configured; I know Kubernetes uses one hour, and a typical etcd cluster retains the superseded key data for hours. To reliably handle longer client disconnections, not just transient network disruptions, the etcd v3 watch API can simply resume from the last observed historical revision. The etcd backend database has two components: one is an in-memory B-tree, and the other is a B+ tree on-disk database, which is BoltDB. Each write increments the modified revision as a global counter; the in-memory B-tree indexes each key to its revisions, where each node is uniquely identified by the key and contains the historical revisions, and the B+ tree stores the modified revision as the key and the key-value data as the value. Next slide, please.

We spend just as much time implementing a fault-tolerant client. What that means is that when there is a transient disconnect or a network partition, we expect the client to automatically fail over, or to do more efficient retries, using the gRPC health-checking protocol and HTTP/2 pings. This is extremely important, and at the same time it's really hard to get right; last year we spent several months implementing this feature, and we even backported that huge feature to etcd 3.2 for Kubernetes users. You can read more about this in our docs linked in the slide.

Just to review the whole data flow: a client talks to etcd servers using gRPC, and gRPC calls can be either unary or streaming RPCs.
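[Editor's note] The MVCC model described above can be sketched minimally: every write bumps a global revision counter, an index maps each key to its historical revisions, and a reconnecting watcher replays everything after the last revision it observed. These are toy data structures, not etcd's real B-tree/BoltDB pair.

```python
# Minimal MVCC sketch: global revision counter, per-key revision index,
# and watch resume from a historical revision after a disconnect.

class MVCCStore:
    def __init__(self):
        self.rev = 0
        self.index = {}   # key -> [revisions], like the in-memory index
        self.by_rev = {}  # revision -> (key, value), like the on-disk store

    def put(self, key, value):
        self.rev += 1  # every write increments the global counter
        self.index.setdefault(key, []).append(self.rev)
        self.by_rev[self.rev] = (key, value)
        return self.rev

    def events_since(self, revision):
        # what a reconnecting watcher replays to catch up
        return [self.by_rev[r] for r in sorted(self.by_rev) if r > revision]

s = MVCCStore()
s.put("a", 1)
s.put("b", 2)
s.put("a", 3)
missed = s.events_since(1)  # a watcher that last saw revision 1
```

Because history is retained, the watcher that disconnected at revision 1 recovers both later events instead of silently missing them.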
The server front end handles the transport to talk to the other peers, and implements the quota layer, which puts the cluster into maintenance mode when the data exceeds the database size limit or when it finds data corruption. The MVCC layer implements multi-version concurrency control to retain the historical data, and also implements the watch storage. BoltDB is the embedded key-value storage engine that etcd uses to persist this data on disk. The Raft layer handles the log replication, and WAL storage persists Raft log entries on disk. Next slide.

We get contributions from all over the world; the core maintainers are very well distributed. I work for AWS, and I think Xiang Li is also here: he's the creator of etcd and now works for Alibaba Cloud. We have Joe Betz from the Google Cloud Kubernetes team, and also a lot of maintainers from Red Hat. We want to have more consistent maintainership, which is why we want to donate etcd to the CNCF. We track all the user issues on GitHub, and we have bi-weekly community meetings to discuss any outstanding etcd issues and share development progress.

On the engineering side, we spend a lot of time on testing. etcd has a very limited set of features, so reliability is our highest priority. etcd functional testing verifies the correct behavior of etcd under simulated system failures and faulty networks: it sets up an etcd cluster under high-pressure load, continuously injects failures into the cluster, and expects the etcd cluster to recover within a few seconds. This has been extremely helpful for us in finding critical bugs. Next slide.

This is the roadmap. We want etcd to be better integrated with both upstream and downstream projects, such as gRPC and Kubernetes.
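[Editor's note] The quota layer just described, which refuses writes once the database exceeds its size limit, can be sketched as a tiny write path: front end receives the request, the quota check gates it, then the backend applies it. Purely illustrative; the sizes and exception name are invented for the sketch.

```python
# Sketch of the write path with a quota layer: a put is rejected once
# the database would exceed its configured size limit, mimicking etcd
# entering maintenance mode when the backend is full.

class QuotaExceeded(Exception):
    pass

class TinyServer:
    def __init__(self, size_limit: int):
        self.size_limit = size_limit
        self.used = 0
        self.store = {}

    def put(self, key: str, value: str):
        cost = len(key) + len(value)
        if self.used + cost > self.size_limit:  # quota layer check
            raise QuotaExceeded("database size limit reached")
        self.store[key] = value                 # backend apply
        self.used += cost

srv = TinyServer(size_limit=10)
srv.put("a", "1234")  # fits within the limit
```

A subsequent write that would push usage past the limit raises instead of silently filling the disk.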
We are also planning to add non-voting-member support. Currently the etcd member-add operation can be quite disruptive: when a new member comes into the cluster, the etcd leader has to replicate all the logs from the beginning, or send a snapshot, to the new member. That is already a lot of work for the etcd leader node, and it's even worse if the new member is partitioned or slow; it can affect cluster availability. Once we have the non-voting-member feature, which we call a Raft learner, as an additional state in the Raft implementation, a new member joins the cluster as a non-voting member before the disruptive configuration change happens. In that case the leader still replicates the logs to the learner node, but it is not yet counted toward the quorum. Once the new server has caught up, it can be promoted to a regular node and counted in the quorum; but while the learner node is catching up, etcd does not need to wait on the fresh node for cluster-wide consensus. This is one of the features we want to add in the next release cycle. Next slide, please.

So we want to propose etcd as a CNCF project, and I believe the CNCF can benefit etcd in a lot of ways. Right now we have a shared Google Cloud account for our release process and also for testing; we have been using this account since the early days of CoreOS, but now it is not clear who gets to pay the bills, since the team is distributed across different companies. Hopefully we can also get better CI support: right now we use free CI services, and once we grow the project we may need more resources, more computing power, so we might need something like dedicated Jenkins clusters. Most importantly, with the CNCF we want to grow the etcd community, and hopefully get more consistent contributors and maintainers. That's it; thank you.
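[Editor's note] The learner flow described above can be sketched as a membership model where learners receive the replicated log but are excluded from the quorum count until promoted. This is a toy model of the idea, not etcd's Raft implementation.

```python
# Sketch of the Raft-learner idea: a learner gets log replication but
# does not count toward the majority until it has caught up and is
# promoted to a voter.

class Cluster:
    def __init__(self, voters):
        self.voters = set(voters)
        self.learners = set()

    def add_learner(self, member):
        # joins without changing quorum math, so availability is unaffected
        self.learners.add(member)

    def quorum(self) -> int:
        # learners are excluded from the majority count
        return len(self.voters) // 2 + 1

    def promote(self, member):
        # called once the learner has caught up on the log
        self.learners.discard(member)
        self.voters.add(member)

c = Cluster(["a", "b", "c"])
c.add_learner("d")
q_before = c.quorum()  # still a 3-voter majority
c.promote("d")
q_after = c.quorum()   # now a 4-voter majority
```

While "d" is a learner the cluster keeps its original quorum of 2, so a slow new member cannot stall consensus; only after promotion does the quorum grow to 3.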
Cool, thank you. Any questions for the etcd community? I believe, Brian Grant, you're sponsoring this one?

Yes. Yeah, any questions? So I think etcd is good as an incubating project; what do you think?

I'm not sure; it definitely has a lot of usage. Yeah, I'm very happy to sponsor that at the incubation level; it's [inaudible]. Okay. Okay, thank you. So Chris, can you follow up and help the etcd team understand the documentation process for incubation applications?

Yep, will do, no problem.

Okay, thank you. Good presentation, thanks.

Okay, and I think it's the RSocket folks next; can we go to the next slide please, Taylor? Now before we go on, I'd like to mention that I'm extremely keen to sponsor this project. I spent a long time doing some due diligence with Ben Hale, who is on the call, and the team from Facebook, who co-developed it with Pivotal. As with NATS, there is probably going to be an initial set of questions around where exactly this fits into the landscape, so please be patient with the RSocket folks; I personally believe there is a really good use case for this. Okay, so who wants to go first? Ben, or is anyone else going to take us through this?

This is Ben; can you hear me this morning?

Yep.

Great. So my name is Ben Hale; I'm a long-time Spring member and a Pivotal employee. I have with me Robert Roeser, formerly of Netflix and now at a startup called Netifi, and we also have on the phone Steve Gury, formerly of Netflix, now at Facebook. Next slide, please.

The RSocket project came about out of efforts at Netflix to think about what network protocols mean in the context of microservices. Coincidentally, in the last year or so, the Spring team has been looking really heavily at reactive programming generally, but more importantly at what it means in the Java world to start doing microservices beyond just your first or second microservice. And while we've been big fans of the Reactive Streams push-pull
back-pressure programming model, we have also observed that when it starts to leave the JVM, we run into some problems. Whether it's connecting to a database or making some sort of network connection to something else, the benefits of this push-pull back-pressure model get lost: all of a sudden you have things pushing data a bit faster than the consumer can handle, or the consumer is misbehaving in some way that isn't being communicated back to the publisher. So we somewhat coincidentally started looking at what protocols might be available to us and what improvements we could make to take this programming model that we truly believe in across the wire. At the same time, some of the staff from Netflix, or sorry, from Facebook, started reaching out to us and said: hey, we've got this protocol that we're using internally quite a bit; is this something the Spring team would like to be involved with? And we said absolutely yes, because when we talk about the RSocket protocol, we're talking about a protocol that answers a lot of questions that we see currently in modern-day microservice design. It's a protocol that's primarily message-driven, rather than being straight RPC; it's asynchronous; and it's multiplexed. It hits a lot of the high points that straight HTTP doesn't solve right out of the box. One of the other side advantages that we've been real fans of, and that we're starting to see more and more inside of our customer projects, is that there's browser support for this protocol, and we'll talk a little bit later on about how this is achieved in standard networks as well. As I said before, it supports these reactive principles from the Reactive Manifesto. Next slide, please.

One of the key things about the RSocket protocol is that it encapsulates some discrete interaction models, rather than being really generic and saying "build it yourself."
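[Editor's note] The push-pull back-pressure idea discussed above can be sketched as demand signalling: the consumer requests n items and the producer emits at most that many, so a fast producer can never overwhelm a slow consumer. Loosely modeled on Reactive Streams' request(n); this is not RSocket's API.

```python
# Toy back-pressure sketch: the producer only emits items the consumer
# has explicitly requested, bounding in-flight data by consumer demand.

class Producer:
    def __init__(self, items):
        self.items = list(items)
        self.sent = []

    def request(self, n: int):
        # emit only as many items as the consumer asked for
        batch, self.items = self.items[:n], self.items[n:]
        self.sent.extend(batch)
        return batch

p = Producer(range(10))
first = p.request(3)   # consumer can only handle 3 right now
second = p.request(2)  # it asks for more when it has capacity
```

The producer never pushes unrequested data, which is exactly the property the speakers describe losing once traffic leaves the JVM over plain HTTP.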
We've identified four discrete interaction models that encapsulate what we see most of our customers trying to do.

The first one we call fire-and-forget, and this is effectively where one side of the connection can send a message to the other side and doesn't have to wait for any sort of confirmation or response; we just do best-effort delivery, and lossiness might be tolerable. You might use this for diagnostic logs, or other stuff that isn't absolutely critical.

We also have standard request-response, which you're familiar with from something like HTTP: you send a request and expect some sort of confirmation to come back. Maybe there's a payload, maybe there's not, but you do get a positive interaction from the receiving side confirming the message has been received successfully.

We also encapsulate the idea of a request-stream: a single request may respond with multiple streamed responses, but, going back to the concept of reactive programming, the stream of responses comes only at a rate that the consumer can safely consume. So you don't end up with a huge amount of data generated on the server side, either cached there or forced across the wire as quickly as possible, overwhelming a slow network connection or a slow device sitting on the other end.

And then finally, an interaction model where anything goes: basically a bi-directional, full-duplex channel, where messages can be passed from either side back and forth at will, with responses coming as necessary from the other end.

One of the key tenets of RSocket is that we believe most, if not all, interaction between microservices can be encapsulated with these four things, and we have first-class support for doing this kind of work. Next slide, please.
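[Editor's note] The four interaction models just listed can be sketched by their shapes alone: zero, one, or many responses per request. The handler names and payloads below are hypothetical; this only illustrates the shape of each model, not RSocket's wire protocol.

```python
# The four RSocket interaction models, sketched by response shape.

def fire_and_forget(msg):
    return None                    # 1 request, 0 responses

def request_response(msg):
    return f"ack:{msg}"            # 1 request, exactly 1 response

def request_stream(msg):
    for i in range(3):             # 1 request, many responses
        yield f"{msg}:{i}"

def request_channel(messages):
    for m in messages:             # streams in both directions
        yield m.upper()

one = request_response("hello")
many = list(request_stream("tick"))
echoed = list(request_channel(["a", "b"]))
```

Everything from log shipping (fire-and-forget) to full-duplex messaging (channel) reduces to one of these four shapes, which is the claim the speakers make about microservice interactions.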
As I said, the connection, or the protocol itself, is bi-directional. There is the concept of a client and a server to establish a connection, but once the pipe is connected between two entities, either side can initiate any of the interaction models we just saw. There isn't one side that's disadvantaged relative to the other; they become equal members of the network once the connection has been established. We also have the idea of cancellation. One of the big downsides of HTTP as it exists today is that once a request has been made to the server, you effectively have no influence on what happens next. You can drop the connection completely, which may or may not be expensive depending on the circumstances, but the server is going to attempt to do a potentially very large amount of work, and you have no influence on whether that actually needs to get done. RSocket builds the concept of cancellation into the protocol itself, potentially short-circuiting very expensive operations if they become unnecessary as time goes on. Another really important feature, one that's been proven out quite nicely inside Facebook, is resumability. Within an RSocket connection, state can be maintained about a given session: the data that has been transferred across that session and successfully received by the other side. This becomes really useful in a protocol, because say you're transmitting data from a data center to a mobile device, and the person with that device is walking around on the street but eventually walks into a Starbucks and flips over to Wi-Fi. Technically the network connection has been dropped, and if you had to enumerate
a fixed state of the data so that you could start taking updates from it again, it would be very expensive, both from a network perspective and from the server-side perspective, to regenerate the state of where things were. The protocol itself supports resumption, and implementations are free to choose exactly how it works: you have a session, and even if there has been a network interruption you can resume that session where you were, and you're only responsible for the messages sent since the last message that was successfully consumed. We also have application-level flow control between two connected peers. There's the idea of back pressure, which says that the publishing end can only send as much data as the consuming end can handle. And there's the idea of leasing, which is effectively a load-balancing concept that keeps clients from overwhelming servers: clients are handed out fixed numbers of requests they're allowed to have outstanding, giving a client-side load-balancing kind of behavior. And finally, we support fragmentation of individual frames as data is sent. Especially when the payload is large, say a photo or a video, it's often very useful to be able to fragment those payloads to be kind to the network. Next slide, please. So those are the features of RSocket, the things it promises to do for you. But one of the key things that really attracted the Pivotal team is that it's really flexible. We talk about this as a protocol, but really it's only a network framing protocol; it's completely transport agnostic. It can be routed over raw TCP, which we see a lot of people doing; if you only have access to HTTP/1.1, you can do it with WebSockets; if you have HTTP/2, it builds very nicely on that; and even exotic transports such as the Aeron UDP
protocol can all benefit from the RSocket layer sitting on top. It's also payload agnostic. We'll probably see a lot of users sending protobufs across RSocket, but it's absolutely not required that that be the thing you send; you can send JSON just as easily, or maybe you're a company with your own custom binary payload. Because RSocket is just a framing protocol, it allows you to put any bag of bytes you want inside the payloads that are sent. We also liked that it's very much programming-model agnostic. Inside the Spring team we're very big fans of the messaging abstraction, where you fire off a message and it can be routed to some arbitrary piece of code on the other side based on a routing tag attached to it. So we like messaging, but we're fully aware that there's a big movement toward more RPC-style things like gRPC, and RSocket can support either of those styles depending on what you want. And finally, and I think this goes for all of the other companies working inside the RSocket project, it's language agnostic. There are really powerful implementations in both Java and C++, with a JavaScript one coming along, but we also see them for things like Kotlin, and you can envision somebody implementing it in Python, or Go, or whatever your favorite language is. If we take a look at the next slide, this is a graphical representation of that idea: the RSocket protocol is the blue layer in the middle that all sorts of different things can stack on top of, while it builds on various different transports underneath. This is one of the ways we're able to support browsers as first-class citizens in a protocol like this: it can use WebSockets or HTTP/2 where they're available, rather than some TCP-based protocol that may not be routed by various proxies and routers. Next slide, please.
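The "just a framing protocol, any bag of bytes" idea can be illustrated with a toy frame codec. The field sizes and layout here are simplified for illustration and deliberately do not match the actual RSocket frame format:

```python
import struct
from typing import Tuple

# Illustrative layout: 4-byte stream id, 1-byte frame type, then the payload.
# The real RSocket frame layout differs; this only shows the framing idea.
HEADER = struct.Struct(">IB")

def encode_frame(stream_id: int, frame_type: int, payload: bytes) -> bytes:
    # The payload is opaque to the framing layer: protobuf, JSON,
    # or any custom binary format all travel the same way.
    return HEADER.pack(stream_id, frame_type) + payload

def decode_frame(frame: bytes) -> Tuple[int, int, bytes]:
    stream_id, frame_type = HEADER.unpack_from(frame)
    return stream_id, frame_type, frame[HEADER.size:]
```

Because the framing layer never inspects the payload, the same codec works regardless of serialization format, and the resulting bytes can be carried by TCP, WebSockets, HTTP/2, or any other transport.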
Okay, so the next couple of slides are a comparison between gRPC, NATS, and RSocket, which are the closest possible competitors, or closest possible analogs, to something like RSocket. I don't think we need to go through all of these; we put them in as a reference for you to look at. The key thing is that, in general, RSocket aims to be this layer in the middle, so a lot of things you'll see gRPC or NATS do are built in. Take cancellation, which NATS doesn't have and gRPC has in a limited way: it's built first-class into the protocol. It's a full-duplex protocol, as we described earlier, where once a connection has been established either side can initiate interactions back and forth. Next slide, please. As we described a bit before, we have the idea of fire-and-forget for lossy kinds of things, we have resumability built in as a first-class citizen, and there's flow control based on the Reactive Streams protocol, which is well proven out, especially in the Java world, but starting to reach other programming languages as well. Next slide, please. And this encapsulates what we described a little before: the various languages and frameworks that support these things today. RSocket, and we'll talk about this a little later, has very large strongholds in the Java world, coming from Netflix and Pivotal, but also in the C++ world with the Facebook team. Hey Ben, it's slightly irrelevant, but the creator of NATS was actually at VMware, then Apcera, and now Synadia, which is a dedicated company behind it. Okay, next slide, please. So, RSocket today: there are over 600 GitHub stars between the big projects
that we consider part of it: Java, C++, Kotlin, and the main spec itself. As for contributors, as we said a little before, Facebook and Netflix are the top-level, high-visibility contributors and users of this protocol, but Spring and Project Reactor, our Reactive Streams implementation inside the team at Pivotal, are very big on it as well; it's a big tentpole for the next year for us. And finally, I'd be remiss if I didn't mention the Netifi team, former Netflixers who have gone and built an entire company around this protocol and what it can bring to enterprises. Next slide, please. Final slide here: we really would like to get into the CNCF because we have three very large companies currently invested in this very heavily, and one of the key things we observed when we first started contributing and collaborating is that there's huge interest in having a neutral third party for this kind of work. We want to make sure the CNCF is a place we can all go where there isn't a ton of politics going on between the three different teams; not that we think there is generally, but it's nice to have that neutral third party to help with this. RSocket itself is ideal for connecting microservices, and obviously the CNCF is a great place to start talking about that; a lot of microservice workloads are going to be running on CNCF projects like Kubernetes, so we think this is a great place to start standardizing a protocol that can help those kinds of applications. We want to expand the RSocket community beyond our Java and C++ strongholds. Kubernetes is so polyglot these days that it's a great place for us to get in front of a bunch of different language authors and microservice authors working in
languages beyond our traditional core competencies, and we hope they'll see the value once we have access to them through the CNCF. And finally, we want to integrate with the other CNCF projects where we can, because there are a lot of advantages to RSocket over something like straight HTTP or even HTTP/2, and we want to make sure those advantages can be used by various components inside the CNCF and the Kubernetes ecosystem. It's not just a one-way street where RSocket gets a lot of help from being in the CNCF; we want to give back to the CNCF and make those projects better if we can. And that, I believe, is our final slide. Are there any questions? Who is using RSocket? Well, right now the biggest user is probably Facebook, and then it's used at Netflix, so those are the two big users. And then, like Ben said, other companies are interested in looking at it, including some of the companies we've been talking to and doing some POCs with. For context on Reactive Streams and RxJava: RxJava is one of the top-starred Java projects on GitHub, and it's very popular on Android. Like Ben said, one of the big problems with it is that once you get to the network there's a very clunky set of abstractions, like circuit breakers and whatnot; this basically fills a huge gap that's felt there. So that's the tie-in for the community; people are interested in it. If you take RSocket, you also have to include all the Reactive Streams users as well. How large is your community? Do you have people from multiple companies contributing, or volunteers? Pivotal, Facebook, Netflix, Netifi; those are certainly the big four contributors as things stand now, but if you counted them up there are a bunch of individual contributors. I know we have a fair
number of small consultancies in Europe who have made significant contributions, both to spec improvements and to the Java implementation. Excellent, thank you. Chris asked in the chat: there is a spec, but what's considered the reference implementation? I'd say the reference implementation today is probably the Java one, but the C++ one is also very close. Neither of them is 100 percent implemented, but we are working very hard, certainly on the Java side, to get there. Okay, cool, thanks. I had a slightly more technical question. Several of the features, I'm thinking in particular of flow control, are actually provided by the underlying network protocols: TCP, HTTP/2, etc. Could you speak a little more about how much you added on top of that, and why you didn't just rely on the existing flow-control mechanisms? That's a good question. There are many times in a distributed system where you have plenty of network bandwidth, but your application receives an expensive call. Say you have a large payload, like a meg: you can rip through those all day long, and you can use the TCP buffer to stop your system from being overwhelmed. But maybe you have a small package, like one kilobyte, that goes off and kicks off a series of very expensive operations. So what we do is arbitrage between network-level flow control and application-level flow control; this brings flow control up to what the application actually sees. You can then, at the application level, prevent very expensive calls from coming over the network, and begin to alleviate the need for some of the circuit breaking people have been putting in place, which is one of the reasons this began to be designed at Netflix. The second thing is that RSocket's flow control is actually composable across multiple
services. If you have a chain of services, the flow control is propagated through the chain. Think of three services, A, B, and C: if C is a slow service but B has plenty of network bandwidth, B can get overwhelmed very quickly. With application-level flow control, the demand signal propagates up to the caller, A, to have it slow down, preventing thundering herds. That's one of the reasons we moved flow control up to the application level as well, and it ties into the Reactive Streams libraries provided at the Java level; there's a C++ version too. Hey guys, we're out of time. I just want to say, for me the USPs of this are distributed RxJava, good support for streams, and in particular this federated flow control. The use case you mentioned when we first spoke, but didn't bring up today, is mobile phones connected to the back end. Yep. Okay, listen, we've got to go. I anticipate there are more questions; please use the public list and the GitHub issue for them. And we're obviously looking for someone else from the TOC with a vote to be a sandbox co-sponsor. Cool, all right, thanks all. Thanks, Alexis. All right, take care all. You too, thank you.
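To close with a sketch of the composable flow control described in the Q&A: in a chain A → B → C, demand rather than data flows upstream, so a slow C ends up throttling A. Python generators are pull-based, which makes them a convenient, if loose, analogy; this is conceptual only, since real RSocket and Reactive Streams use explicit request-n signalling, and all names here are illustrative:

```python
from typing import Iterator, List

def service_c(items: Iterator[int]) -> Iterator[int]:
    # The slow downstream service: it pulls one item at a time.
    for x in items:
        yield x * 10  # stand-in for an expensive operation

def service_b(items: Iterator[int]) -> Iterator[int]:
    # B does not buffer ahead of C: generators are pull-based, so B only
    # asks A for the next item when C asks B for one. Demand, not data,
    # flows upstream, which is the "composable" flow control described above.
    for x in items:
        yield x + 1

def service_a(n: int, produced: List[int]) -> Iterator[int]:
    for x in range(n):
        produced.append(x)  # record how far the producer actually got
        yield x

produced: List[int] = []
pipeline = service_c(service_b(service_a(100, produced)))
first_three = [next(pipeline) for _ in range(3)]
```

Even though A could produce 100 items, only three are ever generated, because demand from the slow end is what drives the whole chain, which is the behavior that prevents B from being overwhelmed in the A → B → C example.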