Hi, my name is Paolo Abeni, and today I'm going to talk about Multipath TCP, MPTCP, and the ongoing effort to bring it into the Linux kernel. The idea behind MPTCP is that modern hosts usually have multiple network interfaces, and we would like to use them at the same time, both to aggregate the bandwidth of the available links and to let a connection survive the failure of a single link; plain TCP cannot do that, since a TCP connection is bound to a single pair of addresses for its whole lifetime. MPTCP addresses this by carrying the data of a single logical connection over multiple TCP subflows; the first version of the protocol is specified in RFC 6824. How does MPTCP work? Basically, it is implemented as a set of new TCP options, and it is designed as an extension of plain TCP. Any subflow can carry data for the MPTCP connection: all the subflows together transport a single MPTCP-level data stream.
Each subflow of an MPTCP connection is a normal, regular TCP connection, and all the MPTCP-specific information is carried via TCP options, so that the protocol is backward-compatible and, hopefully, middlebox-friendly: existing devices and services can still parse the traffic on the wire. If one of the peers does not support the MPTCP extension, it will just ignore that option, and that, as per the MPTCP specification, will cause a transparent fallback to plain TCP. So one peer can just tentatively try to open an MPTCP connection and end up with a plain TCP one. The first step of an MPTCP transfer is to establish a new MPTCP connection, which happens via the MP_CAPABLE handshake, performed simultaneously with the TCP 3-way handshake at connection open. The initiator adds an MP_CAPABLE suboption to the SYN, and the server replies in a similar way. At the end of the 3-way handshake, if the MP_CAPABLE handshake is successful, both peers know that the other is able to speak MPTCP, and they have exchanged enough metadata to synchronize the data transfer and to select the initial MPTCP sequence numbers. Let's look in a bit more detail at what really happens. The initiator, let's say the client, sends the SYN packet, which also carries the MP_CAPABLE option, a variable-length option that, in the SYN, carries very little information: the protocol version, and that's it. Well, this is for MPTCP version 1, which is the variant currently under development and being upstreamed into the Linux kernel; version 0 is slightly different. Let's focus on version 1. The server replies with a SYN+ACK, which should contain an MP_CAPABLE option including the key for the server, a plain 64-bit integer that will be used later for several things. The 3-way handshake is completed with the client's third ACK, which carries both the server key that the client has just received and the key for the client itself.
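As a rough sketch of what goes on the wire during that handshake, the three MP_CAPABLE option sizes can be reproduced in a few lines. The layout below follows my reading of the v1 wire format (kind 30, a subtype/version byte, a flags byte, then the optional 64-bit keys); the key values, the default flags byte, and the helper name are made up for illustration:

```python
import struct

MPTCP_OPTION_KIND = 30       # all MPTCP suboptions share TCP option kind 30
SUBTYPE_MP_CAPABLE = 0x0
MPTCP_VERSION = 1

def mp_capable(flags=0x01, sender_key=None, receiver_key=None):
    """Build the raw bytes of an MP_CAPABLE option: kind, length,
    subtype/version nibble pair, flags, then zero, one or two keys."""
    body = b""
    if sender_key is not None:
        body += struct.pack("!Q", sender_key)      # 64-bit key, network order
    if receiver_key is not None:
        body += struct.pack("!Q", receiver_key)
    subtype_version = (SUBTYPE_MP_CAPABLE << 4) | MPTCP_VERSION
    return struct.pack("!BBBB", MPTCP_OPTION_KIND, 4 + len(body),
                       subtype_version, flags) + body

KA, KB = 0x1111222233334444, 0x5555666677778888    # made-up client/server keys

syn = mp_capable()                          # client SYN: version only, 4 bytes
synack = mp_capable(sender_key=KB)          # server adds its key, 12 bytes
third_ack = mp_capable(sender_key=KA,       # client echoes both keys, 20 bytes
                       receiver_key=KB)
```

The growing option length is the point: the SYN carries almost nothing, and the keys only appear from the SYN+ACK onwards.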
Since that third packet can be a pure ACK, we have no guarantee of its reliable delivery to the other peer, so this information is repeated: the keys are present also in the next data packet. Once the first subflow has been established this way, each peer can add multiple subflows to the same MPTCP connection. Each peer, meaning that the server, for example, can theoretically open a subflow back towards the client, using a different local address, a different remote address, even a different port, even a different address family, meaning that a single MPTCP connection can use both IPv4 subflows and IPv6 subflows. All that, obviously, just to make the implementation easier. Subflows are added using the MP_JOIN handshake, which is quite similar to the MP_CAPABLE handshake: it takes place at TCP connection opening and uses some information exchanged in the initial MP_CAPABLE handshake to identify the MPTCP connection this new subflow wants to join. Let's see a few more details. Here there are the two hosts, the same two hosts that were involved in the previous exchange. Host A is using a different local address and is targeting a different remote address. In the initial SYN it includes this MP_JOIN option and, as you can see, there is actually a typo on the slide, which I'm going to explain. The MP_JOIN includes a token, labelled token A, and that is the typo: it should be token B, because the token is a piece of information derived from the keys that the two peers exchanged in the MP_CAPABLE handshake. The token is basically a truncated cryptographic hash computed on the key. The token identifies in a unique way a specific MPTCP connection: so token B, sent by the client, identifies the MPTCP connection we created before via the key KB of the server. The client also adds an additional random number.
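For the curious, the token derivation is small enough to sketch. This assumes the v1 rule that the token is the most significant 32 bits of the SHA-256 hash of the peer's key; the key value below is made up:

```python
import hashlib
import struct

def mptcp_token(key: int) -> int:
    """MPTCP v1 token: most significant 32 bits of SHA-256 of the
    64-bit key, taken in network byte order."""
    digest = hashlib.sha256(struct.pack("!Q", key)).digest()
    return int.from_bytes(digest[:4], "big")

server_key = 0x5555666677778888        # KB, exchanged in the MP_CAPABLE handshake
token_b = mptcp_token(server_key)      # what the client puts in its MP_JOIN SYN
```

Because the token is a hash, the server can keep a lookup table from tokens to established MPTCP connections without exposing the keys on the wire again.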
The server replies with a SYN+ACK including an MP_JOIN option; this MP_JOIN option carries an additional random number, plus an HMAC computed, using SHA-256, with the concatenation of the two keys exchanged in the MP_CAPABLE handshake, over the concatenation of the two random numbers just exchanged. This quite fancy, complicated stuff is used to somewhat strongly authenticate the peers, so that no other host can enter this MPTCP connection. The client responds with the third ACK of the handshake, including an MP_JOIN option with an HMAC computed in a similar way, just swapping the order of the inputs. At each step, each peer should validate the received data: for example, on the initial SYN, the server should verify that the token provided corresponds to a known MPTCP connection; on the SYN+ACK, the client should validate the HMAC provided, and so on. The MP_JOIN exchange is completed with a fourth ACK, which acknowledges the data sent by the client in the previous packet. If the MP_CAPABLE handshake fails at some point, the connection simply falls back to plain TCP; if the MP_JOIN handshake fails at some point, for example due to a bad token or a bad HMAC, or if the option is missing for any reason, the connection should be closed with a reset.
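A hedged sketch of that HMAC construction: it assumes HMAC-SHA256 keyed with the two concatenated keys over the two concatenated nonces, each side putting its own values first, and the server truncating its HMAC to the leftmost 64 bits in the SYN+ACK. All numeric values are illustrative:

```python
import hashlib
import hmac
import struct

def join_hmac(local_key, remote_key, local_nonce, remote_nonce):
    """HMAC-SHA256 keyed with the two MP_CAPABLE keys, over the two
    MP_JOIN nonces; 'local' values come first on the sender's side."""
    key = struct.pack("!QQ", local_key, remote_key)
    msg = struct.pack("!II", local_nonce, remote_nonce)
    return hmac.new(key, msg, hashlib.sha256).digest()

ka, kb = 0x1111222233334444, 0x5555666677778888   # keys from MP_CAPABLE
ra, rb = 0xAAAAAAAA, 0xBBBBBBBB                   # nonces from MP_JOIN

hmac_b = join_hmac(kb, ka, rb, ra)[:8]   # server: truncated HMAC in the SYN+ACK
hmac_a = join_hmac(ka, kb, ra, rb)       # client: its HMAC in the third ACK

# The validating side recomputes the same value from its own copy of
# the keys and nonces, then compares:
assert hmac_b == join_hmac(kb, ka, rb, ra)[:8]
```

Since both HMACs cover the freshly exchanged nonces, an attacker cannot replay an old MP_JOIN exchange even after observing the handshake.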
So, once we have one or more subflows created, we would likely want to transfer data, and we would like to transfer such data using any of these subflows. This data likely belongs, for example, to the same video stream, so we must ensure that data reception is consistent. How can that work? It works using an additional MPTCP option, the DSS, which stands for Data Sequence Signal. The DSS specifies the mapping between the TCP-level sequence numbers carried by a given packet and the corresponding MPTCP-level sequence number space, because MPTCP uses a sequence number space independent from the one of each subflow. The DSS option is also used to send back MPTCP-level ACKs. Since different subflows can use links with different speeds and can experience different TCP-level retransmissions, the data sent in order by the sender on the subflows can be received on the other end out of order with respect to the MPTCP sequence number. So the receiving endpoint must collect the data, reorder it according to the MPTCP sequence numbers, and deliver it that way to user space. The sender itself must keep the data until it receives the MPTCP-level ACK. This slide should represent the DSS mapping: we have two packets belonging to two different subflows; they may carry, for example, the same TCP sequence number, but due to different DSS data sequence numbers they map to different positions inside the MPTCP stream. The DSS exchange is quite simple: one peer just sends a data packet including the DSS option, which specifies the data sequence number for the subflow data it carries. The other peer must ACK both at the TCP level and at the MPTCP level; the two ACKs can be in different packets, with the only requirement that the MPTCP-level ACK should not come before the TCP-level ACK. We have seen that both peers can create multiple subflows using different local and remote IP addresses. They can know these different IP addresses from local
configuration, but the protocol also offers a way for each peer to notify the other about additional IP addresses, so that the peer can add more subflows towards such addresses. This is just another MPTCP suboption, ADD_ADDR, which is used to notify the peer of additional IP addresses. The protocol does not mandate any specific action in response to the reception of an ADD_ADDR option, even if the likely, expected result is the creation of additional subflows, at least on the client side. Paired with the ADD_ADDR option there is the REMOVE_ADDR option, which notifies the peer about the removal of, surprise surprise, a local IP address; it should be used by the endpoint to notify link failures and the unavailability of the IP addresses bound to the link that just failed. Again, the exchange is quite simple: one peer, it could be the server or the client, adds this option to any packet, most likely to a pure ACK packet. The option includes the IP address itself, an optional port, which could be different from the server port, and an address ID, which is used later, for example, by the REMOVE_ADDR option to identify the address to be deleted. Since this option is usually carried by a pure TCP ACK, and we have no guarantee of delivery for pure TCP ACKs, MPTCP v1 mandates an explicit MPTCP-level ACK for this option, which is a reply with a similar ADD_ADDR option carrying the echo flag and the same address information. One of the goals of MPTCP is to ensure reliability with respect to link failures: the MPTCP connection survives the reception of a TCP-level FIN or a TCP-level reset. That means the MPTCP protocol requires an additional way to explicitly close the connection, and this is done, again, via the DSS option, the Data Sequence Signal, including an additional flag, the DATA_FIN, which marks the last sequence number carried by this packet as the last one for this direction. So a DATA_FIN closes the MPTCP connection in a single direction; two DATA_FINs, one per direction, are needed to completely close the MPTCP connection. Like the TCP FIN, the DATA_FIN consumes one
single byte in the MPTCP sequence space. This is the DATA_FIN exchange: we have a DSS carrying the DATA_FIN flag, which may also carry data, but not necessarily. The connection is considered closed when the peer replies with a DATA_ACK acknowledging the data sequence number corresponding to the DATA_FIN. This is almost all for the protocol specification, at least for what I am talking about today. There are many other details in the protocol; there are other MPTCP suboptions available, for example to have a faster shutdown, and also to have, let's say, more verbose notification of handshake failures. The protocol also describes some functional components expected in a standard MPTCP implementation; we have already hinted at their existence. For example, the scheduler is the component in charge of selecting which subflow the protocol should use for sending a specific bunch of data, while the path manager is in charge of making decisions about the creation of additional subflows, or about notifying the peer of additional IP addresses to be used for this MPTCP connection. The standard also mandates many, let's say, quirks to allow interoperability even with very partial implementations. Is this stuff really new? Not really, because the protocol itself, at least in its version 0, is seven years old; on the other hand, version 1 is still a draft, so it's very... whatever. Not many major operating systems offer a core implementation of MPTCP, with the only exception being iOS. Also, the upstreaming effort for MPTCP started quite some time ago, with the first patches surfacing back in early 2017; on the other hand, the first strictly MPTCP-related patches were merged into net-next only two weeks ago, and they are only the first few bits. So there is an ongoing effort to have this thing in the Linux kernel. This effort is being developed by several companies, including Intel, Apple, Tessares, and yours truly. Now we will see what has been done in the past months, what is currently ongoing, and hopefully what we hope to achieve in a
hopefully not-too-far future. But first: there are a lot of good reasons why there is not yet a working implementation inside the vanilla Linux kernel, but there are also good reasons why we do want a working upstream implementation: there are effective use cases, requested primarily by telcos, and there are several countries, especially South Korea, I think, and Brazil, where MPTCP is currently in use. So, going back: we have said that the MPTCP protocol needs to exchange quite a bit of metadata at several points, starting with the MP_CAPABLE handshake. The amount of metadata is actually so large that, up until a little while ago, there was not enough room inside the vanilla kernel's socket buffer to carry it, and enlarging the SKB is a no-go, because it is an extremely critical structure for the kernel: increasing its size has several performance implications which we don't want. In that regard, some time ago the SKB extension infrastructure was introduced, which allows a subsystem to attach an arbitrary amount of metadata to an SKB; such infrastructure is already in use by several subsystems and soon, hopefully, by MPTCP. Another problem we had while working on the upstreaming of MPTCP: we are trying to introduce additional behavior on top of TCP, and some time ago this was not possible, as the kernel did not provide any facility to plug in this additional behavior. In late 2017 the ULP infrastructure was introduced; ULP stands for Upper Layer Protocol, and it allows kernel code to plug additional behavior on top of TCP, replacing for example the protocol operations related to a given socket, and also attaching additional context to a given socket. In order to satisfy the needs of MPTCP, we had to extend this ULP infrastructure in at least two points: one was allowing a listening socket to set up the ULP context on the accepted socket in a robust way, and this required a new ULP helper for the clone
operation; additionally, a protocol diagnostic helper has been added, to allow the ULP to expose more protocol-specific information to user space. So, as I said, the upstreaming effort is currently running: the first MPTCP-specific patches were merged in early January, and the first basic MPTCP implementation was formally submitted this week. It supports a single subflow for now, and parses the basic MPTCP options. But I strongly suggest you not to hold your breath while waiting for these patches to be merged, because the prerequisites took a lot of time even though they were quite simple things, and we expect this to still take some effort. Anyway, when and if these patches are merged, Linux will probably be the first operating system with MPTCP version 1, as iOS currently supports only v0, even if it should soon support v1 too. We have an upstreaming project with a repo where we stage the patches we plan to submit upstream; in that repository there is a working implementation of the MP_JOIN handshake. And that brings us to future plans: after getting the pending patches included, we want to push the MP_JOIN support and enough of the MPTCP protocol specification to allow the active-backup scenario, which is of some interest to some telco operators. We have most of the code for this feature; it's really just a matter of getting it merged, pun intended. After that, the goal will be to support real, concurrent subflows, and this also is something strongly encouraged, or required, by telcos, with the use case of delivering ultra-high-definition content to existing mobile devices using multiple connections concurrently, like WiFi and 4G simultaneously. This requires a bit more functionality to be included: a more advanced path manager and a more advanced packet scheduler. And this is pretty much all I have, so I fear I went too fast. There are a few references to the MPTCP RFCs and to the upstreaming project, and for whoever wants to contribute, or just be informed about the development activity, there is the mailing list, where much of this
discussion takes place before the official submissions. And that's it; so, are there any questions? The question was whether MPTCP v1 is compatible with v0. No. Well, if an MPTCP-v1-only peer tries to connect to an MPTCP-v0-only peer, the connection will fall back to plain TCP; it will not be, let's say, negotiated down between versions. We have talked about this with the Apple guys. There was another question over there? Good question, sorry: the question was which are the advantages of MPTCP over QUIC. QUIC is a rising protocol which is built on top of UDP and ensures many things, among them reliable delivery, and it also has support for multipath. One advantage of MPTCP is that it is completely transparent, or almost completely transparent, to user-space applications, while a user-space application needs to be ported to QUIC, because even the basics, let's say the creation of the socket itself, are a bit different. Anyway, yes, QUIC is a sort of competing protocol for MPTCP; also, QUIC has only user-space implementations, and this is sort of a pro or a con depending on your perspective. And maybe one follow-up question: does the router need to know about MPTCP, or is it something completely transparent? So the question was whether routers need to be aware of MPTCP, especially in regard to network address translation. Routers do not need to know anything about MPTCP; they just need to avoid dropping unknown options. There is no real reason to do that; some routers do, but that's another topic. With respect to network address translators, the MPTCP protocol RFC gives specific attention to that, and this is why, for example, in the ADD_ADDR option there is an ID field, which identifies the address independently from the effective IP address value. That said, if you are behind NAT, the peer still can't connect to the IP address that MPTCP announced. Where were we... I think you just answered my question, but: what is the real-world use case for this? You mentioned iOS, so what is it used for, beyond the 5G use case? So the question was which are the real-life use cases for MPTCP beyond
the 5G one, which I honestly don't know much about, sorry. ISPs are asking for MPTCP specifically to deliver ultra-high-definition content to existing mobile phones using the existing links, that is, WiFi plus 4G, and also to allow a good quality of service from, and that's an important point, the user's perspective: when using a mobile device you may be in mobility, with a voice call on top of a data connection over WiFi; you walk outside the access point's reachability area, you switch to the 4G connection, and you want this data stream to still be alive. Sorry, so the question is: when you have multiple subflows on multiple paths with very different transfer rates, the slowest one can badly affect the overall transfer rate; are there any scheduler-level arguments to compensate for the transfer rates? They are not specified by the standard; they are referred to as implementation-level details. Yes, there is quite a bit of logic behind scheduling data on different subflows; there are congestion-control algorithms similar to TCP's, but at the layer above. Still, in the basic scenario, where you have a very slow path and a really faster one, it's quite easy to use both paths fully just by looking, for example, at the congestion window: the sender has to pick the subflow with the widest free portion of the congestion window. I think there was another question. As a client, suppose I got a malicious MPTCP server that started sending me ADD_ADDR options with random IDs, tens of thousands of them; would I try to connect to all of them? Is the server authorized to give me these IDs to connect to? So the question is security-related: let's suppose there is a malicious MPTCP server that sends some arbitrarily large amount of ADD_ADDR options, forcing the client to connect to whatever addresses. The server is somehow legitimated to do that, being part of the initial MP_CAPABLE handshake: it knows the token, the keys, et cetera. The protocol itself does not
mandate any compulsory action in response to an ADD_ADDR: the expected action is to create a new subflow when you receive one, but there are several constraints. Some constraints come from the protocol itself: for example, the maximum number of addresses, and so of subflows, is bound by the 8-bit ID field, so 256. Beyond that, a protocol implementation could simply not support more than a certain number of subflows, and beyond that, there are reasonable safeguards put in place by the implementation itself: nobody is planning to let the client open hundreds of random connections. The question is: if the IP address provided via the ADD_ADDR option doesn't work, it's not reachable, what is the expected behavior? I need to double-check, but the recommendation from the RFC is to not use a failing path for a reasonably high period of time after the failure itself manifests. The question is: if the client is behind NAT, and all the addresses it has are behind NAT, so that the server cannot reach it directly, how can the client establish new subflows? Does it depend on the server sending additional addresses? It does not: the client is allowed to establish new subflows according to its own policy. For example, if you have multiple routes towards the same IP address on different links, you can easily establish multiple subflows using the addresses bound to each of these routes; as in the example you described, we have all the information to establish at least two subflows. Any other questions? So the question is about comparisons between MPTCP and SCTP. I think the guy over there is the more appropriate person for this question, because I know little about SCTP. SCTP is a quite complex protocol which allows several features, like the reliable delivery of content, datagram services, stream services, multiple paths and so on; as such, SCTP tends to have quite complex implementations, and it tends to be less fast than TCP. So, to achieve, I guess, bandwidth in the aggregated use case, for example,
MPTCP would likely perform better. Should I repeat that for the recording's sake? So: TCP is a much more mature protocol, with much more testing than SCTP, and with MPTCP we are reusing a lot of TCP features, from congestion control up to the basis for robustness and performance, and it's very difficult for SCTP to fill the gap towards TCP, which is getting a lot of love from a lot of people. Other questions? The question is: what does an application need to do to use MPTCP? Actually, it depends on the specific implementation. For the iOS implementation, the application needs to do exactly nothing: there is a system-wide setting saying every TCP connection is actually an MPTCP connection, and that's it. For the upstream implementation, we use a different IP protocol number, so the application has to change the IP protocol when creating the socket, and then everything else is exactly the same. An eBPF program, just to say something, could plug in this behavior, forcing an MPTCP socket creation instead of a TCP one, in a completely transparent way for the application. A question over there?
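That socket-level difference can be sketched like this. It assumes a Linux system and the protocol number 262 that the upstream work assigns to IPPROTO_MPTCP, with the transparent fallback to plain TCP that the protocol design encourages; the helper name is made up:

```python
import socket

# IPPROTO_MPTCP is 262 on Linux; older Python versions may not define it.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_stream_socket():
    """Try to create an MPTCP socket; fall back to plain TCP when the
    kernel (or the socket library) does not support MPTCP."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # No MPTCP support in this kernel: plain TCP behaves the same
        # from the application's point of view.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

sock = open_stream_socket()
sock.close()
```

After this one-line change at socket creation, all the usual calls (`connect`, `send`, `recv`) are unchanged, which is the transparency argument made above.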
So the question was about possible out-of-order delivery of MPTCP data to the peer. Each subflow guarantees in-sequence delivery of its own content; the problem is that the contents are carried by the subflows in a quite independent way. For example, if you have two subflows, you send the first 10 bytes on subflow 1 and, simultaneously, 10 bytes on subflow 2, and these latter 10 bytes are located, say, from MPTCP sequence 7000 onwards, because you made room for the data you want to send on subflow 1. It may happen that these 10 bytes reach the server much earlier than the data sent on subflow 1, and then, from the MPTCP sequence number point of view, the data will appear out of order. The protocol reconstructs the appropriate sequence using the DSS option, and that's it. Does that answer the question? The application will not see anything: it will receive the data in sequence, because the protocol, in the kernel, has reordered it as needed. Do you have to have two implementations, in case MPTCP is not supported by the receiving side, or can either side just use MPTCP for everything? You don't need two: if the other side does not support MPTCP, the connection falls back to plain TCP, so you can just use MPTCP for everything. A question over there, sorry. There are multiple substreams; are those substreams multiplexed over a single connection, or are multiple connections created, so that one substream is not blocked by packets missing in another substream? First of all, I'm not sure I got the question completely correctly. If the question is whether we can have an MPTCP connection on a single path using multiple subflows: that is possible, but not very useful. Each subflow uses a specific TCP-level connection, with its own five-tuple: every subflow will use a different TCP connection, so we will have different TCP connections. Other questions? The question is which protocols are used on top of MPTCP. The most honest answer I can give is: I don't know; I'm betting on HTTP. Other questions?
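The receive-side reordering just described can be modelled with a toy reassembler. Everything here is illustrative: it keys segments only on their DSS data sequence number and ignores real-world details like overlapping or duplicate mappings:

```python
import heapq

class MptcpReassembler:
    """Toy model of receive-side reordering: segments arrive from any
    subflow tagged with their DSS data sequence number, and are released
    to the 'application' only in MPTCP sequence order."""

    def __init__(self, initial_dsn=0):
        self.next_dsn = initial_dsn     # next byte we can deliver in order
        self.out_of_order = []          # min-heap of (dsn, payload)

    def receive(self, dsn, payload):
        heapq.heappush(self.out_of_order, (dsn, payload))
        delivered = b""
        # Release every chunk that is now contiguous with the stream head.
        while self.out_of_order and self.out_of_order[0][0] == self.next_dsn:
            _, chunk = heapq.heappop(self.out_of_order)
            delivered += chunk
            self.next_dsn += len(chunk)
        return delivered                # in-order data ready for the app

rx = MptcpReassembler()
assert rx.receive(7, b"world") == b""                 # subflow 2 arrives early
assert rx.receive(0, b"hello, ") == b"hello, world"   # subflow 1 fills the gap
```

The two asserts mirror the example from the answer above: the later chunk is held back until the gap at the head of the MPTCP stream is filled.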