…I hope I pronounced that right — are going to talk about IPC. So a round of applause for IPC in one, two, three. Thank you for the introduction, and sorry for the short delay; there was a problem with the connection. Welcome to our talk, IPC in one, two, three. I'm Dimitri, this is Sander, and we are researchers from Ghent University in Belgium — not very far from here, from Brussels. Our topic is the next-generation internet. If you saw the presentation this morning: we work on European funded projects like IRATI, PRISTINE and ARCFIRE, projects on the topic of the future internet. And we do research on testbeds funded by the European Commission and by the US. FIRE — the person that gave the presentation this morning is very involved in FIRE, and NGI is its successor. We also use GENI, the Global Environment for Network Innovations, in the US. These are very large testbeds that provide us with a lot of servers and interconnections so we can do large-scale experimentation.

So why do we need a next-generation internet? Well, if you have been following the news, there are a lot of things that can go wrong. There are attacks on the infrastructure, like DDoS attacks on servers, and scalability problems where BGP routers are overflowing with entries. There are also attacks on the physical infrastructure — from sharks, apparently: they are attracted to undersea cables by the magnetic fields and disrupt the connections. And the Russians are developing ships that could in theory cut undersea cables. So there are disruptions of the infrastructure. There's also internet security: there are a lot of hacks going on, and bugs like Heartbleed are very important. And there are problems with privacy — we all know the Snowden leaks.
And your data is actually worth a lot to a lot of companies, and it's not only the government that's after it: a lot of private companies data-mine everything. Our methodology is experimental research, and it's quite hard to do experimental research on the internet of the future because it isn't here yet. So we develop it ourselves: we build a lot of tools and software, deploy it on our testbeds, and then write research papers about it. What we're going to talk about today are the prototypes that we have in development. We are looking at problems like reliability and privacy on the internet from an architectural perspective.

What you've all been taught in school is the seven-layer OSI model. It tries to build networks from the perspective that every layer has a different function: the layers are split by function. But when you look at how things are implemented, that's not always the case. Encryption is in the presentation layer in OSI, but it's implemented in the transport layer with TLS, and usually also in your application. And there's a lot of technology crossover even in the lower layers: recent developments are implementing routing, which is typically a layer-three function, in layer-two networks. There are also technologies like MPLS at "layer two and a half", VPNs which don't really fit the model, and IP tunnels which don't really fit the model either.

A couple of years ago John Day wrote a book, Patterns in Network Architecture, in which he proposed an alternative architecture for large-scale networks, and it's recursive: every layer is exactly the same. So we don't put every function in a different layer; all layers have all the functions, and you don't have to enable the ones you don't need. The only way the layers differ is by scope.
So you have a bigger network running over a smaller network, and if you need a VPN you just run another layer on top. For that to work you need an identical API between the layers, so that you can stack them — and currently that's not the case. With the sockets API you can access every network layer as you like: if you want TCP transport you use the internet address family and ask for a stream socket; if you want UDP you use a datagram socket. But the API is a little bit different for every layer that you want to use.

So what we have been developing is a prototype called Ouroboros. What is Ouroboros? It's a decentralized packet-switched network subsystem whose API is based on IPC, redesigned from the ground up. It follows this recursive model. It blurs the difference between local IPC, local networks and wide-area networks for the application developer: it all looks the same, it looks like IPC. It gives you, we hope, a better service than you are used to from TCP and UDP, because we have different ways of implementing this functionality. It has increased privacy, security and anonymity, and it's a very simple API — the simplest API that I know of — and that's where we're going to start.

So let's look at the Ouroboros API. You have your computer, and you have two processes sitting there; your kernel gives them two process IDs. Then we have this layer, and we will gradually tell you what's in there. The recursive model always works over a layer. This is very abstract for now, but we'll get to what a layer actually does and what it consists of. So you have a client and a server on your local PC — just one machine — each gets a PID from the kernel, and we allocate something that's called a flow. It's an abstract construct: a bi-directional pipe where you write packets on one end and have a reasonable probability that you can read them from the other end.
So that's a flow, an abstract construct, and the function of the layer is to provide you with a flow. The first call is flow_accept: a server application accepts flows. It returns something that we call a flow descriptor, and any resemblance to a file descriptor is purely coincidental. On the client side you have the call flow_alloc, which starts a flow towards the server. You could do that based on the PID, but that's a little bit difficult: every time you start a server it gets a new PID. So we have a second function, called bind: assign a name, in whatever namespace, to that process. Then you register that name in the layer — a function we will come back to later — and you can allocate a flow to the name. That's roughly the full API. So it's IPC in one, two, three — these are the three functions: binding a name to a process, registering the name in the layer, and allocating a flow. After that you can read and write on the flow — the signatures of those calls are exactly the same as the read and write system calls you know — and when you're done communicating you deallocate the flow.

Your kernel currently doesn't know these calls, so we had to implement them ourselves, and we chose to do it in user space. So we have a user-space subsystem. We implemented it in C89, just to make it as portable as we could and keep the dependencies very low, and it's based on POSIX (2001/2008) — mostly for the threading: we use a lot of pthreads and mutexes, robust mutexes if they're available on the system. It runs on GNU/Linux, on FreeBSD, on macOS Sierra, and if you have the Windows Subsystem for Linux it works perfectly fine there. There's some work to get it to run on GNU Hurd and on Android; Android doesn't completely implement POSIX, so that's more work and we haven't done it yet, so the prototype doesn't work there. So the core part of the system is a daemon.
We call it the IPC Resource Manager daemon (IRMd). You can start it by just running the binary, or you can enable it using systemd so that it runs as a daemon on your system. This is a complete source code example. It's C, so if you know C that helps; if you don't, it's reasonably simple and self-explanatory. You have the full source code of a server and a client in C, so you can see the API is extremely simple: the server accepts a flow, the client allocates a flow to a certain name, which we hard-coded to be "echo". The client sends one message to the server, the server sends it back, and they deallocate the flow. This is the output if you run it: the echo server starts, says it got a new flow when the client allocates one, the client says hi, and that's it. It's a very simple API, and it's the same API always.

What are the functions of a layer? The layer has to provide all the functionality needed for two processes to communicate with each other. First of all, the bind operation is not part of the layer: binding a process to a name is local to your system, because process IDs never need to be sent anywhere over any network. The only two things the layer has to perform are registering names and flow allocation. These functions keep track of and figure out where the endpoints of communication are — that's the directory service, which maps names to locations in the network. The layer has to figure out how to get packets from one point to another, so it implements routing functionality; it has to effectively forward those packets, so it implements forwarding functionality; and it has to allocate and release resources — that's flow allocation.
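The slide code itself isn't captured in this transcript. Here is a minimal sketch of what the echo pair described above could look like, assuming the `<ouroboros/dev.h>` API with the five calls named in the talk (flow_accept, flow_alloc, flow_read, flow_write, flow_dealloc); the exact signatures and the dispatching main() are illustrative and may differ from the prototype's current headers:

```c
/* Sketch of the echo example, reconstructed from the API as described in
 * the talk; exact signatures in <ouroboros/dev.h> may differ. */
#include <ouroboros/dev.h>

#include <stdio.h>
#include <string.h>

static int server(void)
{
        char    buf[64];
        ssize_t len;
        /* Accept a flow from any client (blocking, no QoS, no timeout). */
        int fd = flow_accept(NULL, NULL);
        if (fd < 0)
                return -1;
        printf("New flow.\n");
        len = flow_read(fd, buf, sizeof(buf));
        if (len > 0)
                flow_write(fd, buf, len);        /* echo it back */
        return flow_dealloc(fd);
}

static int client(void)
{
        char buf[64];
        /* Allocate a flow to whatever process is bound to the name "echo". */
        int fd = flow_alloc("echo", NULL, NULL);
        if (fd < 0)
                return -1;
        flow_write(fd, "Hi!", 4);
        if (flow_read(fd, buf, sizeof(buf)) > 0)
                printf("Server says %s\n", buf);
        return flow_dealloc(fd);
}

int main(int argc, char ** argv)
{
        return (argc > 1 && strcmp(argv[1], "server") == 0)
                ? server() : client();
}
```

Note that the server still has to be bound and registered with the irm tool, as shown later in the talk; the program itself contains no addresses or ports.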
This is not an exhaustive list — there are more functions of a layer, like congestion control, but we're not going to discuss that today; we don't have time. Let's look at local IPC over the Ouroboros subsystem. With a TCP/IP stack, for local IPC you use your loopback interface. It goes through the entire stack: your application delivers to TCP, down to Ethernet (usually virtualized), through the kernel, and back. In our system we can do that as well, but of course it's recursive, and there is no real need for all these layers, so you can do it directly over a loopback layer.

We'll show you the actual commands to perform this on your system. You start the Ouroboros subsystem, then you start the server, and the first thing it does is indicate to the subsystem that there is an Ouroboros-capable server running. This all happens behind the scenes — it's implemented using the linker, before main() is even called — so the process just tells the subsystem: I am here, I am an Ouroboros process. The next thing you do is bind that process, 6417, to the name "server", so the Ouroboros subsystem knows that 6417 listens to the name "server". Then we create the layer: it's one command, a bootstrap command for the layer itself, with all the functionality for moving packets between the client and the server. Then we register the server in that layer. We don't register the name "server" directly; we hash it, and you can choose the hash algorithm as you like. Hashing makes the implementation more secure, because people cannot start feeding it very large strings, and it also helps because what goes over the network is less legible, so people can't easily figure out what's happening. And then the third step: you start the client. It's a ping client; it will send a number of messages to the server.
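The three steps just described can be sketched as a command sequence. The irm subcommand syntax and the names used here (the layer name, the oping ping tool) are from memory of the prototype and may differ between versions:

```shell
# Sketch of local IPC over a loopback layer -- syntax illustrative.
irmd &                                       # start the Ouroboros subsystem
oping --listen &                             # server announces itself to the IRMd

irm bind proc 6417 name server               # 1. bind: PID 6417 listens to "server"
irm ipcp bootstrap type local name lo layer local_layer   # create the loopback layer
irm register name server layer local_layer   # 2. register the (hashed) name

oping -n server -c 3                         # 3. allocate: client pings "server"
```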
When you're done, you kill the client, the flow gets deallocated, and we're done. So that's how local IPC over the subsystem works, and it's always the same three steps: binding, registering and allocating. That's all there is to it. There is very little configuration: you don't have to worry about ports or addresses, that's all completely hidden from you.

So let's look at Ouroboros over layer X. We're not on one system anymore; we're on two systems. To run over some layer X, we wrap that layer with our API. So first we start the Ouroboros subsystem, and we create, in this case, a layer instance on that machine which is attached to the Ethernet device. The only configuration we give the system is: I want something that connects to Ethernet; I give the layer the name "ethernet" and say it's connected to my wireless interface. After that, I register a name in the layer. Locally, the Ethernet layer does something like ARP — but not exactly, we implemented it ourselves. It registers that hash and says: on this machine, if I get a request for communication with "ioq3" — I'll explain that in a moment — I will accept it.

So instead of only using our own applications: ioquake3 is an open-source project, developed from the Quake III Arena engine that was GPL-released years ago, and we wrote a patch for it so that you can run the game over our stack instead of over TCP/IP. Then I bind the program — this is the binary for the dedicated server. Previously we bound the process, but then you have to look up the process ID every time, so here we just say: whenever this binary is started, have it listen to the name "ioq3". So we start the server and it's done. That's the setup of the server side; there is absolutely no configuration involved apart from saying it has to be on that wireless interface. For the client, we start the client.
We start the Ouroboros subsystem, connect it again to the wireless interface, and then all the client does is start the game client and say connect — we modified the game client so that it takes our commands. So we say: connect over Ouroboros to the ioq3 server; it does, and you are in the game. It's only three commands you have to give: you register the name, you bind the process to the name, and from the other side you allocate a flow to the name. Pretty simple.

Now, reliability: Ethernet is not very reliable, so you can have packet loss and jitter. Handling that is normally implemented by TCP/IP, and TCP/IP is usually in a different layer. In Ouroboros this is not the case: it's in the library, and every program that links against our library performs its own connection management, its own encryption and its own checksumming. So when you have two processes communicating with each other and something in between fails, you can recover from a lot of crashes; only if your actual program crashes do you lose your connection. The library does fragmentation, encryption and checksumming.

So we said Ouroboros over layer X: we implemented it over Ethernet, but we actually have a proof of concept that runs directly over NetFPGA. That's not over layer two — it's over layer one, the physical layer. We're not using MAC addresses or the MAC interface; it's a point-to-point connection over the NetFPGA implementation.
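The server- and client-side setup described above can be sketched like this. The subcommands, the eth-llc type name, the binary names and the +connect syntax of the patched client are assumptions based on the talk, not verified syntax:

```shell
# Server machine: wrap the wireless interface in an Ouroboros layer.
irmd &
irm ipcp bootstrap type eth-llc name ethernet layer eth dev wlan0
irm bind program ioq3ded name ioq3   # every instance of the binary answers to "ioq3"
irm register name ioq3 layer eth
./ioq3ded &                          # start the dedicated server

# Client machine: same layer bootstrap, then join the game.
irmd &
irm ipcp bootstrap type eth-llc name ethernet layer eth dev wlan0
./ioquake3 +connect ioq3             # patched client allocates a flow to "ioq3"
```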
We have it over layer two — that's the Ethernet I talked about — and this works on macOS over the Berkeley Packet Filter, on FreeBSD over the Berkeley Packet Filter or netmap if you have that installed, and on Linux it uses raw sockets. The only thing this layer really does is take your packets and flush them out of the network interface towards the correct destination; the configuration has already happened beforehand. And for layer three/four we implemented it directly over UDP.

So all the functions the layer has to provide — flow allocation, routing, forwarding and the directory — are implemented in different ways. For the raptor (NetFPGA) it's all done by us, because it's over layer one. For Ethernet LLC it depends on how your Ethernet is configured, but it uses Ethernet; the directory we implemented ourselves in Ouroboros. We could have used ARP — the ARP specification allows you to resolve any layer-three address to a layer-two address — but if it's not IP, most switches will just drop it for security reasons: they assume somebody is doing something very weird on this network and won't allow it. That's why we're not using ARP; it was dropping our packets. For UDP, flow allocation needs nothing extra; routing is what you usually use, OSPF or IS-IS; forwarding is IP; and we implemented a dynamic DNS that provides our directory service. So now we have two systems, and Sander is going to show you Ouroboros over Ouroboros.
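The layer-three/four wrap mentioned above can be sketched the same way as the Ethernet one. The udp type name, the option names and the addresses here are illustrative guesses, not verified syntax:

```shell
# Wrap UDP/IP in an Ouroboros layer; a dynamic-DNS server acts as directory.
irmd &
irm ipcp bootstrap type udp name u1 layer udp_layer \
    ip 192.168.1.10 dns 192.168.1.1     # addresses purely illustrative
irm register name server layer udp_layer
```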
Yeah — so, Ouroboros over Ouroboros, because it's a recursive architecture, so it's Ouroboros over Ouroboros over Ouroboros over Ouroboros. In the previous example that Dimitri explained, we were communicating between two systems: two applications communicating just over the Ethernet layer, allocating a flow over that Ethernet layer. Of course we don't want to stop there; we want to extend the scope over which we can communicate. So we wrote a special application called the normal IPC process (IPCP), which uses the same API as a regular application, and one of the main functions of this IPCP is to forward the packets it receives, so that in the end we can extend the scope to an internet level. Together, these normal IPCPs in a normal layer provide IPC for applications. You can see it's basically the same picture as the applications over the Ethernet layer, but now it's over the normal layer — the applications don't care what layer they are using.

I've depicted the path here: if the left application talks to the right application, this is the path through the network. It goes to the normal layer, which uses the services of the Ethernet layer to reach the next IPCP in the normal layer, which in turn uses the services of the next Ethernet layer to reach the final IPCP, until it arrives at the application. I've drawn it like this, but it's important to realize that the normal layer also uses the mechanism of flows in the Ethernet layer, and that no information is exchanged between the different layers.

So let's try a slightly more difficult example. We can keep adding layers to extend the scope. In this example we have three applications — one on system one, one on system three, one on system four — and I've added some layers. Let's say the leftmost application wants to talk to the rightmost application; then the path through the network would be the following. It's interesting to note that the middle application cannot talk to the left
application: it would need another IPCP in the topmost normal layer in order to be able to communicate with it. It would, however, be able to communicate with the rightmost application by just using the services provided by the second layer, the first normal layer on its system. Within each normal layer the IPCPs cooperate with each other to form the layer, and they are all equal: the architecture is completely decentralized, there is no central component, which also makes it more secure and scalable.

The main objective of such a normal layer I already explained, but here it's shown as a top-level view of a layer. The blue dots represent the endpoints of the flow, and the idea is that the IPCPs forward the packets to the destination. Think back to how this happens in TCP/IP: there you need to deploy a lot of services, such as a DHCP server for distributing addresses from a central authority; DNS, which is also not completely decentralized; the different routing protocols, different pieces of software, firewalls, things like that. In our system this is no longer needed: the only thing you need is the IPCP, which collaborates with the other IPCPs in the layer to provide IPC to its applications.

So how do we construct such a layer? Let's go through it step by step. Of course, the first IPCP that you create needs to be bootstrapped. In the example we have two systems connected to each other via Ethernet, so we have the Ethernet layer from before, which here we call E, and on top we want to create a normal layer called N. There I've created the first IPCP. When you bootstrap an IPCP we use again the handy irm tool that we developed: if you type "irm ipcp bootstrap" it will output the different things that you can configure. Since this is the first IPCP in the layer, we configure it as we please. For now it's not super
extensive yet, our list, but for instance you can select the routing policy that you want: link-state routing, or link-state with loop-free alternates, which is a bit more resilient; the default is the plain link-state algorithm. So you can configure it as you please, depending on the environment it operates in and the scope it should have — for instance the address size: in a very large network you want to pick a much bigger address size.

Now let's actually create one. Again we start the IPC resource manager daemon, the IRMd. First we create the Ethernet IPCP again, because we want to use the Ethernet layer for constructing the normal layer, as demonstrated a couple of times by now, and as we can see it has been created in the system. Next we instantiate the actual normal IPCP, here with default options. As you can see, we created it in the layer N, we gave it the name N1, and we also specified auto-bind. Of course the name needs to be bound and registered, just as for any other application, so that it's reachable; if you specify auto-bind it will bind to its unique name, in this case N1, but also to the layer name N, so that if you want to communicate with the layer you can reach any IPCP that is a member of it. Finally we register these two names in the Ethernet layer so that they are reachable.

Enrolling into a layer is the next step, to extend the layer. We now have the bootstrapped IPCP, but of course we want to add more IPCPs to the layer. What happens is that a new IPCP that is not yet configured communicates with a member of the layer to authenticate with it, obtain the configuration, and obtain an address in the layer. So in the end we would end up with
this very simple system of one normal layer on top of the Ethernet layer. Continuing with the example: on the left side we see again system 1, which we just configured, with the Ethernet layer and the normal IPCP on top, and on the right side we just created an Ethernet IPCP so that we can build on top of it. Finally you execute "irm ipcp enroll", which enrolls the new member with the existing member, and as you can see it's a very simple operation: they exchange the configuration, the new member obtains an address, as you can see there, and in the end it is a new member of the layer. Then you also register these names, so everything is available if yet another IPCP would like to join.

Once it is a member, the next thing you want to do is set up data transfer connections, because becoming a member only means you know how the layer is configured; you also want some actual connections to forward data over. So let's assume we have this data transfer connectivity graph. You can see that every IPCP has an address; again we have the endpoints of the flow, and we want to get from the left IPCP to the top-right IPCP, so we just send packets through the layer. What you see here is actually the full header, and it's a lot shorter than IP and TCP. We don't send source addresses, so it's a lot more secure and anonymous; the only thing needed is the destination address, so that you know where the packet is going. You synchronize all that state at flow allocation: when you allocate the flow you exchange information and generate an endpoint identifier, which you also send in your packet, plus a time-to-live value in case you have routing problems. This can be 6 bytes, and that's even for a quite big network.

So how do you set up data transfer connections? Again with a very simple irm command: you connect N1 to N2 for the data transfer component, and when you do that, as you can see, it worked. The layer
has a directory too, of course — as Dimitri explained, in our case it's actually a DHT. When an IPCP sets up its first data transfer connection it also enrolls into the directory. And apart from the data transfer network, you can also set up a separate management network within the layer to disseminate routing information: since we use link-state routing, information about the different links has to be disseminated, and as you can see it's a tree, so you can just send the updates down the tree. The command is very similar to the data transfer one: you just connect N2 to N1 — it doesn't matter in which direction, N2 to N1 or N1 to N2, because all IPCPs are equal — and here we connect the management component, and as you can see they add each other as a new neighbor in the management network.

To summarize, here is Ouroboros over Ouroboros: these are the different functions of the layer. For the raptor, Ethernet and UDP layers Dimitri explained them; in the case of a normal layer we of course implement everything ourselves. The flow allocation is completely Ouroboros; the routing is link-state, in the spirit of IS-IS, with its own addresses, run directly over your sub-network technology; the forwarding is also Ouroboros; and the directory is a DHT — Kademlia, to be completely correct. A normal layer also has an enrollment phase, whereas almost none of the legacy technologies have one. Wi-Fi does, for instance: having to enter your password to connect to the Wi-Fi network is a form of enrollment, of joining the network.

For reliability, as we said, it sits between applications, and since it's a recursive architecture the IPCPs are all just applications as well. So it's one layer that is repeated, and the functions of reliability, flow control and checksumming can also be repeated — but they don't have to be. Over Ethernet, for instance, you probably don't want to do
retransmission between those two IPCPs, but if you have a Wi-Fi layer you can get a lot of packet drops, so it's probably interesting to do retransmission there. And in the top layer, if the client and server applications want a reliable connection, they should also do retransmission.

So far we presented the synchronous API. With a lot of flows, you don't want to start a thread for every flow that you create, so we also provide an asynchronous API, modeled on kqueue: Linux has epoll, FreeBSD has kqueue — more performant versions of the select syscall — and when you read the research papers, kqueue seems to be the better design. It's very simple: when you create a new flow you add the flow descriptor to a set, then you wait until one of the flow descriptors becomes ready, read the packets that are queued on it, and do what you have to do.

So, wrapping up — I'll briefly summarize what we explained. Ouroboros is our research prototype based on this recursive internet model. It provides a single, very simple abstraction for communication between two programs running on two machines; it completely abstracts the network, which simplifies how you write distributed applications. You saw some source code — we don't have a lot of time here, so maybe it's a lot of information to process in a short time — but the idea is that we have a very simple API and a very simple command line. In all the configurations it's almost a zero-configuration network: you never have to worry about addresses or ports when you are configuring servers. It's a secure and trustworthy network design, and it hides all the complexity. Looking at it in a very abstract way: you have a client and a server, and we always send encrypted data between them — of course
if you don't want to, you don't have to encrypt, but you can always do it. Everything that you send to the network: destinations are registered as hashes. This serves a function like DNS, but there is no encrypted DNS in use: everything you look up on the internet goes to a DNS server, and it's always unencrypted. So if you're surfing to Google, the connection itself is encrypted, but the DNS lookup for the IP address is not, and your network operator can see it. The normal layer doesn't contain source addresses, so for somebody trying to analyze traffic and figure out where it's going, it's a lot harder than in current networks. It's completely decentralized — there is no single central entity; the way we do "DNS" is a DHT that runs everywhere in the network — and the layers are completely self-contained, so there is no information sharing between different layers.

But before you start recompiling your kernel without a TCP/IP stack: there's a lot still to be done. This is a research prototype. We still have a lot to do: distributed address assignment — currently we just hand out a random 64-bit address — efficient layer design, efficient congestion control. The implementation needs bug fixing and optimization. We haven't implemented encryption yet; we plan to do that using the GNU crypto library or OpenSSL. And we have to deploy it more widely: we are looking for other people to start trying our stuff so we can build it out at larger and larger scale, because even on the testbeds we use we can go up to hundreds of nodes, but not to thousands or millions, which is eventually where we would like to go. Also, the API is of course non-standard, so your existing software isn't written for it — though it's a very simple one. What we would like to do is have a pre-load shim, loaded before the GNU C library, and then we can trap
your socket calls and run the software, if you want, over Ouroboros. We are on freenode, the channel is #ouroboros, we have a mailing list, and there's the website — please have a look if you find what we've been doing these last two years interesting. We have to acknowledge that this is partly funded by the Flemish government, so if you are not from Flanders, this development has not been wasting your tax money. We would like to thank our colleagues who have already seen the presentation and gave us feedback, because the previous version was probably even more incomprehensible than this one; our European and US project partners who were involved in the research projects, for all the discussions; our current and past master thesis students who have been involved in testing, deploying and extending the software; and our supervisors, for the opportunity to work on this ambitious project. So that's all, thank you.

Now we have five minutes — a bit more, eight minutes left — for questions. Do I see any? Just a second.

You say that the source address isn't included in the packets, but then how do you do two-way communication?

For those who are leaving, please try to do so quietly, thank you. So, if I got your question correctly: we don't send the source address in the packet. When you allocate the flow, that's the first thing you do: you retrieve the name you want to allocate the flow to from the directory, you get the address, and you form a flow allocation request. This is a packet that does contain the source address. It is sent to the endpoint, which communicates with the IRMd and can see whether the flow can be allocated or not, and then a flow allocation response is sent back to the other side. With just these two messages they know each other's address and the endpoint identifiers they generated. To relate it to something you probably know: if
you know TCP, you start with a three-way handshake where you send a SYN and get a SYN-ACK. You could actually do the same in TCP: in the three-way handshake you already negotiate the ports and the source address, so you could store them at the endpoints and never send them again. It's a similar operation; it just doesn't happen in current networks.

Thanks for the presentation. I'm wondering how the assignment of names on a global scale would work — for example, how would we register fosdem.org?

The question was about registering names on a global scale, for instance fosdem.org. This is something we don't have yet, but you would need a naming service that maps the name to the layers it is available in — so indeed, a global namespace for names would be required. In the end you might have a sort of public internet layer, just as we have right now, and you could then allocate a flow to the name "fosdem".

Given your answer to the first question, what made you choose flow routing versus packet routing for streams? It seems that you chose flow routing instead of packet routing, like you established a flow.

No — the definition of a flow here is different than in, for instance, MPLS networks. To repeat the question: why did we select flow routing instead of packet routing? The answer is that we are doing packet routing. The flow is just the definition of state in the layer, so that you have the endpoints; but from that point on you are basically doing packet-switched networking within each layer.

Okay, any other questions? Any further ones? I don't see hands, okay — so thank you very much for the talk, a warm applause for our speakers. For those of you who are leaving, please look whether there are any trash bottles or anything you can take with you and transport outside; that makes everything a bit happier and easier for the rest of us. Thank you.