So welcome to my talk today: NorNet, building an Internet testbed based on open source software. First, of course, I will give you the motivation for our testbed, particularly with the topics multi-homing and multi-path transport. And then I come to some interesting details about our testbed setup, and particularly some details about the software: virtual machines, containers, and multi-homed networking.

Well, first, let me introduce the motivation of our testbed infrastructure, starting with how classic Internet communication works. Is the audio OK? I hear some echo. It's OK? OK. Well, classic Internet communication is, for example, accessing the World Wide Web. So you know, when you use your web browser, it will make requests to a web server for certain objects. The web server will deliver these objects, and finally the browser can display the result. So in this case, we have a very classic client-server communication: the browser device as well as the web server device each have one IPv4 address, and the communication is made over the Transmission Control Protocol, TCP, on the transport layer. This is the way the Internet has worked for many years, and it will also work in the future.

But of course, in the future, we will have a lot of new applications: for example, the Internet of Things, smart cities, virtualization, sensor networks, and so on. And these applications, of course, have some stronger requirements on the communication. For example, nowadays, and at least in the future, we will have widespread support for IPv6. So usually, today, devices are already IPv4/IPv6 dual-stack, and therefore you usually have multiple IP addresses per network interface. And of course, today we have ubiquitous mobility. Everybody probably has such a smartphone. And you know, when you use your smartphone here in the room, it will have a certain IP address from the wireless LAN, at least outside of the hotel.
You will get another IP address from your mobile broadband provider. And of course, if you start a download here in the room and go outside, your download will be interrupted, because the IP address changes and TCP cannot handle this. And also, there are frequently devices with multiple network interfaces. Your smartphone is, of course, such a device, with for example mobile broadband (LTE, UMTS), wireless LAN, Bluetooth, and so on; and your laptop computer as well. Classically, of course, there had already been devices with multiple network interfaces: routers. But nowadays, the device in your pocket is such a device as well.

So what can you do when you have multiple interfaces, multiple IP addresses? Well, you can make use of multi-homing. That is, you can make use of redundancy: if you have at least one working pair of IP addresses between a local device and a remote device, you can still communicate. So if you rely on Internet connectivity, you can just connect to multiple Internet service providers simultaneously. If one ISP has a problem, you can still communicate over the other ISP. Well, it's a very nice thing, of course. But look at this user of multi-homing: he does not look that satisfied. So what's the problem with multi-homing and redundancy? Well, if you have two Internet service provider connections, of course, each month you will receive two Internet bills. So redundancy is expensive. And usually you only appreciate redundancy when something goes wrong; in the usual case, nothing goes wrong. So you just see your Internet bills, and you see that you pay double the amount.

So what can you do when you have multiple ISP connections? Well, you can utilize multiple connections simultaneously to get better throughput for your downloads, or you can optimize for delay, or something like this. You can do this with multi-path transport.
Obviously, TCP, a quite old protocol, does not support multi-path transport. But with an extension, it is possible: Multi-Path TCP, MPTCP. Or, if you are using the Stream Control Transmission Protocol, SCTP, instead of TCP, it is also possible, with an extension called Concurrent Multipath Transfer for SCTP, CMT-SCTP. In any case, CMT-SCTP or MPTCP, you get multi-path transport. And since multiple IP addresses and multiple connections are more and more common today, it is a quite hot topic in standardization in the IETF, and of course also in research. So it makes sense to do some research to optimize it.

Therefore, I come to the topic of multi-path transport. Let me very shortly introduce how it works with MPTCP or CMT-SCTP. We have an application that should make use of multi-path transport on the application layer. Obviously, we need on the transport layer a transport protocol that is capable of multi-path transport; that is either MPTCP or CMT-SCTP. And on the network layer below, we have IPv4, and nowadays also IPv6, usually both simultaneously as dual stack. Now we need paths, which are denoted as subflows in the terminology of MPTCP. These paths or subflows can go over IPv4 or over IPv6, and in the same transport connection you can even mix IPv4 and IPv6 paths. So you can use both protocols simultaneously. So that is already straightforward. But what is still missing is, of course, an instance to distribute the data onto all the possible subflows; it is called the scheduler. This is already the principle of Multi-Path TCP or CMT-SCTP.

But if you have a somewhat closer look into the details, of course, you will see some challenges. One challenge is fairness. You probably imagine paths or subflows as being disjoint inside the Internet. But you cannot guarantee that all paths are disjoint in the whole network. So paths may overlap. And if they overlap on a shared bottleneck, of course, this causes a fairness issue.
So if you have two paths sharing the same bottleneck with a concurrent TCP flow, then your Multi-Path TCP flow will get double the bandwidth of the concurrent TCP flow, which is, of course, unfair. And another challenge in multi-path transport is scheduling. So probably you consider the paths as being of quite equal quality-of-service properties, that is bandwidth, delay, jitter, loss rate. But from your mobile phone you already know that when you use wireless LAN, you usually have a high bandwidth and low latency compared to mobile broadband. But on the other hand, on mobile broadband you will rarely see packet losses. So if you combine a subflow over wireless LAN and a subflow over mobile broadband, you have two very dissimilar paths in the network. So scheduling becomes a challenge as well for multi-path transport.

Before I come to more details about the testbed, first let me shortly introduce how you can use multi-path transport, particularly with open-source software. Well, of course, you can use the Stream Control Transmission Protocol, SCTP, which is already some years old. It is standardized in RFC 4960, and it features multi-homing as well as multi-streaming: you can even multiplex multiple streams over a single transport connection, avoiding head-of-line blocking, which is a very nice feature. So actually, SCTP, together with many protocol extensions, has a lot of very nice and advanced transport features. And the very nice thing about SCTP is that it works out of the box in Linux. So your standard mainline Linux kernel supports SCTP, and all the main Linux distributions provide the SCTP module. So it works out of the box in your applications if you use SCTP. Unfortunately, the Linux implementation of SCTP is not really state-of-the-art. It unfortunately lacks many features, particularly including CMT-SCTP, that is, the feature for multi-path transport.
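As a quick sketch of that "out of the box" support: on a common Linux distribution you can check for kernel SCTP support like this (a sketch; the checksctp helper is part of the lksctp-tools package, and loading the module requires root):

```shell
# Check SCTP support on a stock distribution kernel.
sudo modprobe sctp            # load the SCTP module if not loaded yet
checksctp                     # from lksctp-tools: reports whether the kernel supports SCTP
cat /proc/net/sctp/assocs     # lists currently established SCTP associations
```
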
So actually, it is, of course, an idea, maybe for some student projects, to do some enhancements of the SCTP implementation in Linux. Actually, I did this with some students some time ago, and CMT-SCTP in particular is probably a nice master student project. Unfortunately, at the moment we have no CMT-SCTP in Linux; but if you want to use CMT-SCTP on an open-source operating system, you can use FreeBSD, which has a state-of-the-art implementation of SCTP. And particularly, it has a very active community maintaining this implementation, so it is absolutely state-of-the-art, and it is also the reference implementation for the IETF people standardizing SCTP. Well, at least you can use it as open source.

On the other hand, you can use MPTCP, which is also standardized as an RFC, featuring multi-homing and multi-path transport; so you get both. And a really nice feature of MPTCP is that it is somewhat backwards compatible with TCP. So an MPTCP endpoint can, without any problems, communicate with an old TCP implementation. Of course, you will not have multi-path transport in this case, but at least you have the backwards compatibility. And particularly, you can communicate with MPTCP over existing middlebox devices like firewalls or network address translation and port address translation routers. This is very nice, because you have many of these devices deployed in current networks, and for MPTCP you usually do not have to change anything in the network. SCTP, on the other hand, is, of course, a completely new protocol, so it needs to be supported in firewalls, for example. And the other nice thing about MPTCP is that you get a very nice implementation under Linux from the Université catholique de Louvain in Belgium. It is a quite nice and active community. Unfortunately, you need to compile your own kernel; it is still not in the mainline kernel, hopefully in the future. And of course, if you want to use FreeBSD instead, that is also possible.
Swinburne University in Melbourne has written an implementation of MPTCP for FreeBSD. But this is a Linux conference, so probably your question is now: how can I use MPTCP under Linux? First, you need to think a little bit about the routing. When you have connected a computer to the Internet, usually you have, for example, one Ethernet interface configured. If you get a second ISP, you can connect it, for example, to a second Ethernet interface and add it to the configuration. Now we expect to be able to communicate over both networks, but it will not work that way: if we set a default route via the first ISP over eth0, all the traffic goes over the eth0 interface, and nothing over eth1. So this does not work. But fortunately, Linux has all the features necessary to solve this issue very easily: so-called routing rules. What is necessary in this scenario is a selector deciding which interface to use for which packet, based on the source IP address: if a packet has its source IP address set to eth0's IP address, it goes over eth0; if it has eth1's address, it goes over eth1. And this can be quite easily configured on Linux.

So let me give a more complete example. We take the first interface, eth0, give it an IP address and set a default route. Now the computer can communicate with the Internet, at least over eth0. But what we do now is create an additional routing table, table number one, and put the route for the local network as well as a default route into table one. The same we do for the interface eth1: set the IP address, and put the route for the local network as well as the default route via its router into table number two. Now, in total, we have three different routing tables. So how do we select the right table for a packet that should be routed? Well, we need a selector, based on an IP rule. So we create an IP rule: if a packet comes from the first IP address, use table number one.
The next rule is: if the packet comes from the second IP address, use table number two. And if you do not find the numbers of the tables very intuitive, you can even give the tables names. For example, if you are connected to the DFN, the Deutsches Forschungsnetz, and to Telekom, call it dfn for table number one and telekom for table number two. But this is just a convenience function; internally, of course, all tables have numbers.

How can you now have a look at the full configuration that has been created? Well, there is the ip rule show command, showing you all the IP rules, particularly, of course, the rules for the first network and the second network we have created for table number one and table number two. And as I have said, it is possible to do name mappings for such table numbers. In this case, for example, table main is defined as 254, and you could also define other table names. But this is just for enhancing the readability of the output of the ip rule show command; internally, as I have said, all tables are just enumerated. And of course, you can use the ip route show command to show table number one, table number two, and also table main, or 254 in numeric notation.

Well, now everything is there for IPv4. But nowadays we also have IPv6. You can do absolutely the same thing for IPv6, just that you have to set the option -6 for the ip command; otherwise the commands are exactly the same, same syntax. And what you have done now is, of course, to create a network configuration for both network interfaces. And now MPTCP, or in fact also CMT-SCTP, is able to choose the right outgoing ISP for a packet just by setting the right source address into the IP packet which is sent out. That is, particularly, also the same for IPv6. And as I have said, you can even combine IPv4 and IPv6 subflows in the same Multi-Path TCP or CMT-SCTP connection. With this, now everything is ready for testing Multi-Path TCP.
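The configuration described above can be sketched as follows; the interface names, addresses (placeholders taken from the documentation prefixes 192.0.2.0/24 and 198.51.100.0/24) and router addresses are purely for illustration:

```shell
# ISP 1 on eth0: address, normal default route, plus routing table 1.
ip addr add 192.0.2.10/24 dev eth0
ip route add default via 192.0.2.1 dev eth0
ip route add 192.0.2.0/24 dev eth0 table 1
ip route add default via 192.0.2.1 dev eth0 table 1

# ISP 2 on eth1: address, plus routing table 2.
ip addr add 198.51.100.10/24 dev eth1
ip route add 198.51.100.0/24 dev eth1 table 2
ip route add default via 198.51.100.1 dev eth1 table 2

# Selectors: pick the table based on the packet's source address.
ip rule add from 192.0.2.10 table 1
ip rule add from 198.51.100.10 table 2

# Optional readability: name the tables in /etc/iproute2/rt_tables,
# e.g. by adding the lines "1 dfn" and "2 telekom".

# Inspect the configuration (add -6 for the IPv6 equivalents):
ip rule show
ip route show table 1
ip route show table 2
```
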
So we need to download the kernel sources from the Université catholique de Louvain; it is in fact a GitHub repository. And once you have configured, compiled, and booted your own kernel, all TCP connections will, by default, be Multi-Path-TCP-capable. That is, if the remote endpoint is also Multi-Path-TCP-capable, you will have a Multi-Path TCP connection: both endpoints will negotiate all the available IP addresses and establish subflows. You just need to check that the routes are okay, with the IP rules, to make it work. And then you can use common test tools like Wireshark or Tshark to observe the packet flow. Or, particularly if you want to do some performance experiments, a very nice test tool is NetPerfMeter. Actually, it is written by myself, and you can download it from my web page; it has also been contributed to Debian Linux as well as Ubuntu Linux. Well, that is all that is necessary to get Multi-Path TCP running inside Linux systems.

But of course, now you probably want to do some larger-scale tests in the real Internet. Actually, this is what we also intended in the past, when we did some research at the University of Duisburg-Essen. But we started small initially. We first created a lab setup to have a systematic environment to do research on multi-path transport, beginning with CMT-SCTP under FreeBSD, and later, of course, also Multi-Path TCP. So we asked our network administrator to give us some network cards, some cables, some switches, some computers, network equipment, and we put everything together. And the result was: it was not working as expected. Well, actually, we had a very large and diverse combination of network cards, switches and so on. And it was of course quite cheap, but it was only quite cheap on paper.
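A minimal test session might look like the following sketch; it assumes the out-of-tree MPTCP kernel mentioned above (whose sysctls live under net.mptcp), and the remote host name is purely a placeholder:

```shell
# Verify that the MPTCP kernel is active (sysctl name as used by the
# out-of-tree MPTCP kernel series, not the later mainline one):
sysctl net.mptcp.mptcp_enabled

# Observe MPTCP signalling (MP_CAPABLE, MP_JOIN, ...) on the wire:
tshark -i any -Y 'tcp.options.mptcp.subtype'

# Performance test with NetPerfMeter:
#   on the server:  netperfmeter 9000
#   on the client (remote.example is a placeholder host name;
#   frame rate 0 = saturated sender, frame size 1400 bytes):
netperfmeter remote.example:9000 -tcp const0:const1400
```
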
Actually, some issues showing up were, for example, that network cards in certain combinations had some strange behaviour, for example reducing the speed from 100 Mbit/s to 10 Mbit/s, just for a few milliseconds, then going up to 100 Mbit/s again, because of some strange implementations or whatever. And these strange issues of the hardware, of course, triggered a lot of bugs inside the SCTP implementation of FreeBSD. But finally, all the effort invested in this setup, which took quite a long time, particularly due to debugging, was quite worth it, because we were able to show that prior simulations of such an environment were useful; that was actually the intention of setting up this environment. And due to all the bug hunting and bug fixing necessary, we were able to give a lot of useful bug fixes back to the FreeBSD SCTP community. And as part of this work, I also wrote a network performance testing application, NetPerfMeter, which is open source.

Well, actually, this very small setup had already resulted in a lot of ideas and learning effects. And particularly, of course, one of the basic ideas was: well, we are doing research on Internet protocols, but this is a very small intranet testbed; so let's go to the real Internet. So I motivated my colleagues at the University of Duisburg-Essen, where I worked, my colleagues at Fachhochschule Münster in the city of Steinfurt here in Germany, as well as my colleagues at Hainan University in the city of Haikou, in the Hainan province of China, to make similar testbed setups and connect everything over the Internet, with two different Internet service providers at each site. So we then had a very small, but very nice, intercontinental testbed setup, already with three sites, to do interesting research, initially on CMT-SCTP under FreeBSD, but later particularly also on Multi-Path TCP under Linux.
And particularly, due to the connection to China, we had very different path characteristics, which were quite challenging. Actually, this small setup led to a lot of new ideas. And one idea was, of course: we have a three-site testbed, which is quite small; can we make it larger? Of course, larger testbeds require sufficient funding. But fortunately, there was a similar research activity with very similar ideas, creating a large testbed for multi-homed systems. So I got in contact with them, they hired me as a research engineer, and I went to Norway to build up this new testbed called the NorNet testbed, consisting of two parts: NorNet Core, which is the landline part, as well as NorNet Edge, the wireless part.

NorNet Core consists of site setups like this one. In total, we have 21 sites, 11 in Norway, 10 abroad. At least the Norwegian sites have a setup of four x86-64 servers. The other sites abroad are contributed by researchers and usually have somewhat smaller setups, or are even fully virtualized. This gives a very nice opportunity to do large-scale Internet research on connectivity.

And of course, as I have said, mobile networks and mobile devices are becoming more and more widespread; in fact, you have mobile devices everywhere. So obviously, a very important part of the testbed is also a wireless testbed, called NorNet Edge. For this, we need hundreds of locations distributed all over Norway. So we cannot distribute hundreds or even thousands of such server setups to hundreds of sites. Instead, we had to shrink everything down to a custom-made embedded system called the UFO board, running Debian Linux and connected to one or more mobile broadband networks. We have about 250 of these devices, shipped all over the country of Norway, which is somewhat larger than Germany. So we have a very nice coverage. Unfortunately, I cannot go into too much detail about NorNet Edge, because of time restrictions.
Therefore, I have to restrict myself to the wired part, NorNet Core. NorNet Core, as I have said, has 21 sites: 11 in Norway, as well as 10 abroad, particularly four in Germany as well as two in China, for example. And we therefore have quite a large distribution all over the world: from the city of Longyearbyen, at the University Centre in Svalbard on the island of Spitsbergen, just about 1,200 kilometers away from the North Pole, so a very northern location; as well as, in the south, for example, a site at NICTA in Sydney, and of course sites in the Hainan province in Southern China. So it is a very nice distribution. And particularly, of course, we have sites connected to up to four different Internet service providers, usually the research network as well as additional commercial ISPs. Some of these connections are, for example, ADSL connections. So we get a very diverse connection setup, making it possible, for example, to also evaluate the performance that is observed by a real home network user over an ADSL connection, not only over high-speed university networks. And interesting to mention is also that about two thirds of our connections are already IPv6-capable. So we have IPv4 as well as IPv6.

I cannot go into too much detail here, but what I want to show you is a small visualization of the structure of the networks: the autonomous systems between all the different 21 sites. As you can see, it is somewhat preliminary, but at least you can see that there is quite a diverse setup of different autonomous systems. And particularly, for example, for communication between China and Germany or Norway, traffic can go westwards over the United States, or it can go eastwards over Eurasia as well as Dubai. And this direction of the data can even change over the time of day, and it can differ between different ISPs. So it is quite an interesting and challenging setup.
And then, particularly, it comes to the remote setups. Well, we have systems distributed all over the world. And important to note is that our remote servers are not only remote servers; they are really located at remote locations. As I have said, we have a setup in the city of Longyearbyen on the island of Spitsbergen. So this is the road to Longyearbyen on a very nice spring day. The temperature, when I took this photo, was just about minus 30 degrees Celsius; in Fahrenheit, it is minus 22. So note that it can get quite cold there, particularly if it is not such a nice spring day but a winter day. And in winter it is, of course, completely dark for some months. And cold temperatures are not the only challenge of this site setup: on the island of Spitsbergen, there are also wild polar bears roaming around. So it is actually quite dangerous to be in this area, and for non-local people it is always a requirement to have a local guide with a gun attending you, to protect you from the polar bears which can appear. So you see, it is quite a challenge to go to such sites, and it is necessary to think about what you can do when something goes wrong.

You are computer scientists, so you know Murphy's law: anything that can go wrong will go wrong. And our experimental testbed is, well, using experimental software; that is the intention of the testbed. So things can go wrong with experimental software. How can we avoid that, if something goes wrong, it brings down, for example, a whole site, making a visit at a site thousands of kilometers away necessary? Well, we use virtualization. The physical hardware gets actually a quite lightweight installation of Ubuntu Server Linux, and everything more complicated and more experimental runs inside KVM-based virtual machines, particularly the routing functionality as well as, of course, the experiments of the testbed users.
That is, if something goes wrong, and things will go wrong due to Murphy's law, we can just log into the physical machine and fix the virtual machines, by replacing them or reinstalling them.

So let me go through a few more details about the physical machine setup. As I have said, we use Ubuntu Server Linux because it has long-term support, a five-year support period, which is quite a nice thing. Therefore, we decided to use Ubuntu Server LTS, but we did some customization, and one customization was the file system used. By default, Ubuntu Server uses the ext4 file system. But actually, the performance of ext4 is not very good, and particularly it is not very resilient. Frequently, when working with ext4, we noticed the problem that somebody had turned off a machine, or the power was lost, and the machine came up and requested a manual file system check. Of course, that is no problem when the machine is in your office; but when the machine is thousands of kilometers away, it is a problem. So the resilience of ext4 was considered quite awful, and we therefore used ReiserFS. Actually, ReiserFS version 3 was the default for many Linux distributions years ago. Unfortunately, the main author of this file system, Hans Reiser, killed his ex-wife and is now in jail. This then created some discussion about the file system. But actually, his file system never killed my data. It is actually very resilient and a very good file system, so we use it for our machines.

Of course, we have also considered other file system options, particularly Btrfs; for testing purposes, I installed Btrfs on one of the machines. After a few days, I just replaced it again with ReiserFS, because the performance for hosting virtual machines was actually quite awful. So there is probably still a lot of work for the developers to do. Therefore: ReiserFS. Actually, what would be very nice, probably, would be to test Reiser4, which is, of course, not in the mainline kernel.
Therefore, it was considered too much maintenance effort to test Reiser4. The same holds for ZFS, which is now also provided by Ubuntu. Maybe, as some future work, we can test other file systems. At the moment, we still stay with ReiserFS, because it has shown to be quite resilient, quite good.

The file system was one customization we made on these systems. The other one, of course, concerns the virtual machines. Initially, we ran everything based on VirtualBox. At the time we made the initial installation of the testbed, this was quite a good choice, because we had experience with VirtualBox, and there was also similar research in some cooperations with Oracle going on. But unfortunately, VirtualBox has some annoyances. Oracle wants the users to install a binary blob module providing features like, for example, VNC. There is, of course, VNC support available as open source, but then you have to compile your own package. So it was necessary to maintain our own VirtualBox package, which, of course, took a lot of time. So finally, I replaced everything with KVM.

Well, the physical machine setup was one thing. The other interesting thing is the router, the so-called tunnelbox. It has this name because it establishes tunnels between all sites of the NorNet Core testbed. Why do we use tunnels? Because it is easy to get one public IP address per ISP per site. Obviously, we need such an address; but it is very difficult, of course, to get dozens or even hundreds of IP addresses per site, because the IPv4 address space is exhausted. So we use tunneling between the sites, between the routers, and apply our own addressing scheme inside the tunnels. Therefore, we have plenty of addresses available for IPv4. Of course, we use private addresses; but with the tunneling, based on GRE over IPv4 as well as IPv6 over IPv6, we can use a very systematic address allocation, just using separate fields for provider, site, and node, and a similar scheme for IPv6 as well.
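As an illustration of such a scheme (the concrete field layout here is hypothetical, for illustration only, and not necessarily the one used in NorNet), addresses could be derived from provider, site, and node numbers like this:

```shell
# Hypothetical systematic allocation: 10.<provider>.<site>.<node> for
# IPv4, and the same three fields inside an IPv6 ULA prefix.
nornet_v4() {  # usage: nornet_v4 <provider> <site> <node>
  printf '10.%d.%d.%d\n' "$1" "$2" "$3"
}
nornet_v6() {  # same fields, hexadecimal, under fd00::/8
  printf 'fd00:%x:%x::%x\n' "$1" "$2" "$3"
}

nornet_v4 80 2 5   # provider 80, site 2, node 5 -> 10.80.2.5
nornet_v6 80 2 5   # -> fd00:50:2::5

# Between two sites, a GRE tunnel then carries this addressing, e.g.:
#   ip tunnel add gre-site2 mode gre local <own public IP> remote <peer public IP>
```
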
And so we have the flexibility of doing address allocations for research experiments at the sites. And with these addresses, of course, direct communication is possible between the sites, over the tunnels. But the question is, of course: how is it possible to reach the outside Internet? Well, if a user needs communication with the outside Internet, for example for downloading packages, or for updating a Git repository or whatever, then everything is routed over the central site. This gives some security, because we can do some logging, but it also avoids legal issues, particularly in Germany. Maybe you are familiar with the German system: it has a thing called Mitstörerhaftung. In effect, it says that the person who subscribes to an Internet service provider is responsible for the IP address. So if some lawyer claims that this person has downloaded some MP3 file, the person has to prove their innocence, or pay a lot of money to the lawyer. So in fact, it is a way for lawyers to get a lot of money from normal Internet users. Therefore, in Germany it is very difficult to get, for example, access to open wireless LANs. And to avoid such issues when somebody, for example in Germany, is hosting a NorNet Core site, we just route all the traffic to the central site over the research network in Norway; then it goes out with a Norwegian IP address. Norway does not have problems like the Mitstörerhaftung in Germany. So this problem is avoided.

And some more details about the router; I will go very shortly over this, because we do not have so much time left. Well, the router also provides DNS infrastructure, because handling IP addresses can be somewhat difficult. Remembering all the details about the systematic address allocation is easy for a developer, of course, but maybe not so easy for a normal testbed user. Therefore, we have established our own private top-level domain .nornet inside the DNS running on the routers.
And with this, of course, it is possible to very easily allocate names to the nodes. So you can very systematically give the nodes useful names. Actually, the idea was to give all nodes different names, particularly names related to the location. For example, this is a part of the city of Essen where this site is located. So it is then very easy, at least for the developer, to remember to which site a node belongs. And the DNS setup can also do some nice things like IDN automatically, which is particularly interesting, of course, for nodes in China. But this is just a nice feature. A more convenient function of the DNS is, of course, also to provide more information about devices and site locations. Particularly, for all IP addresses inside our internal DNS, we have the exact geolocation provided as LOC resource records. And since on most devices it is possible to log in via SSH, a very nice feature is also to use SSHFP resource records, providing the SSH public key fingerprints of these devices entirely inside DNS, so it is not necessary to manage such lists manually. It is quite a convenient thing.

The other convenient thing provided by the router is, of course, also to run the Squid HTTP proxy, because at the sites there are a lot of containers hosted; I will come to this later. The containers have very similar installations of Fedora Linux, and obviously, if you update some packages, you will do this in multiple containers. So instead of downloading everything multiple times over the same Internet connection, it is nice to have an HTTP cache for caching such packages, to avoid that too much bandwidth is wasted on, for example, package downloads.

Let me now come to the research node software. The research nodes are the virtual machines actually running the tests and experiments of the users. Well, the usual research node is actually a quite small KVM-based virtual machine, managed by the PlanetLab Central software, the PLC software.
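These DNS conveniences can be queried with standard tools; the host names below are placeholders, but LOC and SSHFP are standard DNS record types, and OpenSSH can check host keys against SSHFP records directly:

```shell
# Geolocation of a node, stored as a LOC resource record:
dig node.site.nornet LOC

# SSH public key fingerprints, stored as SSHFP resource records:
dig node.site.nornet SSHFP

# Let ssh verify the host key against DNS instead of a manually
# maintained known_hosts list:
ssh -o VerifyHostKeyDNS=yes user@node.site.nornet
```
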
Maybe you have already heard of PlanetLab. PlanetLab is a large Internet testbed, and they have actually quite nice software for managing research experiments: this PlanetLab Central software. And we did not reinvent the wheel again for our testbed; we are using the PlanetLab Central software, with some extensions for NorNet Core, of course, giving you access to Multi-Path TCP and multi-homing with multiple Internet service providers. Yeah, and these research nodes are KVM-based virtual machines running, in fact, Fedora 23 Linux, but with a customization: of course, a Multi-Path-TCP-capable kernel, using the Multi-Path TCP implementation for Linux from the Université catholique de Louvain in Belgium.

These research nodes then host the user experiments. And the user experiments run in LXC containers; they are denoted as slivers in the terminology of the PLC software. In fact, each LXC container, or sliver, has its own Fedora 23 installation, pre-installed with a nice set of development tools, Tshark, and so on. So usually the user of the testbed will find most software already out of the box inside these containers, which is quite nice. And of course, each sliver has exclusive own IP addresses, IPv4 as well as IPv6, that is, one address per ISP of the site. And of course, the user has root permission inside such a container, so the user can just install arbitrary software inside and do most of the configuration.

Obviously, such a research node should host maybe 10 to 12 different experiments. So, of course, giving each user a complete copy of a Fedora installation would consume a lot of disk space. And here we make use of a very nice feature of Btrfs: Btrfs snapshots. So, instead of giving each user a complete copy of all the files of our pre-installed Fedora system, we create one template as a snapshot and just clone it for all the containers.
So, in fact, if the user does not change the files, they will not be stored many times. Instead, copy-on-write is used: only if a user modifies a file is a copy physically created. So Btrfs inside the research nodes is a very, very nice thing and saves a lot of space. Containers make it very easy to host many research experiments at a site because they are very lightweight. Therefore, they are popular, of course. But in some cases, it can be useful to instantiate full virtual machines. This is also possible in our testbed, but currently it still has to be done manually, because the PlanetLab Central software does not support full virtual machines. For the future, we therefore have some plans for enhancements in the direction of using OpenStack and more advanced tools for managing full virtual machines. But I will come to this later. First, let me give you a very short overview of what you can expect from a container, a so-called sliver in the terminology. This is what the user will get at a remote site, at a research node. The user can log in by SSH. And when logging in, the user gets a very nice overview of all the details about this container, just for convenience. In particular, you can see here that the user gets one IP address per ISP of the site. This site is located at Hainan University in the city of Haikou in the Hainan province of China. It is connected to two different ISPs; therefore, you have two different IPv4 addresses here. And since IPv6 is also supported by the testbed, you will have IPv6 addresses as well. And as I have said, it is possible to just look at the IP address to get information about the ISPs. For example, 80 stands for the China Education and Research Network, 81 for China Unicom. But this is just a detail.
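As a small illustration of reading the ISP out of such an address: the 10.&lt;provider&gt;.&lt;site&gt;.&lt;node&gt; layout assumed below is an assumption based on the numbers just mentioned, and the table only covers those two providers:

```python
# Sketch: the second byte of a testbed-internal IPv4 address identifies the
# provider (assumed layout: 10.<provider>.<site>.<node>). Only the two
# provider indices mentioned in the talk are listed here.
PROVIDERS = {
    80: "CERNET (China Education and Research Network)",
    81: "China Unicom",
}

def provider_of(address: str) -> str:
    octets = [int(part) for part in address.split(".")]
    if octets[0] != 10:
        raise ValueError("expected a testbed-internal 10.x.y.z address")
    return PROVIDERS.get(octets[1], f"unknown provider index {octets[1]}")
```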
So if you are a user of the testbed, you can also just use information from DNS to look up information about the ISPs, for example. Just to mention, our user also gets, of course, a kernel with MPTCP. So in this setup, MPTCP will work out of the box. There is no need to set up IP rules; this is already handled by the router, the tunnelbox. So you can make use of MPTCP directly within your containers. That is a very nice thing. And for security, I also display here the public key fingerprints of the SSH server, because usually you log in by SSH. And in fact, it would be a nice thing if the user, when accepting an SSH public key, also checked the fingerprints. Hopefully, the users actually do this; at least, I encourage it. Of course, the user then gets this container with a nice pre-installation of software. So in the usual case, the user can simply compile his own software there, or use DNF, the package management system of this Fedora system, to install custom software from the Fedora repositories with "dnf install", or run software requiring superuser privileges, like tcpdump, or the more advanced tool Tshark from the Wireshark project. You see, this is quite convenient for the user. Well, with this, I come to the conclusion. Nowadays, we have more and more devices with support for multi-homing. You have, for example, your mobile phone and many other devices having multiple network interfaces, and maybe devices even connected to multiple Internet service providers simultaneously. So it makes a lot of sense to think about advanced features like multipath transport to make use of this capability. And therefore, it makes sense to have a very nice testbed, which we have with NorNet: a quite large-scale testbed infrastructure inside the real Internet, including ISP connections which are consumer-grade, like, for example, DSL connections. So not only research networks.
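To come back to the MPTCP support mentioned above: on the testbed's out-of-tree MPTCP kernel, ordinary TCP sockets are multipath-capable by default. For comparison, here is a sketch of how an application requests MPTCP explicitly on recent upstream Linux kernels; the value 262 is Linux's IPPROTO_MPTCP, and the TCP fallback is an assumption for portability:

```python
import socket

# Linux defines IPPROTO_MPTCP = 262; Python exposes the constant from 3.10 on.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_stream_socket() -> socket.socket:
    """Try to create an MPTCP socket, falling back to plain TCP."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP support: a normal TCP socket still works.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)
```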
And in particular, we rely very much on open source software. All NorNet software is, of course, open source, and we have tried to make use of open source software projects wherever possible. NorNet Core itself is also an open testbed. That is, if you have an idea to make use of a testbed, to test your own applications, do performance evaluations, use it for research, or whatever, just ask, and then we can make it possible. So actually, the idea of operating NorNet Core is to work somewhat like PlanetLab: if you want to use the testbed, you contribute a little bit to the testbed, for example by making a small setup of your own at your own site. You do not have to contribute full servers. You can even virtualize everything, for example in a VMware cluster or KVM cluster or whatever, or use one older machine. That is no problem, because we do not require very high performance. And once you contribute something to the network, you can of course make use of the resources at the other sites. So if you have ideas on using the testbed, just ask. What we plan for the future of NorNet Core is, of course, to extend its scope beyond multipath transport. Currently, our focus is very much on things like Multipath TCP, multi-homing, redundancy, resilience, and so on. But in particular, what we intend to do in the very near future is to go more in the direction of network function virtualization and software-defined networking. We particularly want to build in support for things like Docker and OpenStack. Also, we want to go more into the direction of cloud computing and applications. We have, in fact, a small but quite widely distributed cloud system already, and we want to make some more extensions in that direction, for example with OpenStack. And we have actually already got some funding for this purpose. Unfortunately, my time is almost over now. So I therefore refer you to our website, www.nntb.no.
There you can find a lot more details about the NorNet Testbed project, including a lot of publications, as well as, of course, links to the open source Git repositories of the project. And with that, I am at the end of my talk. Thank you for your attention. I hope you have enjoyed this presentation today. And now we probably have a little bit of time for a few questions before the social event. Actually, for the social event, the first bus leaves at 6 o'clock, but there are further buses. So, are there any questions? It is also possible, of course, to use it for other purposes. But we had a particularly strong focus on Multipath TCP, and for this purpose, we got Multipath TCP running out of the box inside the infrastructure. But if you have other ideas, it is of course also possible to run them inside the infrastructure. What NorNet can particularly offer you is that we have a very wide distribution all over the world, with some focus on Norway, of course. But we have sites on four continents in seven different countries. And in particular, we also have ISP connections that are very representative of normal Internet users. For example, we have a couple of ADSL connections in different countries. So if you have some software that is intended to run, for example, on home user PCs, you can make use of these ADSL connections to evaluate the performance of your software. You can, of course, also have a look at our website, which has some links to projects running on the testbed. It is not 100% complete, unfortunately; it needs some updates. But you can get a nice overview there as well. Any more questions? Yes? Pardon? It is, of course, possible to run full FreeBSD virtual machines inside the infrastructure. And we actually have some FreeBSD virtual machines running at some of the sites, yes. Any more questions? To mention about FreeBSD: since these are full virtual machines, it is of course still necessary to do some manual setup of them.
So this is particularly a thing we want to improve in the future: to have management infrastructure to automate this, of course, because it is not supported by the PlanetLab Central software. The PlanetLab Central software just manages the containers. But in the future, we want to enhance this to make it much easier to use custom operating systems. Any more questions? Then, thanks for your attention.