We're here today at the one-day seminar on Scientific Network Management for Cloud Computing, so the focus is very much on telecom networks: how they do or don't deliver quality, and how they can be managed. We're here partly as proponents of this DeltaQ approach to reasoning about networks, which we're applying within the Cardano project, and partly to talk about how we're applying it there. I've personally also been talking about how we can emulate the quality of networks and network links, and how you can assess the impact that has on applications. That's something we'll also be applying within Cardano: you can use it to model the effect of the global distribution of nodes even within a single Amazon zone, so that we can run tests more easily, and certainly experiment with questions like: what is the impact if some route between one part of the world and another suddenly goes down? What if we have a separate private connection available to us? There's a whole series of scenarios we can then explore to see how the operation of the Cardano protocols, and the whole system we're developing, can exploit or react to different circumstances in the network. We'll be using all of that capability within the project.

Okay, so we've talked a lot about DeltaQ as a network measure, and Neil has talked about how it relates to the delivery of a specific application outcome, in his case the diffusion of a block around a global network. This still leaves open the question of how, for any particular application, we might figure out what the acceptable region of operation is. Right at the beginning, Pete Cladingbowl put up what was, in our sense, a metaphorical picture of boundaries of acceptable temperature and humidity; by analogy with that, for any particular application there are boundaries of acceptable loss and delay, and it's clear they must always exist for any application you choose to pick. I maintain that I could find a network performance bad enough that the application will cease to function; if anyone wants to take up a bet on that, we'll talk about it later. So there's always going to be some bound, and the question is what it is and how we can find out, because if we know that, it feeds into this notion of QTAs: we can start coming up with real-time contracts about what we need the network to do in order to be confident that our application stands a very high chance of performing acceptably. That's what we'd like to do.

There are a number of ways of doing that, and Neil was alluding to what we're doing in the Cardano project with process algebras and so forth. That's a kind of bottom-up, constructive way of saying: for this application, which we know in great detail, have all the source code for, and have an algebraic model of, we can actually calculate how the DeltaQ of the network part interacts with all the processing and how that affects the ultimate outcome. In many cases we may not be in a position to do that. The application may be something we didn't write and don't control, but maybe we can do some testing of it. So we did a project called Overture, partly funded by Innovate UK (thank you very much, UK government), and what that has produced is a test bed for doing this kind of analysis. I'm just going to talk about it quickly.
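To make the QTA idea a little more concrete, here is a minimal sketch (not the Cardano process-algebra machinery, just an illustration in Python) of treating DeltaQ as a loss probability plus a delay distribution, composing two stages in sequence, and checking a QTA-style bound such as "95% of outcomes within half a second". All the class names, distributions and figures here are illustrative assumptions, not anything from the talk.

```python
import random

class DeltaQ:
    """Toy DeltaQ: a loss probability plus a sampler for delay when not lost."""
    def __init__(self, loss_prob, delay_sampler):
        self.loss_prob = loss_prob
        self.delay_sampler = delay_sampler

    def sample(self):
        """Return a delay in seconds, or None if this attempt is 'lost'."""
        if random.random() < self.loss_prob:
            return None
        return self.delay_sampler()

def compose(*stages):
    """Sequential composition: the outcome fails if any stage loses; delays add."""
    def sampler():
        total = 0.0
        for stage in stages:
            d = stage.sample()
            if d is None:
                return None
            total += d
        return total
    return sampler

def meets_qta(sampler, bound_s, quantile, trials=100_000):
    """Does the composed outcome complete within bound_s at least `quantile` of the time?"""
    ok = 0
    for _ in range(trials):
        d = sampler()
        if d is not None and d <= bound_s:
            ok += 1
    return ok / trials >= quantile

# Illustrative example: one network hop plus server processing,
# checked against a QTA of "95% of outcomes within 0.5 seconds".
network = DeltaQ(loss_prob=0.01, delay_sampler=lambda: random.uniform(0.02, 0.20))
server  = DeltaQ(loss_prob=0.00, delay_sampler=lambda: random.expovariate(1 / 0.05))
print(meets_qta(compose(network, server), bound_s=0.5, quantile=0.95))
```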
Okay, so I think we've heard all this already, but just to recap: applications are distributed computations. If you're not just running something locally on your own PC or phone, it involves exchanging information across the network. The components of the application, typically a client and a server, though there could be more pieces, have to exchange information. All information is delayed by the network; this is the notion Martin talked about at the beginning, network friction. We can't eliminate it altogether; at the very least we have to be concerned about the speed of light. So delay is the price we pay for being distributed. That's the trade-off. We decided it was nice to build these applications with one part in the data centre over here and another part on a phone or a PC over there, and there are lots of very good reasons for doing things that way, we get economies of scale and so forth, but we do pay the price in delay.

Loss is the price we pay for using statistically shared infrastructure. We moved away from the days when anybody who wanted to move information between parts of an application bought a circuit. Circuits were nice, their performance was extremely predictable, and they cost the earth. The whole invention of packet-switched networks, frame relay, that whole evolution, has been about finding clever ways to share the underlying infrastructure between lots of applications and lots of users, which makes it cheaper for everybody. We're not trying to centrally orchestrate every movement of every bit of information; we just let stuff onto the network and it fights it out. Most of the time that works, and occasionally you get unlucky. As Martin said at the beginning, these networks are a kind of game of chance; it's a statistically shared system, sometimes you lose that bet, and then you might get some packet loss.

The characteristics of that delay and loss affect the application performance, and when you think about it, that's the only thing that affects the application performance. From the point of view of the application, it puts packets into the network and knows nothing about what happens next; some time later they pop out at the other part of the application, or they get lost in between. That's all it is. To an application, a network is a damn thing which delays its packets and loses some of them. That's what it is, that's the thing that impacts its performance, and we've just been talking about that in relation to an aspect of the way blockchains work.

The user experience is becoming more and more dependent on network quality characteristics, as we were saying at the beginning. The issues of capacity, as Gavin also said, are increasingly solved; the issues of quality are increasingly not, and that's partly because, in building these networks, we've made all kinds of engineering trade-offs to get more capacity, and in many cases we've sacrificed quality to get it. That's something we need to be concerned about. Network capacity is no longer the limiting factor for many applications. Actually, from the point of view of DeltaQ, you may think we've forgotten capacity altogether. No: capacity is a factor. Capacity is the level of load I can apply before my DeltaQ gets really, really bad. If I have a capacity of 10 megabits and I put in 12 megabits of traffic, I will lose about a sixth of my traffic (2 of the 12 megabits); my DeltaQ will be really bad, but that's what capacity is.
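As a tiny worked version of that capacity point: once the offered load exceeds capacity in steady state, the fraction of offered traffic that must be dropped is (offered − capacity) / offered, so 12 megabits offered to a 10 megabit link loses about a sixth of the traffic. A sketch, with the talk's numbers:

```python
def overload_loss_fraction(offered_mbps, capacity_mbps):
    """Steady-state fraction of offered traffic that must be dropped once the
    offered load exceeds capacity (ignoring any buffering of the excess)."""
    if offered_mbps <= capacity_mbps:
        return 0.0
    return (offered_mbps - capacity_mbps) / offered_mbps

# The example from the talk: 12 Mbit/s offered to a 10 Mbit/s link.
print(overload_loss_fraction(12, 10))  # ~0.167, i.e. roughly a sixth of the traffic
```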
Here's the test bed. Basically what we're doing is emulating the performance of a network. There are lots of network emulators out there; the difference with this one is that it actually emulates DeltaQ. Remember we talked about DeltaQ being the probability distribution of delay and loss: we can emulate that, and we emulate it separately in the upstream and downstream directions, between the server and the client. Then we can assess the application outcome. Along the way we can also measure the load being applied. We haven't talked a lot about load and resource consumption, but this is actually the other side of the equation: to build networks that are efficient and reliable, you need some understanding of the load you have to deal with. Nobody can deliver good quality under an unbounded load; you have to know what the bound on the load is. This is something we could do: Gavin told us earlier that there's a whole raft of people out there developing automated metrics for quality of video and all kinds of things. You could plug one of those in here and say, right, I'll now twiddle my DeltaQ knob (it's actually a two-dimensional, indeed multi-dimensional, knob, so it's not quite that simple), use my hopefully automated measure of the application outcome, and find what the predictable region of operation is, the boundaries of what I can accept. Now I know what the network needs to deliver to make my application work well. So far so good. We can take this further. Oh, sorry, reproduced; yes, okay, I've said all that.

Right. So does this work? Well, here's an example, here's one we prepared earlier. Good old speed test, right? Your measure of gross network performance is a speed test. Here was a test we did on a VDSL line, also going over Wi-Fi; you can see the results. That wasn't bad. Here's a DSL line to the same location. This happens to be PNSol headquarters in deepest Somerset, which takes whatever connectivity it can get, so as well as the VDSL it has a DSL line. Obviously you can see that the performance is less: the speed is lower, the ping time is a bit worse. So what we then did is take the VDSL line, attach our test bed to it, and configure it to emulate the DSL line. The speed-test server is still up in the network, and those are the results we get. We're actually not doing a bad job of emulating it. That emulation was based on our understanding of how DSL works, how the framing operates, what kind of capacity limits there are and so forth; we implemented a constructed model of how we expected it to behave.

Here's an example of some fine-grained behaviour. We saw some pictures like this before; this is the kind of behaviour you can see on a network. We're now at a much finer level of detail: this is in terms of seconds of time, and this is in terms of seconds of delay. That's the kind of thing we sometimes measure. We can again come up with some theories about what's going on there, construct a model that reproduces that kind of behaviour, run the traffic through it, and we get this. So this is just to say that this is a system which is sufficiently flexible that it can reproduce a whole range of behaviours that we actually see in networks. It's realistic from that point of view.
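The Overture test bed itself isn't shown here, but the core idea of imposing a separately configured DeltaQ in each direction can be sketched roughly as below. The loss probabilities, delay samples and serialisation rates are made-up placeholders; in practice the distributions would come from a constructed model (for instance of DSL framing) or from measurement.

```python
import random

def make_delta_q(loss_prob, delay_samples, rate_bps):
    """Per-packet DeltaQ for one direction: an empirical delay distribution,
    a loss probability, and a serialisation rate. The delay samples (seconds)
    are placeholders standing in for a constructed or measured model."""
    def apply(packet_size_bytes):
        if random.random() < loss_prob:
            return None                                   # packet lost
        queueing = random.choice(delay_samples)           # drawn from the distribution
        serialisation = packet_size_bytes * 8 / rate_bps  # size-dependent component
        return queueing + serialisation
    return apply

# Separately configured upstream and downstream DeltaQ, as in the test bed description.
upstream   = make_delta_q(0.002, [0.012, 0.015, 0.030, 0.045], rate_bps=1_000_000)
downstream = make_delta_q(0.010, [0.020, 0.025, 0.060, 0.120], rate_bps=8_000_000)

def round_trip(request_bytes=200, response_bytes=1500):
    """One request/response exchange through the emulated link; None means it failed."""
    up = upstream(request_bytes)
    if up is None:
        return None
    down = downstream(response_bytes)
    if down is None:
        return None
    return up + down

print(round_trip())
```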
What we can also do is run two of these at the same time against the same server. We connect two identical clients via different network connections and test them against the same server at the same time. All the questions about server load and that kind of thing are factored out, because we're doing contemporaneous testing. Then we compare our application outcomes via some parameter; if necessary we can do this subjectively, you know, find a bunch of students, sit them in front of it and ask, how good was that for you? If the server is an issue, it will show up in both clients; if the network is the issue, we'll see it in one rather than the other. So this gives us another way to assess the impact of the network on the application performance.

We can go further. Actually, we already saw this example: the speed test is exactly one of these. The speed-test server is up in the network, with a whole internet between us and it. What we can do, and we've talked about this extensively, is measure DeltaQ. We can measure the DeltaQ between here and there, and we know how much we're adding here, so even though the server is off across the famous cloud, we can still know what the end-to-end DeltaQ was, see what the application outcome is, and still make this assessment of the predictable region of operation for that application. We can go even further. This is all just software, so we could push the whole thing up into the cloud and put it next to the application server. Now the DeltaQ we're adding is here, the one we're measuring is here, and we still know what the end-to-end DeltaQ is.

So what is this useful for? For an application developer, it gives a way of taking a product out of the lab without leaving the lab. A lot of applications are developed in a lab environment, and they work great. Then you take them out into the field, and instead of being connected by Gigabit Ethernet across distances of tens of metres, they're connected by DSL or 4G or other kinds of soggy bits of string, and they may not work. In fact, there have been quite expensive instances where money has been spent developing online gaming services and so forth which all looked great until they were rolled out, and then didn't work. You can look at tweaking your application protocols: application developers have all kinds of fancy ways of overcoming network latency and hiding packet loss, and the question is which one they should deploy, which will work best in different conditions. We can test to destruction, in the sense of finding what the limits are. We can use this to see what kinds of hazards are exposed by poor performance in the network; for example, if you have security protocols and the network being bad forces things to be retransmitted, a badly designed security protocol may expose information and reduce the security of the application. And you could even set one of these up to do a kind of performance regression testing: you've built the thing, everything works fine, and now you have a new release of your software. Is it still going to work fine? Well, test it in the lab first, before you push the patch out. Wouldn't that be a good idea? So other people might use this too.
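The "we add some DeltaQ here and measure the rest of the path to the server" step composes in a straightforward way: loss probabilities combine multiplicatively and delays add, so the end-to-end delay distribution is the convolution of the two parts. A rough Monte Carlo sketch, with invented numbers rather than anything measured in the talk:

```python
import numpy as np

def combine_delta_q(loss_a, delays_a, loss_b, delays_b, n=100_000):
    """End-to-end DeltaQ of two independent stages: the measured path out to the
    server plus the DeltaQ we add locally. Losses combine as 1 - (1-a)(1-b);
    delays add, approximated here by Monte Carlo rather than exact convolution."""
    loss = 1 - (1 - loss_a) * (1 - loss_b)
    delays = np.random.choice(delays_a, n) + np.random.choice(delays_b, n)
    return loss, delays

# Invented figures: a measured path to the speed-test server, plus the
# DSL-like DeltaQ the test bed adds in front of it.
measured = np.array([0.018, 0.022, 0.025, 0.040])   # seconds
added    = np.array([0.015, 0.020, 0.035, 0.090])
loss, e2e = combine_delta_q(0.001, measured, 0.010, added)
print(loss, np.percentile(e2e, 95))
```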
You could have service providers who have particular applications they're interested in either delivering themselves or supporting for their consumer users or their enterprise customers, and who test in the lab what their network needs to do to keep those people happy. You could even, with appropriate permission, use the cloudified version of the test bed to do a Google on this and actually run A/B testing between different groups of users and see what people think of it. We've heard a lot about edge computing, MEC and so forth, so you could start to explore the benefit of that from the point of view of application performance. You could look at how the application performs when the server is virtually located in the data centre, then reduce the DeltaQ to correspond to moving it closer to the endpoint, and ask how much benefit that delivers. And when we're talking about AR and VR applications, which we had some discussion of at the beginning, this kind of thing starts to become very significant. Even regulators, not represented in the room today, though they have been at some of Martin's previous seminars, could start exploring here. We could have a rational discussion about this whole question of net neutrality, which is subject to rather irrational discussion most of the time: look at the impact on different users and the effect of different kinds of policies that might be applied, and ask whether they are actually a bad thing. Okay, that was it. Any questions?
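For the edge computing question, the experiment amounts to sweeping the DeltaQ that corresponds to server placement and re-evaluating the application outcome at each setting. A toy sketch, where the outcome measure, the round-trip count and the latency budget are all assumptions made purely for illustration:

```python
import random

def outcome_ok_rate(base_rtt_s, trials=10_000, budget_s=0.100):
    """Crude stand-in for an application-outcome measure: the fraction of
    interactions that finish a three-round-trip exchange within 100 ms."""
    ok = 0
    for _ in range(trials):
        total = sum(base_rtt_s + random.expovariate(1 / 0.005) for _ in range(3))
        if total <= budget_s:
            ok += 1
    return ok / trials

# Sweep the DeltaQ corresponding to where the server sits.
for label, rtt in [("central data centre", 0.030), ("regional site", 0.015), ("edge / MEC", 0.004)]:
    print(label, outcome_ok_rate(rtt))
```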