Okay, the next style, the second to last we're going into, is the client-server style. As with the layered architecture, this is an extremely common style on the internet, so you should all have seen it before, or at least heard the terms. In the client-server style you have some kind of network, for example the internet; you have a number of clients that would like to access a service of some sort, all connected to that network; and you have a number of servers that provide that functionality. Note the terms: not "service" but "server", because the server serves something to the clients.

This is relevant when you have, for example, a central functionality that everyone would like to access, such as a database. In the coursebook there is a database of images, a database of videos, and another server that catalogs and categorizes everything; the clients might access pictures or videos, or just want to find certain things, and the servers are the central entity that provides all of that. Because of the network, we also get distributed access: as long as clients are connected to the network, they can reach any of these functionalities. Also very useful is the possibility to vary the load. If we assume that all these servers provide the same data, we can divide the load by three: if they all store the same pictures and we have too many clients, then server two simply takes over part of the load from server one, and the load gets distributed overall. That is one of the reasons why the internet scales so well: you can simply add servers that provide the same functionality. There are, of course, a number of difficult things here, and one of them is this black box in the middle.
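To make the style concrete, here is a minimal sketch of a client and a server using Python's standard sockets. The toy catalog, the lookup protocol, and all names are invented for illustration; a real server would handle many clients concurrently and speak a proper protocol, but the division of roles is the same: the server holds the central data, and any client on the network can connect and ask for it.

```python
import socket
import threading

# Invented central data that clients want to look up.
CATALOG = {"cat.jpg": "images", "intro.mp4": "videos"}

def serve(sock):
    # Server loop: accept one connection at a time (enough for a sketch),
    # read a request, and answer from the central catalog.
    while True:
        conn, _ = sock.accept()
        with conn:
            request = conn.recv(1024).decode().strip()
            reply = CATALOG.get(request, "not found")
            conn.sendall(reply.encode())

def start_server():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    sock.listen()
    threading.Thread(target=serve, args=(sock,), daemon=True).start()
    return sock.getsockname()[1]  # the port clients should connect to

def client_lookup(port, name):
    # A client: connect over the network, send a request, read the reply.
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(name.encode())
        return conn.recv(1024).decode()

port = start_server()
print(client_lookup(port, "cat.jpg"))    # -> images
print(client_lookup(port, "intro.mp4"))  # -> videos
```

Note that the client only needs the server's address; it knows nothing about how the data is stored, which is exactly the separation the style is after.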
We have something that just says "network", and in practice it is really hard to predict what the performance will be. So it is very hard to predict what the overall load will look like, or what the overall performance of the system is. And if you have servers with different capabilities that you would like to use in the same system, it can get quite complicated how the different accesses work and how they affect performance. So prediction of performance is hard.

The other thing: imagine server one, for example, is the only one that provides videos. Then you again have a single point of failure. If that server fails, the videos are gone, nothing works anymore. So that is an issue. And finally, there is a big question mark over how you actually manage all of this: How many servers should you have? How do you access all of them? How do the clients, or the system, know which server to connect to? How do you deal with servers crashing and new ones coming in? So it is not that easy to actually manage this in practice. But again, this is an extremely successful style, and the internet shows that it is good practice to use it; for the reasons mentioned, it works very well.