Welcome to Agile Roots 2010, sponsored by VersionOne, Rally Software, Vario, Amirisys, Agile Alliance, and XMission Internet. Test-Driven Design: Coupling Loosely, by Arlo Belshee.

Okay, I'm going to go ahead and get started here. My talk today is fairly code-centric, talking about good design. This is the first in a sequence of talks that I'm going to build up on some of the good design stuff. I'm going to be showing a fair amount of code over the course of this thing, and I'm going to make some assumptions about the audience understanding code. I'm going to present multiple languages. Fortunately, I'm arranging these things as a sequence of lightning talks, so if you don't care about a given topic, or you don't understand the language the code is written in, wait three minutes and the weather will change.

First is just some framing discussion. Good design, and this dates back forever, consists of loose coupling and tight cohesion. Cohesion means that a given piece of code does one conceptual thing. If you've got good cohesion, then everything related to addresses in your system is sitting in the address area, tightly cohesive there: displaying to screen, checking whether it's a valid address at the post office, that whole logic is all together. Loose coupling, the goal on the other axis, means that those entities are loosely tied to each other. When I'm working with addresses, the association from an address to contact information or to a human is very loosely coupled. I can work purely with the address without thinking about humans or contact information, and vice versa. That's the goal. In XP, loose coupling is known as unit test-driven development: not end-to-end, not multifunctional, but test-driven development focusing on the units.
That forces loose coupling. And tight cohesion is known in XP as metaphor. I'll be talking here about loose coupling; if you're interested in metaphor, come back next year.

So, coupling is all about indirection, which is the second topic. There are a bunch of different ways to do indirection, and there's the famous quote you can read there: every problem in computer science can be solved by adding a level of indirection. That raises some questions. When we have a problem, we're most often going to solve it with indirection, especially if it's coupling-related, but we need to know: what are we going to indirect on? Is the coupling at the type-system level, where we've got types that are tightly coupled? Do we have algorithmic coupling, where a whole algorithm can only be represented as a single chunk and we can't break out its elements? Do we have data-level coupling? Network-level coupling? All sorts of things can be coupled together.

Similarly, how much indirection do we need? You can have two things that are extremely tightly coupled, like a class collaborating with another class by directly calling a method and getting a result back; that's a very tight coupling relationship. You can slightly indirect that, maybe put an interface there so you can substitute in another class. I can reduce the coupling further: instead of having a class that I call a method on, pass in a function and just call that function. Now that function could be a free function, or a method on some class; that's slightly more loosely coupled. I can go further than that: I could have an event-based interface where I just publish to a bunch of subscribers and I don't even know whether anyone's listening. So there are many different levels of coupling, all the way up to double blind, where no one knows who's there. The problem with loose coupling is that it makes code hard to read in a literate style. You can't simply start at the top of a method and read it.
You can read in a breadth-first style, but depth-first traversals don't work so well anymore. So the one problem that can't be solved with another level of indirection is too much indirection. When we're looking at these things, that's the balance in the trade-off we're going to make: we want to put in enough loose coupling to let us get the job done, but we want to keep things tightly coupled enough that we can trace back and forth.

For example, when we're talking about addresses and users, I want the address and the contact information blob decoupled, but I don't want them horrendously decoupled. The contact information should probably know that it's speaking to addresses; the addresses probably shouldn't know that they're included in contact information. From the contact information, I should be able to follow methods into the address. Whereas when I'm talking about logins and roles, I actually want those fairly decoupled. There are very few places in the system where I want to take a user and identify what roles they have. Most often what I want to ask is: does this authenticated user have permission to take this action? I don't need to know that there are roles involved. I want that to be completely invisible, so I want to decouple it pretty significantly.

So now, into the rest of the talks. I'm going to jump around this list. I have a big giant catalog that I'm putting together of all sorts of different types of decoupling. One of those, right here, is mock objects. We'll talk about that one first, because it's the one people reach for first. I've got these grouped by what you're indirecting on, with some notes (these are still rough notes): how much does it decouple? How much does it impair legibility?
Those are the two axes you want to keep in mind when you're making trade-offs and deciding what to use. My goal for the rest of the talk is mostly to present a bunch of different techniques, come up with a couple of summary points, and give you information you can use when you get to a coding situation to figure out which technique to apply where.

So I'm going to start with mocks, and I've got some code. Mocks are extremely useful when you've got a legacy code system. The code was written without any idea of good design; it's tightly coupled. It also works. The most important feature of legacy code, the one everyone forgets, is that it works: it's delivering value right now. What we want to do is decouple something for the purpose of testing, to enable testing, but without actually changing the code at all. That's where mocks are really useful. There are actually two types of mocks. The ones I'm talking about here are code-level, object-level mocks. The other type mocks out an entire subsystem: I could make a fake persistence layer that does everything in memory instead of going to a database, I could make a fake user that replaces the UI with a script, and so on. Here I'm talking about the object-level ones.

This is an example where, basically, you create a mockery and mock things out. In this case I'm making a fake network and a fake file so that I can test my download function. I say how a file should respond to various things, what I expect to have called on it, what responses it should give, and similarly for a network. Then I call my function, and there's a function on the mockery that says, basically, everything happened according to my script. That's a common style of mock object testing. The yellow lines highlight the key aspects: that's the assertion that's actually happening. The purpose of this test is to check that dispose happened.
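The slide's code isn't in the transcript, but a test in that style might look like the following minimal Python sketch using `unittest.mock`. The `download` function and its collaborators are invented here for illustration:

```python
from unittest.mock import Mock

def download(network, file):
    # Hypothetical code under test: stream chunks from the network
    # into a file, disposing of the file handle no matter what.
    try:
        for chunk in network.read_chunks():
            file.write(chunk)
    finally:
        file.dispose()

# Script the collaborators: a fake network and a fake file.
network = Mock()
network.read_chunks.return_value = [b"hello", b"world"]
file = Mock()

download(network, file)

# The one assertion this test actually cares about:
file.dispose.assert_called_once()
```

Note that nothing about `download` had to change to make it testable; the fakes are substituted from the outside.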
That's one of the problems with mock object testing: because you're laying out a script, it can often be difficult to identify which line is the actual assertion this test cares about. The other problem, which is more and more significant to me, especially in this conversation, is that the purpose of mocks is to let you decouple a design without changing the code under test; in other words, without decoupling the design. So they're really, really good for getting something under control. But if you use them as a practice in perpetuity, and they're the only style of decoupling you use, you will end up with a tightly coupled, poorly designed system that has tests around it. It's still a legacy system. It's still hard to change. So use mocks, use them frequently, and then stop using them: put tests in that use mocks, then refactor the code under test to use a different style of decoupling, and get rid of the mocks. So you can see mocking has no impact on decoupling, but it leaves the legibility of the code unchanged. That's its great value.

So I'm going to start with algorithms. In many of the places people are using mocks, you've got a chunk of code like that download function, and you want to check how it behaves with its collaborators and all sorts of things. Here's a particular example: switch statements, big, ugly, horrible things to test. I've got a method that's based strongly on some I've seen at my current company. It's in a different language than the stuff I've seen there, but other than that it's about the same. This function, like many, consists of a big switch statement, a single conditional, and then another big ass switch statement. It's pretty hard to tell what the heck that thing is doing, compared to this function, where it's pretty easy to see what it's doing. All I've done here is swap out the switch statements for strategies: the strategy pattern, straight out of Gang of Four.
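The before-and-after slides aren't reproduced in the transcript; the shape of the refactoring, sketched in Python with invented branch names, is roughly this:

```python
# Hypothetical before/after: a switch on an order's status becomes a
# family of strategy objects, one per branch of the former switch.

class PendingStrategy:
    def label(self, order):
        return f"Pending: {order['id']}"

class ShippedStrategy:
    def label(self, order):
        return f"Shipped: {order['id']}"

# One strategy instance for every branch of the switch statement.
STRATEGIES = {
    "pending": PendingStrategy(),
    "shipped": ShippedStrategy(),
}

def describe(order):
    # What used to be a switch is now a single dynamic (virtual) call.
    return STRATEGIES[order["status"]].label(order)
```

Same semantics as the switch, but each branch is now its own small, separately testable object.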
Basically, I'm making one instance of the strategy for every branch of the switch statement, and I've changed from doing a switch to doing a virtual method call to a family of methods. That virtual method call has exactly the same semantics as a switch, but it looks different, extends differently, decouples things differently, and is more difficult to read in some ways. It makes this code very easy to read, but when you're debugging, tell me what code is going to be called. With the switch statement that was very easy; with this one you've lost the ability to trivially statically analyze your code. You get a single-blind dispatch here: the caller end of the strategy doesn't know which strategy is going to be called, but the strategy typically knows where it was called from, where it's intended to be used from. So it impairs static analysis, but it does give you some decoupling.

Another common problem: often what you're trying to decouple is the code for doing an operation from the code for handling errors in that operation. There are a couple of solutions to that. One is to decouple the various bits of code. The other is to eliminate the error case, and that option is all too often overlooked. A convenient way to do that, which comes from functional programming and shows up a lot in jQuery (which is why this example is in JavaScript), is to use chainable set-based functions. These are functions that take a set as their first argument and return a set, and they don't know or care whether that set is empty. So instead of having an iteration, which then has to handle the error case of what happens when the operation doesn't apply to a particular element, you just pass around the set of things it does apply to. As you go through the sequence, that set may change. For example, the first call of this is pretty simple to understand.
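The slide's code isn't in the transcript; here is a hypothetical Python rendering of the two jQuery-style chains being described. The `Selection` class and the game details are all invented:

```python
from dataclasses import dataclass, field

@dataclass
class Avatar:
    kind: str
    tags: list = field(default_factory=list)

class Selection:
    """A jQuery-ish chainable set: every method returns a set,
    and none of them care whether that set is empty."""
    def __init__(self, items, parent=None):
        self.items = list(items)
        self.parent = parent

    def filter(self, predicate):
        # Narrow the set; remember the parent so end() can widen again.
        return Selection((i for i in self.items if predicate(i)), self)

    def each(self, action):
        for item in self.items:
            action(item)
        return self

    def end(self):
        # Pop back to the set as it was before the last filter().
        return self.parent if self.parent is not None else self

avatars = [Avatar("player"), Avatar("monster"), Avatar("scenery")]

# The simple first call: do one thing to every avatar on the page.
Selection(avatars).each(lambda a: a.tags.append("visible"))

# The more complicated second one: narrow to players, widen, narrow
# to monsters, widen, then act on the whole set. No error cases.
(Selection(avatars)
    .filter(lambda a: a.kind == "player")
        .each(lambda a: a.tags.append("controller")).end()
    .filter(lambda a: a.kind == "monster")
        .each(lambda a: a.tags.append("ai")).end()
    .each(lambda a: a.tags.append("in-game")))
```

Binding a controller to a monster would be invalid, but the code never has to say so: the filter simply never hands a monster to that step.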
The second one is actually much more complicated. This is finding all of the avatar objects on my page, then filtering that set to only those that are players and doing some operation on the players; then removing that filter, filtering the set to only those that are monsters in this made-up game, and doing something else to the monsters; then removing that filter, going back to the entire set, and initializing all of them: go into the game. Once you've programmed this way for a little while, it actually ends up being reasonably readable. There are no error cases anywhere here. Even though I initially selected a set of players, monsters, and maybe some other things, and I cannot bind a player controller to a monster, I don't have to worry about that condition. I can just eliminate it from the code.

The key idea here is: use sets frequently. Only use a scalar-valued variable when you know you have exactly one item. If you have possibly zero or one items, that's a set; you might add a constraint to it that ensures you never have more than one, but don't use a scalar. Doing that eliminates a lot of nulls from your system. If you don't have nulls, you don't need to check for nulls, and there goes a whole bunch of code that you don't need to worry about, don't need to test, and don't need to decouple. Also, scalar variables are often difficult to chain, because usually a function will take a scalar and return a scalar of a different type; it transforms. Sets can be much simpler to chain: I've just got a set of things, and I pass it along.

So, query over objects. This is what LINQ is based on, and also SQLAlchemy. I'm popping over to data access here temporarily; the reason I'm talking about this now is that query over objects is based on functions that take sets and return sets. There's a set of transforms there, pre-defined functions like select.
Select: given a set of foo and a lambda function that transforms an element, give back the set of those transformed elements. Where: filter a set of things. And so on. The nice thing about query over objects is that these functions are designed to work on sets of things, and they don't care what that set of things is or where it's stored. You have a vector of a bunch of them in memory? You can filter it. You have a database table? You can filter it. And the systems are smart enough that if you have a database table and you filter it, what actually happens is that it dynamically creates some SQL, executes that against the database, and only loads into memory the rows which pass the filter. The nice thing about that is I can have a function like this one, which checks a permission. I define it, and I write my unit tests by putting stuff in memory; that in-memory repository is basically a hash table. I test it in memory. My actual code hooks it up to a database, and it all works, because the framework has guaranteed that where will behave the same whether it's done in memory or against a database, and that select will behave the same whether it's done in memory or against a database. The fact that in one case it's implemented in SQL and in the other it's implemented as a loop is completely hidden from me. I have decoupled whether I'm persisted from the rest of the system entirely, and I can test those independently.

Now I'm going to pop back to more algorithmic ones: functions. This is another good way to decouple. As I mentioned earlier, calling a method on an object instance is a fairly tight coupling. When my contact information knows it's doing something to an address, it's actually got multiple levels of binding: a binding on the object, and a binding on the method on that object. I can decouple one level by passing in a function.
Then it doesn't know it's talking to an address object; it just knows it has this function. It's an oracle: give it some data, get back some data. The next level up from that is to add currying. Currying comes from functional programming; it's a way of transforming a function into another function. In this case, on my model I've got a method that says how to create an authentication object from a username and password. I've got another method on my authentication that says how to create an authentication object representing a trusted connection, a database trusted connection that will use whatever the currently logged-on Windows user is. They have different signatures: one takes two strings, a username and password; the other takes no arguments. What I'd really like is to not care which one I'm calling at the time.

This is an example where what I'm writing is a dialog box, a lot like the one in Visual Studio for when you're going to generate a bunch of code. It needs to connect to the database, and it asks you for information: how do I connect to a SQL Server database? There's a radio button that says use the Windows trusted connection, or use username and password, and it enables or disables a couple of fields and all sorts of magic, and you hit OK. Now, the logic for what happens when I hit the test connection button or the OK button, the logic that's going to build a connection string, doesn't care and shouldn't care which style of authentication I'm using with the database. It should be able to just call a make-me-the-authentication-that-matches-what-the-user-selected function and get back an authentication. So what I'm doing here is using currying, implemented in C# with lambda functions. I'm going to treat trusted connection as a function that takes no arguments and returns an authentication.
I'm going to make the password connection look the same way. In this lambda function, I'm going to say: whenever you call it, it takes no args, and what it does is call password connection, and at that time it looks up what values are in those fields, those now-enabled dialog box fields, and passes them in. Then I bind those functions to the radio button, and I have another function further on that will create the connection string by calling them, and so on. That means that when someone clicks the OK button, I don't know what code is called, and I'm fully decoupled; I don't actually care what code is called. At the moment they hit OK, it looks at the radio button and asks: what function is bound to the currently selected item? Call that. If that happens to be the one building a password-based authentication, it looks up the correct values in the dialog box and generates it. If not, it makes a trusted connection and goes on with life. So now my OK button and my test button are completely decoupled from what's currently selected in the radio group. I can test those totally separately, and I've split off the UI without having to mock out a UI at all; I just bind to it differently.

Closures are another thing that comes out of functional programming. A lot of what I'm doing in this section is talking about ways to use functional programming to replace OO, because a lot of the OO constructs are more tightly coupled than a lot of the functional constructs. Now, I still write mostly OO systems; my wife writes far more functional programming systems than I do. So I don't use these everywhere, but they're great around the boundaries. So, closures. A closure is fairly simple.
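The C# from the slide isn't reproduced in the transcript, but the same currying trick can be sketched in Python. All names here are invented:

```python
# Both ways of building an authentication are curried down to the same
# zero-argument shape, so the OK-button code never knows which style
# of authentication is selected.

def trusted_connection():
    return "Trusted_Connection=yes"

def password_connection(username, password):
    return f"UID={username};PWD={password}"

# Stand-in for the dialog's text fields; read at call time, not at
# definition time, just like the enabled dialog box fields.
fields = {"username": "arlo", "password": "s3cret"}

choices = {
    "trusted": trusted_connection,
    "password": lambda: password_connection(fields["username"],
                                            fields["password"]),
}

def build_connection_string(selected):
    # Neither knows nor cares which authentication style is bound;
    # it just calls whatever function the radio button selected.
    return "Server=db;" + choices[selected]()
```

Because the lambda reads `fields` only when called, whatever the user has typed by the moment they hit OK is what gets used.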
It means I'm going to define a function not just as a global function, or the way we do in OO where a class has a function, but within a context. In this case, I've got a function that takes some arguments and returns a function: when it's called, it defines a new function. That internally defined function is a closure. Its definition contains not only the arguments it takes, but all of the variables and values that were live at the time it was defined; it basically gloms on and grabs those. You can think of it as a little mini-object with one method named call, and a bunch of fields containing the values that were in scope when it was defined.

In this case I'm doing a little bit of Python meta-programming: I want to write a decorator so that I can mark any function and say, this is a cached computation. The first time it's called, you execute this code here, and then you store the result away (memoize it), and every subsequent call just returns the pre-computed value. So I'm defining cached computation. It takes a function as an argument, stores the function that will be done, and returns another function, which decides whether to call the stored function or use the value out of the dictionary, and does things appropriately. Then I just associate that with my code. There are a lot of levels of indirection in this particular piece of code. If you've done a fair amount of Python decorator stuff, you'll understand it; if you haven't, you probably won't, because it's fairly Python-specific. But the key idea is that the function variable I'm passing in here gets stored away, and I can use it later in my computation, even though by the time I come down here and actually make the calls, that function variable is gone.
So when I'm calling add_badly, I'm not passing in the function add_badly; rather, the closure has already stored it for me. This also shows up very often in JavaScript. You use it all the freaking time in JavaScript: you define a little function to be passed into map, and you take advantage of whatever variables happen to be in scope at the time, like the set of HTML tags you're going to operate over. This, again, makes it very simple to decouple an algorithm from the data it operates on, because I can have the algorithm be defined in a context which I can change. In my tests, I can provide one context by calling it one way, and it will be defined one way; in my real code, I can provide another context by calling it another way. I know it will operate and interact with that context in the same way.

Let's see what I've got here. An overview on the algorithmic stuff: algorithmic decoupling is really where functional programming is awesome. The closer you are to algorithms, and the further you are from data, the closer you should be to plain old functions and the further from objects, and vice versa. With all of these things, it does become more difficult to do static analysis. When you get to pure functional programming languages like Haskell, static analysis is extremely difficult: there's a massive type inference engine that does truly amazing magic and is able to figure out what method to call when, but mere humans are pretty much incapable of identifying what method is going to be called when in a Haskell program. In the middle ground, you can still do a fair amount of the static analysis you know and love, but you'll reach the boundaries when you hit a place you wanted to decouple, where, no, you don't know what the static type is that's going across this boundary. That's sort of the point. So: good when you want to decouple, but it can make debugging and testing much more difficult.
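The cached-computation decorator being described might look like this reconstruction; the exact slide code isn't in the transcript, and `add_badly` is here purely for illustration:

```python
def cached_computation(function):
    cache = {}
    def wrapper(*args):
        # `function` and `cache` are closed over: they live on in this
        # wrapper long after cached_computation has returned.
        if args not in cache:
            cache[args] = function(*args)   # first call: do the work
        return cache[args]                  # later calls: stored value
    return wrapper

calls = []

@cached_computation
def add_badly(a, b):
    # Pretend this is an expensive computation; `calls` just records
    # how many times the real work actually runs.
    calls.append((a, b))
    return a + b
```

Calling `add_badly(1, 2)` twice does the work once: the second call never reaches the function body, because the closure answers from its dictionary.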
One final thing I want to talk about: data access. I'm only going to cover two things here. We talked about query over objects. Query over objects is good; it helps you decouple the database from the rest of the system, but it's only half the problem, because in a typical ORM the class I'm going to persist, in this case User, has some knowledge of the persistence framework. It might inherit from it. It might have constrained construction requirements so that the system can substitute a proxy that inherits from the class every time you construct one. It might have save methods on it that you need to call. Whatever it is, it's got some knowledge of the framework. There are a couple of frameworks where that's not the case; right now, SQLAlchemy and LINQ to Entity Framework are the two main ones of which I'm aware. I think there are some in Ruby too, but I'm not aware of them because I don't spend enough time there.

The advantage of this is I can have test code that operates on users, and they're just normal users. I can test the password hashing functionality of the user without a database connection; I shouldn't need a database connection to test password hashing. Likewise, any of the other methods on users are just normal user methods; they behave as one would expect. However, I can then call my bind-to-database function, and now the class User behaves entirely differently: instances are automatically stored in the database. You instantiate one, it's put into the current active unit of work, it gets stored away, you can load them back out, and all that sort of stuff. That lets me separate the tests that are over database functionality, which go against the database (or an in-memory representation of the database, if I'm using the other pattern), from the tests that really aren't, which can just operate on normal users.
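A minimal sketch of the persistence-ignorant idea, not tied to any real ORM's API: the domain class knows nothing about any framework, so password hashing is testable with no database anywhere in sight, and an in-memory repository (basically a hash table) honors the same where-style contract a database-backed one would:

```python
import hashlib

class User:
    """Plain domain object: no base class, no save method, no
    knowledge of any persistence framework."""
    def __init__(self, name, password):
        self.name = name
        self.password_hash = hashlib.sha256(password.encode()).hexdigest()

    def check_password(self, password):
        return self.password_hash == \
            hashlib.sha256(password.encode()).hexdigest()

class InMemoryRepository:
    """Stands in for the database in unit tests."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def where(self, predicate):
        # Same contract a database-backed repository would honor;
        # there it would become SQL, here it's just a loop.
        return [i for i in self._items if predicate(i)]

repo = InMemoryRepository()
repo.add(User("arlo", "correct horse"))
found = repo.where(lambda u: u.name == "arlo")
```

Binding `User` to a real database would be a separate step, taken only by the code that actually needs persistence.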
So, yeah, pretty much did that. The data access layer accounts for a tremendous number of the mocks I see out there, and I'm talking about the mocks that actually stay around over time. Many times, people have two normal objects: they'll start with a mock, and eventually they'll refactor and get rid of it, or sometimes they keep it, whatever. But when it comes to the data access layer, people often don't have a better approach. That's why this couple of patterns is really, really key. It eliminates all the dependency on the database. In fact, if you're using Entity Framework 4 and its POCO support, you can have the assembly that does all of your work carry no dependency on, no reference even to, the Entity Framework, and have only your final deployed website reference Entity Framework and worry about persistence. That means all of your tests are completely unaware that a database might even at some point be connected to this by some different assembly. So, fortunately, decoupling there is fairly easy as long as you have a library or some other patterns to help you.

The other aspect of this is that database transactions are too low-level. I'm not going to go into the replacement for that much, but in typical work, a website for example, a particular request comes in and actually performs a sequence of operations that may need to be in multiple transactions, and yet you still want that whole lump to be transactional at the business-rule level. If I get a request in and I make a note that the user performed this request, I put that audit in there, and then later I find that the request was invalid, then depending on the semantics of my operation I may want to revert everything, including that note, and just pretend the user never made this request.
Or we're going to give them another operation, an option, to clean it up. For that, unit of work is a business-level transactional pattern. Highly recommended. If you still have coupling at the database layer, you can fix that; it's a solved problem these days. Go look online for some solutions. The other nice thing is that if you use all of these things together, and you're in a compiled language, C# especially, then all the queries you make can be type-checked by the compiler. That means that when you make a schema change, you do a migration, and your compiler tells you every piece of code you need to update for that schema change. They all just become compiler errors, and you fix them.

Next I'm going to go to the web, partly because the web is where I play today, and partly because the web is a place that people are thinking about, I won't say wrongly, but not in any complete fashion. The first thing I'll talk about here is RESTful requests. Consider HTTP as a programming language. This is the code for a sequence of requests from client to server. I was trying to figure out how to format this, because no syntax highlighter out there knows how to format HTTP; people don't think of it as a programming language yet. But it is, and that's the purpose of RESTful URIs: you define a resource with a name. In this case (this is a banking application), the resource is an account transfer transaction, for example. And that resource has a set of operations. HTTP gives you your choice of eight operations: the four CRUD ones, plus four meta-operations, HEAD and OPTIONS and such. From that, you can build an entire system that is a data-oriented view of the web. When you're using RESTful requests, what you've got is a model-view-controller design where the model is the server, the view is the web browser and the client, and the controller is HTTP: a set of URLs and responses.
This is just a long sequence where I'm creating a transaction, putting some values into it, deleting some values, modifying, blah, blah, blah, and saving; I just wanted to get an example up there. But you'll note that each of these URLs refers to a particular resource. In this case I'm going to POST to this ongoing transaction that I've got. I know I'm operating on transaction 234; the server told me so earlier in the sequence. So I POST to transaction 234: please add to it a deduction, or I guess in this case a payment to that account. I refer to the account by its URL, and the amount in some JSON. Eventually I post the whole thing as a complete transaction, and I can get the list of current transactions.

RESTful resources make writing websites a lot easier, because they let you apply MVC at a very critical point: you can now treat your entire browser client experience as a view and have it decoupled from the server. The server is just a model. Unfortunately, there's still a little muddying of the water, in that browsers are really dumb terminals, and you have to tell them how to do UI. So the server actually does two things. One, it serves up the commands to the browser on how to create the view. The other, independent thing it does is provide the data for that view. You can split those out, and then your data can become very, very simple to handle.

So now I'm going to go over to the web server. On the web server, MVVM is a pattern that comes from Microsoft. They're pushing it strongly with Silverlight and all sorts of things, whatever, but it's just a pattern, and a really useful one. It's actually also in Django, and in a couple of other Python web frameworks, and I think there's probably a Rails adaptation that uses it, though again I'm not too familiar with that. It shows up everywhere.
But the basic pattern is this. Instead of thinking of model-view-controller, where a request comes up to the server and goes to the controller, the controller grabs some stuff from the model and presents it to a template, and the template makes requests back into the model, accessing properties and the like, split it out so the request lifecycle is: the request goes to the server and activates stuff on the model; that's model. The model's response is to create, basically, structs that just hold the result data, and pass those back out. The request also determines, through mappings you set up, which view is going to handle it; that's the controller. The URL is the controller. Then that plain old struct gets passed off to the view; that's view. That plain old struct is the view model, and the view is just a template: it does template generation from the view model. That means I now have a complete separation between the model, which holds all the business logic (you put all business logic in there), and the view, which just does display, and there's nothing else in the system. The controller is gone; it's just HTTP.

So this is an example view model. In Microsoft's implementation, a view model is not just a struct; it's a struct with attributes that say how to do validation, so it can do client- and server-side input validation and all sorts of magic for you. But that's not as relevant. The key thing is that I can now test the template without a network being involved. My templating is all done on the basis of these view models, so I can just pass a view model into the template and make sure it comes back the way I want. I can write unit tests for all my templates. I can similarly test the model, because all the model does is expose some functions that return structs; those are easy unit tests, so I test all of that. There's no code left. I've completely decoupled those two.
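The example view model isn't shown in the transcript, but the split being described can be sketched in a few lines of Python; all names here are invented:

```python
from dataclasses import dataclass

@dataclass
class UserViewModel:
    """Just a struct of result data: the view model."""
    name: str
    birth_date: str

def user_model(user_id):
    # "Model": business logic only. In real code this would pull from
    # the domain; here it fabricates a result for illustration.
    return UserViewModel(name=f"user-{user_id}", birth_date="1970-01-01")

def user_template(vm):
    # "View": pure template generation from the view model.
    return f"<h1>{vm.name}</h1><p>Born {vm.birth_date}</p>"

# A template test needs no server and no network:
# just pass a view model in and check what comes back.
html = user_template(UserViewModel("Arlo", "1970-01-01"))
```

The model tests check the structs; the template tests check the strings; nothing in between is left untested, because there's nothing in between.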
So the next step up from that: self-describing resources. Those view models are sort of cool. They are basically resources. They hang out at a particular URL, they have a bunch of data in them, and that's about all they've got. But I want to be able to display them in a bunch of different ways. One is to pass one to this template and generate some HTML to handle my client, and all sorts of cool stuff. I also want to expose it through JSON. I also want to expose it through AtomPub. I want to expose it in a bunch of different ways. Also, depending on where it's displayed in my site, I might want to display it at different levels of detail. Say I've got a user. I want to show the user's name and their birth date and all sorts of things when I'm on the user page; it wants a full display. I want only a summary on other pages where I have a list of users, and there are other places where I just want to link to the user, so I just have the name in a tag. So what you do is you slightly enhance those resources so that they're self-describing. When I display one on a page, the page says to the resource, display yourself as a summary. And the view model then says, okay, for a summary I use this template, passes it back, and displays itself accordingly. In that way, the resource itself, this view model resource, knows how it is displayed in all the various ways, and the template knows which style of display it wants for whatever it's going to display. Those two can separate further. And that then allows a lot of my views, my templates, to not even know what they're displaying. I display a list of things. I can make a widget that just displays a list of things, and all it says is, show me the summary for each thing, and I'll just make a list of them. It allows me further reuse in my templates, and to test those more easily. And then finally, I'm going to get into OData.
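A self-describing resource along those lines might look like the sketch below. The style names ("summary", "full") and the markup are assumptions for illustration; the mechanism is what matters: the page only names a display style, and the resource chooses its own template, so a generic list widget never needs to know what it is displaying.

```python
# Sketch of a self-describing resource: the caller names a display
# style; the resource maps that style to a template itself.

SUMMARY = lambda vm: f'<a href="{vm["url"]}">{vm["name"]}</a>'
FULL = lambda vm: f'<h1>{vm["name"]}</h1><p>Born {vm["birth_date"]}</p>'

class UserResource:
    # The resource knows how it is displayed in all the various ways.
    TEMPLATES = {"summary": SUMMARY, "full": FULL}

    def __init__(self, data):
        self.data = data

    def display(self, style):
        # "Display yourself as a summary" -- the resource picks the
        # template; the calling page only names the style it wants.
        return self.TEMPLATES[style](self.data)

def render_list(resources):
    # A generic list widget: it never knows what it is displaying.
    # All it says is "show me the summary for each thing".
    items = "".join(f"<li>{r.display('summary')}</li>" for r in resources)
    return f"<ul>{items}</ul>"

user = UserResource({"name": "Ada", "birth_date": "1815-12-10",
                     "url": "/users/1"})
print(user.display("full"))
print(render_list([user]))
```

Because `render_list` depends only on the `display` protocol, it can be reused for users, accounts, transactions, or anything else, and tested with a stub resource.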
I'll do this last because it depends on most of the ones I just presented. It's also mostly Microsoft-specific right now, ish. It's a standard. It is used by WebSphere and a few other non-Microsoft technologies, but it's mostly Microsoft right now. And it's just a fairly straightforward protocol: a specification of styles of RESTful URLs, and a specification for the semantic layer of what JSON and AtomPub responses come back from those. The result is that if I'm using OData, the client can determine the query that it wants to give. So for example, in this one, I'm doing a request: given person 3, some person that I'm looking at, join to relatives, join to their employers, and then filter that where the employer start date is greater than January 1, 2001, offset by 20, and limit to 10. It goes off the right side of the slide. And then it also says, return that in JSON. The client has just specified SQL. The server chose what models to expose. So the server chooses the security, and it sets up what the models are. The client determines what it wants to do with those models. That allows me to have my browser really behave as a view, because now the browser on the client can just decide, right now I need this sort of data, put it in here. The server just exposes data, decoupling those two. So the new way to think of the web is: the web is a distributed remote data abstraction layer. It's not a distributed application. Some of the data it serves is instructions for making applications, but there's a lot of other data it serves. If you build your system with that assumption, then you can separate out the model, all your data and your business rules, and represent that very, very cleanly and simply on the server. You can test it all, separately from the application and the UI and the like. That also allows you to substitute out the application and the UI.
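A query like the one described can be sketched as a URL built from OData's `$`-prefixed query options. The entity names (`Person`, `Relatives`, `Employers`) are the hypothetical model from the talk's example, not a real service; the `$filter`, `$skip`, `$top`, and `$format` options are the standard OData URI conventions for filter, offset, limit, and response format.

```python
# Sketch of an OData-style query URL: the client composes the query
# (filter, offset, limit, format) against models the server exposed.
from urllib.parse import urlencode

# Navigate from person 3, through relatives, to their employers.
base = "/Person(3)/Relatives/Employers"

options = {
    "$filter": "StartDate gt datetime'2001-01-01'",  # server-side filter
    "$skip": "20",      # offset by 20
    "$top": "10",       # limit to 10
    "$format": "json",  # return JSON rather than AtomPub
}

# safe="$" keeps the option names readable; values are still encoded.
url = base + "?" + urlencode(options, safe="$")
print(url)
```

The division of labor is exactly the one described: the server decided that `Person` and its navigations exist at all (and what security applies), while the client decided what question to ask of them.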
You can have a Silverlight version of it, and an HTML and JavaScript version of it, and a Flash version of it, and they all hit the same model, and the server represents the model. So that is a quick pass over a number of the indirections that I wanted to present. I'm going to open up for questions. And that also includes anything on this list that I didn't talk about that people want to hear more about; I'm happy to discuss it. Then thank you very much. And in closing, hopefully you can find some ways to reduce the coupling in the designs as you go forward.