Google, as we know, has emerged as far more than a mere search engine. These days, many people, organizations, and groups rely on Google services, and Google delivers all of them through its data centers. In this module, we are going to look at a project that Google carried out for inter-data-center connectivity back around 2011, realized as a WAN deployment using SDN. We are going to appreciate the background and the motivation, and then we will look at the architecture.

Google has two backbones, a northbound one and a southbound one. The Internet-facing one, which we call northbound, carries user traffic that is fairly smooth and diurnal, meaning there is more of it during the day; this traffic originates and terminates externally. The data centers, which are the real forte of Google for providing all these services, carry flows that are internal: flows between or within data center servers or the virtualized environment, mostly bulky and at times bursty. Nearly all of these flows are data-center-originated or data-center-destined traffic.

B4 was the initiative Google conceived to provide WAN connectivity to its own data centers; the name, if you look at it, sounds a bit quirky, reading like "before". The requirements Google laid out were quite unique. First, there was a massive bandwidth requirement for a modest number of sites: back in 2011 they had 12 sites globally. Second, link utilization had to be maximized while keeping the overall provisioned bandwidth to a minimum. At the same time, Google wanted control over the edge servers for rate limiting, that is, for traffic engineering and for measuring demand, which originates as user requests but turns into data-center-originated or data-center-destined traffic. This is the design before worldwide deployment, back in 2011. It may have gone through revisions since, but the classical reference paper I have quoted starts from this native design.

The software-defined networking architecture follows from these unique characteristics and requirements, and it results in three layers. Starting from the bottom, we have the switch hardware layer, built from commodity OpenFlow switches. These switches simply forward traffic and are deliberately kept from carrying out complex control functionality. Overseeing the traffic forwarding, we have the site controller layer, which runs the routing and traffic engineering mechanisms and protocols, thereby forming a kind of routing and switching overlay. This layer contains the network control servers, which in turn host the OpenFlow controllers and the network control applications (NCAs). The OpenFlow controller maintains the network state, that is, it provides visibility; it takes directives from the network control applications together with the switching events that occur, and based on this consolidated input from the NCAs and the switches, it instructs the switches to configure their forwarding tables, which represent the network state, now optimized after each change. Finally, we have the global layer, which presents the SDN gateway and the traffic engineering server as a unified mechanism for controlling the entire network through the network control applications.
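To make that control flow concrete, here is a minimal, hypothetical sketch in Python of a site controller that consolidates directives from network control applications with events reported by switches, and only then programs the forwarding tables. Every class and method name here is invented for illustration; none of this is from Google's B4 code or any real OpenFlow library.

```python
# Toy model of the site controller layer described above (all names hypothetical).
from dataclasses import dataclass, field


@dataclass
class FlowEntry:
    match_prefix: str   # destination prefix the switch matches on
    out_port: int       # port the switch forwards matching traffic to


@dataclass
class Switch:
    name: str
    table: list[FlowEntry] = field(default_factory=list)

    def install(self, entry: FlowEntry) -> None:
        # Stands in for an OpenFlow flow-mod message sent down to the switch.
        self.table.append(entry)


class SiteController:
    """Consolidates NCA directives with switch events, then programs switches."""

    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}
        self.directives = []   # intents from network control applications
        self.events = []       # link up/down etc. reported by switches

    def on_directive(self, switch_name, prefix, port):
        self.directives.append((switch_name, prefix, port))

    def on_switch_event(self, switch_name, event):
        self.events.append((switch_name, event))

    def reconcile(self):
        # Recompute forwarding state from the consolidated inputs and push it.
        down = {s for s, e in self.events if e == "link_down"}
        for switch_name, prefix, port in self.directives:
            if switch_name in down:
                continue  # skip switches whose link just failed
            self.switches[switch_name].install(FlowEntry(prefix, port))


sw = Switch("site-a-sw1")
ctl = SiteController([sw])
ctl.on_directive("site-a-sw1", "10.0.0.0/8", port=2)
ctl.reconcile()
print(sw.table)  # [FlowEntry(match_prefix='10.0.0.0/8', out_port=2)]
```

The point of the sketch is the division of labor in the architecture: applications express intent, switches merely report events and forward packets, and the controller alone computes and installs the forwarding state.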
Visually, in the figure we can see the switch hardware layer, the site controllers, and the global layer. At the site controllers you can see an open-source routing stack known as Quagga, alongside the Routing Application Proxy (RAP) and the traffic engineering agent. All of these feed directives to the OpenFlow agent, which configures the switches accordingly at the respective sites. The reference I have taken is the 2013 paper "B4: Experience with a Globally-Deployed Software Defined WAN", published in the very prestigious ACM SIGCOMM Computer Communication Review. You might as well have a look at it for more details.
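Earlier we noted the requirement to drive link utilization as high as possible while sharing capacity across competing transfers. One classic way to build intuition for this is progressive filling, which computes a max-min fair allocation. To be clear, this toy sketch is not Google's actual TE algorithm, which works with tunnel groups, priorities, and bandwidth functions across many links; it only illustrates the fairness-plus-utilization idea on a single link, with made-up demand names.

```python
# Progressive filling (max-min fairness) on one link, for intuition only.
def progressive_fill(capacity: float, demands: dict[str, float]) -> dict[str, float]:
    alloc = {d: 0.0 for d in demands}
    active = set(demands)           # demands not yet fully satisfied
    remaining = capacity
    while active and remaining > 1e-9:
        share = remaining / len(active)   # grow every active demand equally
        for d in sorted(active):
            give = min(share, demands[d] - alloc[d])
            alloc[d] += give
            remaining -= give
        active = {d for d in active if demands[d] - alloc[d] > 1e-9}
    return alloc


# Three hypothetical inter-data-center transfers competing for a 10 Gb/s link:
print(progressive_fill(10.0, {"copy": 8.0, "index": 4.0, "logs": 2.0}))
# -> {'copy': 4.0, 'index': 4.0, 'logs': 2.0}; the link ends up fully utilized.
```

Note how the small "logs" demand is fully satisfied while the two larger ones split the rest evenly, and no capacity is left idle while any demand is unmet, which is exactly the utilization goal the module described.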