Virtualization is a fascinating concept, and it is used in a variety of ways these days. With regard to next-generation networks (NGNs), we need to understand it in a specific context: if we want to achieve maximal utilization of multi-vendor equipment in a vendor-neutral, vendor-agnostic manner, to run commercial applications efficiently, virtualization is what we need to look at. So this is what we are going to discuss, and then we will look at the practical implementation of virtualization in network environments.

Vendor neutrality is a desirable goal. Network operators have consistently sought vendor-agnostic solutions, that is, minimum dependence on any vendor as the sole supplier of equipment. Vendors are known for their lock-in effect: once you fall victim to the dictates of a vendor, you have no option but to comply. If vendor neutrality, or vendor agnosticism, is achieved, then the operator is free to change the supplier, upgrade the solution, and evolve services and architectures well beyond what the vendor commits to.

Virtualization is important and useful for NGNs, just as it is in so many other fields of computing, because eventually all networks are moving toward becoming NGNs. So far, the architecture of NGNs has been such that when change is required, not much can be done at the core, the internal side of the network; only the periphery, the edges, can be altered. So the question is: if we want to achieve vendor neutrality in an NGN environment, can we identify or discover some interesting data from our networks? For instance, the types of nodes: can we identify some heterogeneity there? The peculiarities of the information exchanged between a certain node pair; node-specific, session-specific, or subnetwork- or region-specific power, computing, and bandwidth utilization effects on the network. Can we get some insight into these?
And if a certain network is offering multiple services, is each service getting the service level agreement it expects from the underlying network? If we are able to answer these questions, we are ready to move to virtualization and help achieve independence.

If we look at the answers, we will see that there is so much diversity and heterogeneity in the network that treating networks as homogeneous entities would be a mistake. It follows that if, at initialization time, a certain network architecture with the best hand-picked network equipment is thought to be a one-size-fits-all solution, that is not going to happen, because network performance, user behavior, and equipment performance all evolve and change over time. So we say that no single solution can be expected to meet all requirements at every point in time, and the design cannot be fixed once and for all. Can we make it loose? Can we decouple? Well, decoupling is possible if we are able to configure, tailor, and manipulate the network resources, including computing power, bandwidth, storage, battery, and so on.

Network virtualization, similar to its sister concept in computing, is when multiple networks are designed by incorporating virtual components that exist on physical networks. For instance, we could create a virtual router that borrows its resources from a single physical router or from multiple routers that may not even be physically co-located; we could have a server that is not a physical machine but a migratable entity. It means that virtualization helps us achieve a variety of goals that we define, which we call the target services for a certain suite of customers. Once we start thinking about network virtualization, the LINP, L-I-N-P, or logically isolated network partition, is the most intuitive and explainable concept: multiple virtual networks carved out of a single physical network or out of multiple physical networks.
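The idea of a virtual router borrowing resources from several, possibly non-co-located, physical routers can be sketched in a few lines. This is a minimal illustration, not a real platform API; all class and attribute names here are hypothetical.

```python
# Sketch: a virtual router aggregates capacity leased from one or more
# physical routers, which need not be co-located.
from dataclasses import dataclass, field

@dataclass
class PhysicalRouter:
    name: str
    cpu_cores: int
    bandwidth_mbps: int
    allocated_cpu: int = 0
    allocated_bw: int = 0

    def lease(self, cpu: int, bw: int) -> bool:
        """Reserve resources if available; return True on success."""
        if (self.allocated_cpu + cpu <= self.cpu_cores
                and self.allocated_bw + bw <= self.bandwidth_mbps):
            self.allocated_cpu += cpu
            self.allocated_bw += bw
            return True
        return False

@dataclass
class VirtualRouter:
    name: str
    leases: list = field(default_factory=list)  # (physical router, cpu, bw)

    def borrow(self, router: PhysicalRouter, cpu: int, bw: int) -> bool:
        if router.lease(cpu, bw):
            self.leases.append((router, cpu, bw))
            return True
        return False

    @property
    def total_bandwidth(self) -> int:
        return sum(bw for _, _, bw in self.leases)

# One virtual router drawing on two physical routers in different locations.
r1 = PhysicalRouter("edge-1", cpu_cores=8, bandwidth_mbps=1000)
r2 = PhysicalRouter("core-3", cpu_cores=16, bandwidth_mbps=10000)
vr = VirtualRouter("vrouter-A")
vr.borrow(r1, cpu=2, bw=300)
vr.borrow(r2, cpu=4, bw=700)
print(vr.total_bandwidth)  # 1000 Mbps, drawn from two physical routers
```

The point of the sketch is the decoupling: the virtual router's capacity is a sum of leases, and the physical substrate behind it can change without the virtual entity's identity changing.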
So a LINP is basically a logical, virtual network that is extracted from one or more physical networks. The virtual resources are created on physical objects, as we briefly discussed. These resources are manipulated and utilized through their programmability; that is, they provide an application programming interface through which the resources can be accessed. In the best case, a LINP can emulate the performance of its real, physical counterpart in offering certain services. This is a wish list that needs to be fulfilled if network virtualization is to be achieved in letter and in spirit.

So the design goals for a LINP are, first, isolation: the logical isolation should be such that no computing leakage or communication leakage takes place beyond a given LINP. Network abstraction implies that the underlying complexity, that is, how many physical networks or network elements contribute to the orchestration of a certain LINP, is hidden; if that complexity is kept hidden from the user, we can say that network abstraction has been achieved. Reconfigurability is the ability of the network to become cognizant, or aware, of the underlying dynamics the network goes through. This is important because, for a committed quality of service, the quality of the network varies over time; consequently the quality of experience changes, so to ensure the expected quality of experience for a certain premium user, reconfigurability is desired. The performance KPIs, that is, key performance indicators, of virtualized networks should match those of their physical counterparts. Programmability we have already discussed. Finally, unified management: having a unified view across routers, switches, servers, gateways, firewalls, access control lists, IPS, and IDS would make the management of these network elements easier.
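The isolation goal, no leakage beyond a given LINP, amounts to an admission-control invariant: the shares handed out to the LINPs can never exceed the physical capacity behind them. A minimal sketch, with hypothetical names, of a partitioner that enforces this on a single link:

```python
# Sketch: a physical link's capacity is partitioned into LINPs, and the
# partitioner rejects any allocation that would let one LINP's traffic
# intrude on another LINP's guaranteed share.
class LinkPartitioner:
    def __init__(self, capacity_mbps: int):
        self.capacity = capacity_mbps
        self.linps = {}  # LINP name -> guaranteed share in Mbps

    def create_linp(self, name: str, share_mbps: int) -> bool:
        """Admit the LINP only if the isolation invariant still holds."""
        if sum(self.linps.values()) + share_mbps > self.capacity:
            return False  # would violate isolation; reject
        self.linps[name] = share_mbps
        return True

    def remaining(self) -> int:
        return self.capacity - sum(self.linps.values())

p = LinkPartitioner(capacity_mbps=1000)
print(p.create_linp("premium", 600))      # True
print(p.create_linp("best-effort", 300))  # True
print(p.create_linp("overflow", 200))     # False: only 100 Mbps left
```

A real LINP must of course isolate computing and state as well as bandwidth, but the shape of the check, admit only while the sum of virtual allocations fits within the physical resource, is the same.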
Then there is the need to incorporate mobility: in mobile environments a physical entity is actually moving in space relative to certain other fixed entities, and how that movement is perceived in a virtualized environment is an important area of investigation. A lot of work has already been done here; since we are discussing design goals of LINPs that have already been materialized, we should know that these were hard research areas once, but most have now been resolved, though improvement is still sought. Last but not least, if we want to carve multiple LINPs out of a single wireless device, say a single wireless access point (AP), then the performance of these LINPs is going to depend on the channel impairments and channel behavior, which is a temporal process, so this has to be kept in mind as well.

Let us now look at how a physical topology, that is, physical resources, is translated into virtual resources, which in turn are composed into virtual networks that deliver certain services. Here we have three physical networks, each with its respective network management entity. Some of the routers in these networks participate in the virtualization activity by offering their resources. We need a resource manager, that is, a server or computing resource that provides resource management at the virtual level. Out of these virtual resources, virtual networks are extracted: here we have three LINPs, each with its own manager, and these three LINPs offer their respective services. So we conclude: given a variety of vendors and a variety of available equipment, virtualization is how we make efficient and optimal use of commercial equipment while ensuring minimum dependence on any single vendor's equipment.
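The physical-to-virtual chain just described, physical networks offering routers, a resource manager pooling them, and LINPs carved from the pool, can be sketched end to end. All names here are illustrative, not part of any standard API.

```python
# Sketch of the layered mapping: physical networks register resources
# with a resource manager, which carves out LINPs, each bound to a service.
class ResourceManager:
    def __init__(self):
        self.pool = []   # (physical network, router) pairs offered
        self.linps = {}  # LINP name -> {"routers": [...], "service": ...}

    def register(self, network: str, router: str):
        """A physical network offers one of its routers to the pool."""
        self.pool.append((network, router))

    def create_linp(self, linp_name: str, n_routers: int, service: str):
        """Carve a LINP from the pool and bind it to a service."""
        if len(self.pool) < n_routers:
            raise RuntimeError("insufficient physical resources")
        assigned = [self.pool.pop() for _ in range(n_routers)]
        self.linps[linp_name] = {"routers": assigned, "service": service}
        return self.linps[linp_name]

rm = ResourceManager()
# Three physical networks offer routers, as in the topology discussed above.
for net, rtr in [("phys-net-1", "r1"), ("phys-net-1", "r2"),
                 ("phys-net-2", "r3"), ("phys-net-3", "r4")]:
    rm.register(net, rtr)

linp_a = rm.create_linp("LINP-A", n_routers=2, service="VoIP")
# LINP-A now spans routers that may come from different physical networks,
# while the remaining pool stays available for further LINPs.
```

In practice each LINP would also get its own manager, as in the three-LINP picture above, but the essential step is the same: the resource manager is the single point where physical offerings become virtual allocations.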