Welcome to the session on shared memory architecture. At the end of this session, students will be able to describe the differences between shared memory models based upon how memory and peripheral resources are accessed. Here we are going to find two categories of parallel computers, distinguished on the basis of how memory is organized. In the shared memory category we will look at uniform memory access, non-uniform memory access, and cache-only memory access. The second category we will look at is distributed memory. Anyhow, first we are going to check what exactly shared memory is and what its parts are. In this particular model, all the processors share the same physical memory uniformly: all the processors have equal access time to all the memory words. Each processor may have a private cache memory, and the same rule is followed for peripheral devices also. Shared memory parallel computers vary widely, but they generally have in common the ability for all processors to access all memory as a global address space. Multiple processors can operate independently but share the same memory resources, so changes in a memory location effected by one processor are visible to all other processors. Historically, shared memory machines have been classified as UMA and NUMA, based upon memory access time. When all the processors have equal access to all the peripheral devices, the system is called a symmetric multiprocessor. When only one or a few processors can access the peripheral devices, the system is called an asymmetric multiprocessor. So what exactly is symmetric? All processors have equal access to all peripheral devices, and all processors are identical. As for asymmetric, one processor, the master, executes the operating system, and the other processors may be of different types or may be dedicated to special tasks.
Now I am coming to the first model of shared memory, uniform memory access, which in short we call UMA. The simplest multiprocessor system has a single bus to which at least two CPUs and a memory are connected. When a CPU wants to access a memory location, it checks whether the bus is free, then sends the request to the memory interface module and waits until the requested data is available on the bus. UMA is most commonly represented today by symmetric multiprocessor (SMP) machines: identical processors with equal access, and equal access time, to memory. Sometimes it is even called CC-UMA, that is, cache-coherent UMA. Cache coherent means that if one processor updates a location in shared memory, all other processors know about the update. Cache coherence is accomplished at the hardware level. Coming to the picture of UMA shared memory: all CPUs share the address space, so only a single instance of the operating system is required. When a process terminates or goes into a wait state for whatever reason, the OS can look in the process table for another process to be dispatched to the idle CPU. On the contrary, in a system with no shared memory, each CPU must have its own copy of the operating system, and processes can only communicate through message passing. In the NUMA kind of multiprocessor model, the access time varies with the location of the memory word. Here the shared memory is physically distributed among all the processors as local memories, and the collection of all local memories forms a global address space which can be accessed by all the processors. NUMA machines are often made by physically linking two or more symmetric multiprocessors, so one SMP can directly access the memory of another SMP. Not all processors have equal access time to all memories; memory access across the link is slower. If cache coherence is maintained, then it may also be called CC-NUMA, cache-coherent NUMA.
Such a system is also called a distributed shared memory architecture: the system has a shared logical address space, but physical memory is distributed among the CPUs, so an access may go to local or to remote memory. So here a question is available: a symmetric multiprocessor involves a multiprocessor computer hardware and software architecture where two or more identical processors are connected to ________ and have full access to all input and output devices. You have some options: (A) shared memory, (B) local memory, (C) global memory, (D) both A and B. Your answer is A: a symmetric multiprocessor involves multiprocessor computers where two or more identical processors are connected to a single shared main memory and have full access to all input and output devices. Now I am coming to the last point, cache-only memory access (COMA). Here data do not have fixed permanent locations where they are read and modified through the cache and then written back to the permanent location; instead, data can be migrated and replicated in the various memory banks of the central main memory. The cache-only memory architecture increases the chances of data being available locally, because the hardware transparently replicates the data and migrates it to the memory module of the node that is currently accessing it. Each memory module acts as a huge cache memory in which each block has a tag with the address and the state. Here you can see that a cache local to each CPU alleviates the problem of contention. Furthermore, each processor can be equipped with a private memory to store data of computations that need not be shared with other processors. The global address space provides a user-friendly programming perspective on memory, and data sharing between tasks is both fast and uniform due to the proximity of memory to the CPUs. These are the advantages which we can see in the shared memory architecture.
And the disadvantages: here the primary disadvantage is the lack of scalability between memory and the CPUs. There will be a scalability issue, since adding more CPUs can geometrically increase traffic on the CPU-memory path, and for cache-coherent systems it geometrically increases the traffic associated with cache and memory management. Then we also have one more disadvantage, which is that the programmer is responsible for synchronization constructs that ensure correct access to global memory. Overall, these are the different disadvantages, and we will return to them in the next module on the other architectural model, that is, the distributed memory model. Let us check there whether these disadvantages can be recovered as advantages in the distributed memory model. So here I have some references. Thank you.