Let us now look at some protocols that provide this quality of service in the NGN architecture, with the Real-time Transport Protocol (RTP) at the forefront. RTP is not really a single protocol; it is a family of protocols that includes RTP itself, the Real-time Control Protocol (RTCP) and the Real-time Streaming Protocol (RTSP). First and foremost, any voice service requires certain signaling support, and this signaling has to be carried through the network. In our case, in NGN, the signaling is carried out by SIP and Diameter. SIP is included in the IP Multimedia Subsystem, the IMS, which is part of the NGN architecture. The data transfer itself is carried out through RTP. RTP is used rather than plain UDP or TCP because voice needs some special treatment and some additional signaling. That signaling is not related to the establishment of the connection; it runs end to end between the two parties. So the codec-related functionality, including how the payload is encoded and decoded, is handled by the RTP family. RTP essentially provides end-to-end transport functionality, and since it is an end-to-end mechanism, it is implemented at the end hosts. The primary responsibility of RTP is to support QoS-related guarantees: whatever the requirements or the service level agreement between the service provider and the end user are in terms of QoS, RTP provides the means to achieve them. So some kind of provisioning, feedback monitoring and tweaking, that is, modification, is part of the RTP family, and QoS guarantees and reservations are part of its functionality. RTP proper carries the data, the voice in this case, but in order to look at the statistics there is another protocol, the Real-time Control Protocol. Again, it works between the two endpoints.
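To make the statistics idea concrete, here is a small illustrative sketch (not from the lecture; the function name and layout are my own assumptions) of how an RTCP-style receiver report can derive cumulative loss and the 8-bit loss fraction purely from RTP sequence numbers:

```python
def loss_statistics(base_seq, highest_seq, packets_received):
    """Derive RTCP-style loss statistics from RTP sequence numbers.

    base_seq    : first sequence number seen
    highest_seq : highest sequence number seen so far
    packets_received : how many packets actually arrived
    """
    expected = highest_seq - base_seq + 1
    lost = expected - packets_received
    # RTCP receiver reports encode the loss fraction as lost/expected
    # scaled to an 8-bit value in the range 0..255.
    fraction = max(0, min(255, (lost * 256) // expected)) if expected > 0 else 0
    return expected, lost, fraction

# 1000 packets expected, 950 received -> 50 lost, fraction = 12/256 (~5%)
print(loss_statistics(0, 999, 950))
```

The key point is that the receiver needs no extra channel information: the sequence numbers already carried by RTP are enough for RTCP to report loss back to the sender.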
It keeps track of the statistics related to the transmission and to the quality of service, and in case multiple streams have to be synchronized or overlaid on top of each other, the Real-time Control Protocol manages that as well.

This is the RTP packet format. It is not a big concern to treat this packet format as the final one, because different applications may make slight modifications to the RTP header itself. Let us look at the fields: we have the version, the CSRC count, certain flags, P, X and M, the type of the payload, then the sequence number and the timestamp, and the sources, the synchronization source and the contributing sources.

Let us go through the fields now. The version obviously defines which particular version is being used; these days it is version 2. Then, if we are using padding, which compensates for an incomplete packet, the sender has to notify the receiver that this particular part of the payload is not to be treated as voice; the padding flag indicates that those bytes are not part of the payload. If there is a requirement to extend the header, the extension flag has to be set. If the extension flag is up, it means there is going to be exactly one additional header between the RTP header and the payload. Then we have the contributing sources: these are all the senders whose audio is mixed together. Up to 15 contributors can create the mixed audio, and the number of active sources is identified by the CSRC count. Then we have the marker. If this particular flag is up, the packet in which it is found marks a boundary, that is, the last frame of that part of the audio conversation. Then we have the payload type.
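The fixed header just described can be packed and parsed in a few lines. The following is a minimal sketch of the RFC 3550 header layout in Python (the helper names are my own; this is not a complete RTP implementation):

```python
import struct

def build_rtp_header(pt, seq, timestamp, ssrc, marker=False,
                     padding=False, extension=False, csrcs=()):
    """Pack a minimal RTP fixed header (version 2) into bytes."""
    assert len(csrcs) <= 15                      # CC field is only 4 bits
    b0 = (2 << 6) | (int(padding) << 5) | (int(extension) << 4) | len(csrcs)
    b1 = (int(marker) << 7) | (pt & 0x7F)        # M flag + 7-bit payload type
    hdr = struct.pack("!BBHII", b0, b1, seq, timestamp & 0xFFFFFFFF, ssrc)
    for c in csrcs:                              # optional contributing sources
        hdr += struct.pack("!I", c)
    return hdr

def parse_rtp_header(data):
    """Unpack the 12-byte fixed header plus any CSRC entries."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    cc = b0 & 0x0F
    csrcs = struct.unpack("!%dI" % cc, data[12:12 + 4 * cc]) if cc else ()
    return {"version": b0 >> 6, "padding": bool(b0 & 0x20),
            "extension": bool(b0 & 0x10), "marker": bool(b1 & 0x80),
            "payload_type": b1 & 0x7F, "seq": seq,
            "timestamp": ts, "ssrc": ssrc, "csrcs": csrcs}

hdr = build_rtp_header(pt=0, seq=1234, timestamp=160,
                       ssrc=0xDEADBEEF, marker=True)
print(parse_rtp_header(hdr))
```

Round-tripping a header through these two functions makes the bit positions of the P, X, CC and M fields easy to see.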
The payload type is specifically dependent upon the encoding scheme and the application which is eventually going to use it. The sequence number identifies the ordering of the packets between the RTP sender and receiver. Then we have the timestamp. This is information that the sender has to send to the receiver, because each part of the audio conversation has to be placed in a certain time sequence, so this particular field helps the receiver play back the content at the appropriate time. Now this time is a relative concept, because the sender and receiver may have clocks showing different times. It means the rate at which the timestamps are stamped in on the sender side and played back on the receiver side has to be derived from a clock, and that clock is provided by the application. The application has to make sure that any physical clock-time differential between the sender and the receiver is compensated by an offset; that is something the application has to take care of. But essentially, the clocking information has to increase linearly and be a non-decreasing function. So time is going to be an increasing phenomenon.
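The relative nature of the timestamp can be shown with a short illustrative sketch (assumptions: an 8 kHz audio clock and 20 ms packets, typical for narrowband voice; the function name is my own). Only timestamp *differences* are used, so any fixed offset between the sender's and receiver's physical clocks drops out:

```python
CLOCK_RATE = 8_000  # RTP timestamp ticks per second (assumed 8 kHz audio clock)

def playout_time(first_ts, ts, local_start):
    """Playback instant, in seconds on the receiver's local clock.

    Only the difference (ts - first_ts) matters: the absolute sender and
    receiver clock values are never compared, so a constant clock offset
    between the two hosts has no effect on the playout schedule.
    """
    return local_start + (ts - first_ts) / CLOCK_RATE

# 20 ms of 8 kHz audio per packet -> timestamps advance by 160 per packet,
# giving playout instants spaced 20 ms apart on the local clock.
times = [playout_time(0, n * 160, 10.0) for n in range(3)]
print(times)
```

Because the timestamp is required to be non-decreasing, the computed playout instants also increase monotonically, which is what lets the receiver reconstruct the original timing of the conversation.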