End-to-End QoS Provision over Heterogeneous IP and non-IP Broadband Wired and Wireless Network Environments
To measure the quality variation of N encoded frames, I define the cost function C(·) as the dynamic range of their distortion. Let r_j and D_j(r) be the rate and the R-D function of frame j, respectively. Then, the dynamic range of distortions is defined as C(r_0, r_1, ..., r_{N−1}) = max_j D_j(r_j) − min_j D_j(r_j). With this cost function, constant quality rate allocation is formulated as a constrained minimization:

minimize C(r_0, r_1, ..., r_{N−1}) subject to r_0 + r_1 + ... + r_{N−1} = R_T, (7.1)

where R_T is the bit budget. Although the constrained minimization defined in (7.1) consists of three nonlinear functions, it can be solved using composite R-D analysis.
The source complexity with respect to an encoding system, such as an FGS encoder, is measured by its R-D curve. To allocate rate in a window of N frames, N R-D curves are needed. In this section, a composite R-D analysis is proposed that combines N R-D curves into one composite R-D curve.
The proposed rate allocation has a number of advantages over existing rate allocation algorithms. It is the true optimal solution, not an approximation. It is neither iterative nor recursive, so it is eﬃcient and does not need an initial guess. After the composite R-D curve is computed, it can be used for the rate allocation of any rate budget. Therefore, it is suitable for FGS coded bitstreams that need to be transmitted at many diﬀerent rates. In addition, the composite R-D curve over sliding windows can be updated eﬃciently, which further reduces the computational complexity. Experimental results using real FGS coded videos conﬁrm both the eﬀectiveness and the eﬃciency of this algorithm.
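The idea of combining N R-D curves into one composite curve can be sketched numerically: express each frame's rate as a function of distortion, sum these to obtain the composite rate for a common distortion level, and invert the composite curve at the budget R_T. The sketch below is illustrative, not the dissertation's implementation; the hyperbolic test curves D_j(r) = c_j/r and all constants are assumptions.

```python
import numpy as np

def composite_rate(d, curves):
    """Total rate needed so that every frame reaches distortion level d.

    Each curve is a pair (distortions ascending, matching rates); np.interp
    clamps queries outside the sampled range to the endpoint rates.
    """
    return sum(float(np.interp(d, dj, rj)) for dj, rj in curves)

def allocate(curves, RT, tol=1e-10):
    """Invert the composite R-D curve at budget RT (equal-distortion split)."""
    lo = min(dj[0] for dj, _ in curves)
    hi = max(dj[-1] for dj, _ in curves)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if composite_rate(mid, curves) > RT:
            lo = mid  # distortion target too ambitious for this budget
        else:
            hi = mid
    d = 0.5 * (lo + hi)
    return [float(np.interp(d, dj, rj)) for dj, rj in curves], d
```

Once the composite curve is built, the same inversion can be repeated for any budget R_T without re-encoding, which is the property exploited for FGS bitstreams transmitted at many different rates.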
7.2.2 Pricing Strategy
Denote by N_q(t) the total number of bytes of data at time instant t in queue q belonging to packets that have fully arrived in the queue but have not yet begun transmission through the output link. Consider a packet of length l and service class S. Let t_A denote the time at which this packet completely arrives at one of the router queues, and let t_B denote the time instant at which the packet begins transmission through the output link.
I now describe the three components of the price charged to each packet in this scheme; the sum of these components is the total charge billed to the packet. Price due to bandwidth consumed: This component of the price is a function of the length of the packet and the current demand for bandwidth on the link. At the instant a packet begins transmission, the set of packets that have completely arrived at the router and are awaiting transmission is interpreted as the total demand for bandwidth at that instant. In most Internet router architectures, the price charged to a packet can be stamped on it only before it begins transmission; therefore, the demand due to packets that become available for transmission after time t_B is ignored. I expect this approximation to have a negligible effect on the dynamics of this scheme. Let f_bw(x) be the price-demand function expressing the price per unit of bandwidth consumed when the demand for the bandwidth is x. Using the above mapping between the number of bytes awaiting transmission and the demand for bandwidth, this component of the price assigned to the packet can be expressed as

P_bw = l · f_bw( Σ_{q=1}^{S} N_q(t_B) ). (7.3)
The assigned price is equal to the best-eﬀort price (or the access charge) when the demand is zero.
Price due to preferential service rendered: This component of the price is based on the preferential service received by the packet, in terms of the number of other packets over which it has priority. Let f_ps(x) be the price-demand function for the preferential service received by a packet when the demand for this preferential service is x. A packet in service class S gets ahead of all the packets in queues of classes below its own, since the router serves packets from the highest class first.
The total size of data over which a packet receives priority is the number of bytes of data in the queues corresponding to service classes below its own. In this pricing strategy, that quantity is interpreted as the demand for the preferential service enjoyed by the packet. Therefore, the price due to preferential service charged to a packet that fully arrives at time t_A is given by

P_ps = l · f_ps( Σ_{q=1}^{S−1} N_q(t_A) ). (7.4)

While the above two components of the price appear similar, both are necessary to ensure a pricing strategy that clearly separates the demand for the link bandwidth resource from the demand for preferential service at the router. For example, consider a router state with a large number of packets, all of the same service class q. If I did not use a separate price component for preferential service, and if there were no packets of a higher class in the queues, a new packet in a higher class could be served ahead of the large number of packets in queue q and still be charged only approximately the same amount as the packet at the head of queue q. This separate pricing component for the preferential service delivered also helps capture other parameters, such as delay, not captured merely by the congestion state and the bandwidth consumed.
The above two components of the price assigned to a packet differ in the following two ways: (1) The price assigned for bandwidth consumed depends on the state of the router at the instant the packet begins transmission, while that for preferential service rendered depends on the router state at the instant the packet fully arrives at the router; (2) The price assigned for bandwidth consumed depends on the number of bytes of data in the queue of the packet's own service class, while the price assigned for preferential service does not.
Price due to buffer resources occupied: This third component of the pricing strategy is intended to reflect the cost of the buffer resources occupied by a packet and the packet losses incurred by other flows because of it. The price assigned to a packet for the buffer resources it occupies depends only on the number and sizes of the packets that were denied this buffer space due to its occupation of the space; this is the sum of the sizes of the packets dropped during the interval between the instant the packet arrives at the router and the instant it begins transmission. Let D_q(t1, t2) be the number of bytes of data of service class q that are discarded during the time interval (t1, t2). The price assigned to a packet of length l and service class S due to the buffer resources occupied by it is, therefore, given by

P_d = l · f_d( Σ_{q=1}^{S} D_q(t_A, t_B) ), (7.5)

where f_d(x) is the corresponding price-demand function with regard to buffer occupancy. It is quite likely that a given packet may be somewhat unfairly stamped with a high price, for example, when a large burst arrives just as the packet is about to be transmitted. On average, however, the price charged to a user will correctly reflect the cost of the network resources, the congestion, and the preferential service received.
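The three per-packet components can be sketched as a single function of the router's queue state. The linear price-demand functions and all numeric values below are illustrative assumptions, not the scheme's calibrated parameters; queues are indexed 1..Q, with index 0 unused for readability.

```python
def packet_price(l, S, N_tB, N_tA, D_drop, f_bw, f_ps, f_d):
    """Total charge for a packet of length l bytes in service class S.

    N_tB[q]:  bytes queued in class q when the packet begins transmission (t_B)
    N_tA[q]:  bytes queued in class q when the packet fully arrives (t_A)
    D_drop[q]: bytes of class q discarded during (t_A, t_B)
    """
    p_bw = l * f_bw(sum(N_tB[1:S + 1]))   # demand: classes 1..S at t_B
    p_ps = l * f_ps(sum(N_tA[1:S]))       # demand: lower classes 1..S-1 at t_A
    p_d  = l * f_d(sum(D_drop[1:S + 1]))  # demand: bytes dropped in (t_A, t_B)
    return p_bw + p_ps + p_d
```

Note that the bandwidth term includes the packet's own class and is evaluated at t_B, while the preferential-service term excludes it and is evaluated at t_A, matching the two differences listed above.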
In a network with congestion-dependent pricing and dynamic resource negotiation, adaptive applications with a budget constraint will adjust their service requests in response to price variations. In this section, I discuss how a set of applications assigned to a specific user adapt their sending rate and quality-of-service requests to the network in response to changes in service prices, so as to maximize the benefit, or utility, to the user, subject to the constraint of the user's budget. I focus on adaptive applications, as they are the ones best suited to a dynamic pricing environment. Alternatively, applications may choose services that provide a fixed price and fixed service parameters for the duration of the service; generally, the long-term average cost of a fixed-price service will be higher, since it uses network resources less efficiently. Applications may instead use a service with usage-sensitive pricing and maintain a high QoS level, paying a higher charge during congestion.
Consider a set of user applications required to perform an action. The user would like to determine a set of transmission parameters (such as sending rate and QoS parameters) from which he can derive the maximum benefit, subject to his budget. I assume that the user defines quantitatively, through a utility function, the perceived monetary value (e.g., 15 cents/minute) provided by a given set of transmission parameters towards completing the action.
Users in the real world generally try to obtain the best possible value for the money they pay, subject to their budget and minimum requirements; in other words, users may prefer a lower quality at a lower price if they perceive this as meeting their requirements and offering better value. Furthermore, this seems a reasonable model in a network with QoS support, where the user pays for the level of QoS he receives. In our case, the value for money obtained by the user corresponds to the surplus between the utility U(·) achieved with a particular set of transmission parameters and the cost of obtaining this service. The goal of adaptation is to maximize this surplus, subject to the budget and the minimum and maximum QoS requirements.
Consider the simultaneous adaptation of the transmission parameters of a set of n video streams assigned to a specific user. The transmission bandwidth and QoS parameters for each video are selected and adapted so as to maximize the task-wide value perceived by the user, as represented by the surplus of the total utility U_total over the total cost P_total. I can think of the adaptation process as the allocation and dynamic re-allocation of a finite amount of resources between the applications.
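A minimal sketch of this surplus maximization, assuming concave logarithmic utilities U_i(r_i) = w_i·log(1 + r_i) and a common unit price p per unit rate (both modeling assumptions, not the dissertation's utility model): the first-order condition w_i/(1 + r_i) = μ yields per-stream rates, and the Lagrange multiplier μ is raised above p by bisection whenever the budget B is binding.

```python
def allocate_rates(w, p, B, tol=1e-12):
    """Split rate among streams with weights w, unit price p, budget B."""
    def rates(mu):
        # first-order condition: w_i / (1 + r_i) = mu  =>  r_i = w_i/mu - 1
        return [max(wi / mu - 1.0, 0.0) for wi in w]

    def cost(mu):
        return p * sum(rates(mu))

    if cost(p) <= B:               # budget not binding: pure surplus optimum
        return rates(p)
    lo, hi = p, max(w)             # total cost decreases as mu grows
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cost(mid) > B:
            lo = mid
        else:
            hi = mid
    return rates(0.5 * (lo + hi))
```

Re-running the allocation whenever the price p changes gives the dynamic re-allocation of budget between the n streams described above.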
Consider that routers support multiple service classes and that each router is partitioned to provide separate link bandwidth and buffer space for each service, at each port. The routers are considered the producers, and they own the link bandwidth and buffer space of each output port. The flows (individual flows or aggregates of flows) are considered consumers who consume resources. The congestion-dependent component of the service price is computed periodically, with a price computation interval r. The total demand for link bandwidth is based on the aggregate bandwidth reserved on the link during a price computation interval, and the total demand for the buffer space at an output port is the average buffer occupancy during the interval. The supplied bandwidth and buffer space need not be equal to the installed capacity; instead, they are the targeted bandwidth and buffer space utilization. The congestion price is levied once demand exceeds a provider-set fraction of the available bandwidth or buffer space.
I now discuss the formulation of the fixed-price charge and the formulation of the congestion charge.
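The threshold behavior described above can be sketched as follows; the linear form of the charge and the constants rho and k are illustrative assumptions, not the dissertation's formulation.

```python
def congestion_price(demand, supply, rho=0.8, k=0.05):
    """Per-interval congestion charge per unit of resource.

    Zero while demand stays below the provider-set fraction rho of the
    provisioned supply; grows linearly with the relative excess beyond it.
    """
    excess = max(demand - rho * supply, 0.0)
    return k * excess / supply
```

The total service price for an interval would then be the fixed charge plus this congestion-dependent component, recomputed once per price computation interval.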
7.3 Experimental Setup
Four YUV QCIF 4:2:0 color video sequences, consisting of 300 to 2000 frames and coded at 30 frames per second, are used as video sources. Each group of pictures (GOP) is structured as IBBPBBPBB... and contains 36 frames, and the maximum UDP packet size is set to 1024 bytes (payload only). The scalable extension of the MPEG-4 AVC encoder/decoder is used for encoding the YUV sequences. The video frames are then encapsulated into RTP packets using a simple packetization scheme (one-frame-one-packet policy). The size of each RTP packet is bounded to a maximum of 1024 bytes. The generated video packets are delivered through the DiffServ network using the UDP/IP protocol stack.
IEEE 802.11b is employed at the physical layer, which provides four different physical rates. In our simulation, the physical rates are fixed to 11 Mbps for data and 2 Mbps for control packets. Table 7.1 lists the MAC parameters used in the simulations.
Additionally, the streaming station generates background traffic (500 kbps) using constant bit rate (CBR) traffic over the User Datagram Protocol (UDP). This allows us to increase the virtual collisions at the server's MAC layer. Furthermore, I include five wireless stations, each generating 300 kbps of CBR traffic, in order to overload the wireless network.
A unique sequence number, the departure and arrival timestamps, and the payload type identify each packet. When a packet does not reach the destination, it is counted as a lost packet. Furthermore, not only the actual loss is important for the perceived video quality, but also the delay of packets/frames and the variation of that delay, usually referred to as packet/frame jitter. Packet/frame jitter can be addressed by so-called play-out buffers, whose purpose is to absorb the jitter introduced by the network delivery delays. Obviously, a large enough play-out buffer can compensate for any amount of jitter, at the cost of additional start-up delay. Many techniques have been proposed for developing efficient and optimized play-out buffers that deal with this particular trade-off; these techniques are not within the scope of the described testbed. For our experiments, the play-out buffer is set to 1000 ms.
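With uniform frame departures, the late-frame accounting reduces to a simple test per frame: a frame is lost if its packet never arrives or if its one-way delay exceeds the play-out buffer depth. This is a sketch of that bookkeeping, not the testbed's code; the 1000 ms default matches the setting above.

```python
def count_lost(delays_ms, playout_ms=1000.0):
    """Count frames missing their playback time.

    delays_ms[i]: one-way delay of frame i in milliseconds,
                  or None if the packet never arrived.
    Assuming frame i departs at i * (1000/30) ms and plays out
    playout_ms after its departure, frame i is lost exactly when
    its delay exceeds playout_ms (or the packet is lost outright).
    """
    return sum(1 for d in delays_ms if d is None or d > playout_ms)
```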
In the first scenario, I examine the transmission of H.264 scalable video streams consisting of two layers. The BL is encoded at 256 kbps, while the EL is encoded at 512 kbps. The Foreman YUV QCIF video sequence (176x144), consisting of 400 frames, is used as the video source. The underlying network for the first measurement is a simple best-effort network, like the Internet, without any QoS model for guaranteeing end-to-end video quality. A video frame is sent every 33 ms for 30 fps video. Figure 7.2 shows the PSNR graph for the experimental scenario described above. The Y axis represents the PSNR value in dB, while the X axis represents the frame number of the video sequence.
Figure 7.2: Scalable video transmission over best-eﬀort networks
As one may observe from Figure 7.2, during severe network congestion caused by interference from background traffic, the PSNR values drop to between 10 dB and 12 dB. The average PSNR value, P_avg, is 29.038 dB. Note that a frame is also counted as lost when it arrives later than its defined playback time.
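For reference, the reported dB figures follow from the standard PSNR definition for 8-bit video (peak value 255); this is the textbook formula, shown here only to make the reported numbers concrete.

```python
import math

def psnr(ref, deg, peak=255.0):
    """PSNR in dB between a reference and a degraded sample sequence."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, deg)) / len(ref)
    if mse == 0.0:
        return float('inf')          # identical signals
    return 10.0 * math.log10(peak * peak / mse)
```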
The same measurement is repeated, but instead of a best-effort network, I use a network that implements the proposed model. The transmission rate of the EL video stream is obtained from the pricing module.