For each rate and sequence, the reconstructed sequences produced by the encoders were presented simultaneously to the subjects. The ordering of the three sequences varied for each HRC, so that the subjects had no knowledge of the encoder order.
The subjects ranked the sequences by perceptual quality; if no differences were detected between pairs of sequences, they annotated this fact as well. After analyzing the users' scores and removing outliers, the test confirmed that the ranking order of the metrics was the same as the subjective ranking.
In the cases where viewers scored no perceptual difference between sequences, the metrics always gave values lower than 2; in this test, slightly larger perceptual differences corresponded to values around 3. To determine how much difference on the DMOSp scale is perceptually detectable, deeper studies and subjective tests must be carried out. From our studies, we observe that the perceptual meaning of a given difference depends on the point of the DMOSp scale at which we are working.
For example, for high quality, as stated in previous tests, DMOSp value differences of up to 4. Finally, Table 4 shows, for different frame sizes, the mean per-frame evaluation time and the evaluation time for the whole sequence needed by each metric to compute its raw quality value. On the other hand, RRIQA and VIF are the slowest metrics (they run a linear multiscale, multiorientation image decomposition), although in our tests VIF was the most accurate among the general-purpose metrics.
Our objective in this section is to analyze the behavior of the candidate metrics in the presence of packet losses under different MANET scenarios. To model the packet losses in these error-prone scenarios, we use a three-state hidden Markov model (HMM) and the methodology presented in [68]. HMMs are well known for their effectiveness in modeling bursty behavior, their relatively easy configuration, their quick execution times, and their general applicability.
So, we consider that they fit our purpose of accelerating the evaluation of QAM for video delivery applications in MANET scenarios, while offering results similar to those obtained by means of simulation or real-life testbeds. Basically, by using the HMM, we define a packet loss model for MANETs that accurately reproduces the packet losses occurring during a video delivery session. The routing protocol used is DSR. Every node is equipped with an IEEE wireless interface. The foreground traffic is composed of real traces of an H. video encoder.
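To illustrate the idea, a three-state Markov loss model of this kind can be sketched in a few lines. This is a minimal sketch, not the trained model of [68]: the transition matrix `P` and the per-state loss probabilities are hypothetical placeholders, and the function names are ours.

```python
import random

# Hypothetical three-state chain: state 0 delivers packets, state 1
# produces isolated small losses, state 2 produces large loss bursts.
# These probabilities are illustrative placeholders, not trained values.
P = [
    [0.95, 0.04, 0.01],  # transitions from the "good" state
    [0.50, 0.45, 0.05],  # transitions from the "small-burst" state
    [0.10, 0.10, 0.80],  # transitions from the "large-burst" state
]
LOSS_PROB = [0.0, 0.8, 1.0]  # per-state packet loss probability

def simulate_losses(n_packets, seed=0):
    """Return a list of booleans: True means the packet is lost."""
    rng = random.Random(seed)
    state = 0
    lost = []
    for _ in range(n_packets):
        lost.append(rng.random() < LOSS_PROB[state])
        state = rng.choices([0, 1, 2], weights=P[state])[0]
    return lost

pattern = simulate_losses(10000)
print(sum(pattern) / len(pattern))  # overall packet loss rate
```

Because the sojourn time in each state is geometrically distributed, losses come out clustered in runs rather than independently scattered, which is what makes this kind of model suitable for bursty wireless channels.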
The video source is mapped to the video MAC access category. We describe two environments: (a) a congestion-related environment and (b) a mobility-related environment. The congestion environment is composed of 6 scenarios with increasing levels of congestion, from 1 to 6 video sources.
For each of these scenarios, we get a different packet loss pattern provided by the HMM that represents that scenario. Isolated small bursts comprise fewer than 7 consecutive lost packets. As each frame is split into 7 packets at the source, an isolated burst will affect 1 or 2 frames, but none of them will be completely lost.
This error pattern is mainly due to network congestion, where some packets are discarded because of transitory high occupancy in the wireless channel or in the buffers at relaying nodes. Large packet loss bursts cause the loss of one or more consecutive frames. They are typically a consequence of high-mobility scenarios, where the route to the destination node is lost and a new route discovery process must be started.
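The classification above can be sketched as a small helper that groups consecutive losses into runs and labels each run by its length. The function name and the return format are ours, not from the paper; only the 7-packets-per-frame threshold comes from the text.

```python
from itertools import groupby

PACKETS_PER_FRAME = 7  # each frame is split into 7 packets at the source

def classify_bursts(lost):
    """Group consecutive losses into bursts and label them.

    `lost` is a boolean list (True = packet lost). Bursts shorter than
    PACKETS_PER_FRAME are 'isolated' (they damage 1 or 2 frames but never
    lose a whole one); longer bursts are 'large' (at least one whole
    frame is lost). Returns (start_index, length, kind) tuples.
    """
    bursts = []
    for is_lost, run in groupby(enumerate(lost), key=lambda t: t[1]):
        run = list(run)
        if is_lost:
            start, length = run[0][0], len(run)
            kind = 'isolated' if length < PACKETS_PER_FRAME else 'large'
            bursts.append((start, length, kind))
    return bursts

# A 2-packet isolated burst followed by a 9-packet large burst:
print(classify_bursts([False, True, True, False] + [True] * 9 + [False]))
# → [(1, 2, 'isolated'), (4, 9, 'large')]
```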
This keeps the network link in the down state for several seconds, losing a large number of consecutive packets. We have used the H. encoder and the Foreman CIF sequence. After running the encoder for each extended video sequence, we get the RTP packet streams, to which the loss patterns are applied. This process simulates packet losses in the MANET scenarios, so a distorted bitstream is delivered to the decoder.
The decoder behavior depends on the packet loss burst type, as follows. When an isolated small burst appears, the decoder is able to apply error concealment mechanisms to repair the affected frames. The video quality decreases and, just after the burst, the reconstructed video recovers its quality by means of the random intracoded macroblock updating.
When the next I frame arrives, it completely stops error propagation. When the decoder faces large bursts, it stops decoding and waits until new packets arrive. This produces a sequence at the decoder that is shorter than the original one. Therefore, the two sequences are not directly comparable by the QAM, so we freeze the last completely decoded frame until the burst ends.
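The freezing step can be sketched as a simple fill operation over the decoded frame list. This is a minimal illustration under our own representation (a `None` entry marks a frame lost in a burst); the helper name is hypothetical, not from the paper.

```python
def conceal_by_freezing(frames):
    """Replace each lost frame (None) with the last completely decoded
    frame, so the decoded sequence regains the original length and
    frame alignment required by full-reference metrics."""
    out, last = [], None
    for f in frames:
        if f is not None:
            last = f            # remember the last good frame
        out.append(last)        # lost frames repeat it ("frozen" frame)
    return out

print(conceal_by_freezing(['f0', 'f1', None, None, 'f4']))
# → ['f0', 'f1', 'f1', 'f1', 'f4']
```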
Once we have comparable video sequences (original and decoded sequences of the same length), we are able to run the QAM. Each metric produces an objective quality value for each frame in its own scale. Figure 9 shows the objective quality value on the traditional PSNR scale at three different compression levels (low, medium, and high compression) during a large packet loss burst. We observe the evolution of quality during the burst period. What the observer sees during this large burst is a frozen frame, with more or less quality depending on the compression level.
The PSNR metric reports that quality drops drastically at the first frame affected by the burst and decreases even more as the difference between the frozen frame and the current frame increases. Near the middle of the burst, an additional drop in quality can be observed.
It corresponds to a scene change at the beginning of a new cycle of the Foreman video sequence. At this point, the drastic scene change makes the differences between the sequences even larger, and the PSNR metric scores even worse values, as low as 10-12 dB (Figure 9 plots the PSNR frame values during a long packet loss burst at different bitrates). On the other hand, the change in perceived quality at these levels is quite difficult to evaluate, so a better perceptually designed QAM should not score such a quality drop in this situation, because quality saturates. When the burst ends, quality rapidly increases because of the arrival of packets belonging to the same frame number as the current one in the original sequence (see Figure 9).
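The per-frame PSNR curve discussed above follows directly from the standard definition, PSNR = 10·log10(MAX²/MSE) for 8-bit samples. A minimal sketch (flat lists of luma samples stand in for real frames):

```python
import math

def frame_psnr(ref, dist, max_val=255.0):
    """PSNR of one frame in dB: 10 * log10(MAX^2 / MSE).

    `ref` and `dist` are same-length flat lists of 8-bit samples;
    identical frames yield +inf. A frozen frame compared against a
    changing reference gives a growing MSE, hence the falling PSNR
    curve during a large burst."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, dist)) / len(ref)
    return float('inf') if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

print(frame_psnr([100, 120, 130], [100, 120, 130]))   # → inf
print(round(frame_psnr([100] * 4, [110] * 4), 2))     # → 28.13
```

This also makes the saturation argument concrete: once the frozen frame and the reference diverge strongly (e.g., after a scene change), the MSE explodes and PSNR falls toward 10-12 dB, even though the viewer's perception of the frozen frame no longer changes.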
If during such a burst a QAM took into account only the quality of the frozen frame, disregarding its differences with the original one (which changes over time), the effect of the burst would go unnoticed by that metric; that is, quality would remain constant. Figure 10 presents a panel for each compression level: Figure 10(a) corresponds to high compression, Figure 10(b) to medium compression, and Figure 10(c) to low compression. We observe some interesting behaviors that we proceed to analyze.
From a perceptual point of view, quality must drop to a minimum when one or more frames are lost completely and should remain that way until the data flow is recovered; it should not matter whether a scene change takes place inside the large burst. The drop of quality to the minimum at the beginning of the burst evidences the loss of whole frames. NR metrics do not detect the presence of a frozen frame by dropping the quality score as expected, because the quality given by these metrics remains at the level scored for the frozen frame during the whole burst.
So, NR metrics cannot detect the beginning of a large burst, since lost frames are replaced with the last correctly decoded frame (the frozen frame) and the reference frames are not available for comparison. However, NR metrics do detect the end of such bursts. Figure 11 helps to explain this behavior, showing how reconstruction is done after a large burst; it shows the impairments produced when the large burst ends. Figure 11(a) is the current frame, the one being transmitted. Figure 11(b) is the frozen frame that was repeated during the burst.
When the burst ends, the decoder progressively reconstructs the sequence using the intra macroblocks from the incoming video packets, partially updating the frozen frame with them. This is shown in Figures 11(c) and 11(d), where the face of the foreman appears gradually. Frame reconstruction after a large burst: (a) original frame, (b) last frozen frame, and (c), (d) first and second reconstructed frames after the burst.
The gradual reconstruction of the frame with the incoming macroblocks is interpreted in different ways by NR and FR metrics. When the macroblocks begin to arrive (see Figure 12), the NR metrics react by scoring down the quality, while the FR metrics begin to increase their quality score: exactly the opposite behavior.
For an NR metric, without a reference frame, Figure 11(c) clearly has worse quality than Figure 11(b). But for an FR metric, the matching macroblocks between Figures 11(c) and 11(a) help to increase the scored quality. End of the large burst for the low-compression panel: FR and NR metrics show opposite behaviors. So, NR metrics react only when the burst of lost packets affects frames partially, that is, for isolated bursts and at the end of a large burst. When the frame is fully reconstructed, the scores obtained with NR and FR metrics again approach the values achieved before the burst, which depend on the compression rate.
These variations become more evident as the degree of compression decreases.
The nature of the data sent through the ancillary channel, 18 scalar parameters obtained from the histograms of the wavelet subbands of the reference image, is very sensitive to a loss of synchronism between the reference frame and the frozen one. At the decoder, the same parameters are extracted and statistically compared with those received through the ancillary channel.
When this comparison is performed on two sets of parameters obtained from different frames, unexpected results appear. This behavior is the same regardless of the compression level inside the large burst. Figure 13 shows an isolated burst.
In this case, blur and edge-shifting impairments are introduced, altering only one frame, and the error concealment mechanism of the H. decoder masks them. Figure 14 shows the original frame (a) and three subsequent frames (b), (c), and (d), where the effect of the lost packets is concealed. Packet loss affecting only one frame. As defined previously, an isolated burst can affect one or two consecutive frames.
In the latter case, the behavior of the QAM when facing the isolated burst resembles the behavior of the metrics with a large burst. The difference is that the concealment mechanisms and the correct reception of part of the frames avoid the largest drop in quality. Figure 15 shows multiple consecutive bursts (large and isolated) that behave as exposed previously. From left to right, we see a large burst followed by an isolated one. This pattern repeats once more and, at the rightmost part of the figure, two large bursts occur consecutively, with a gap between them where new incoming packets arrive for a short period of time. In the gap, the decoder is not able to reconstruct a whole frame because the gap is too small; that is, between the two large bursts only a small number of packets arrive, which is not enough to reconstruct a whole frame.
So the involved frames are only partially reconstructed (Figures 17(b) and 17(c)).