
2017 IEEE International Conference on Computer Vision Workshops

A Content-aware Metric for Stitched Panoramic Image Quality Assessment

Luyu Yang, Zhigang Tan, Zhe Huang          Gene Cheung
Kandao Technology                          National Institute of Informatics, Japan
[email protected]                           [email protected]

2473-9944/17 $31.00 © 2017 IEEE    DOI 10.1109/ICCVW.2017.293

Abstract

One key enabling component of the immersive VR visual experience is the construction of panoramic images, each stitched into one large wide-angle image from multiple smaller viewpoint images captured by different cameras. To better evaluate and design stitching algorithms, a lightweight yet accurate quality metric for stitched panoramic images is desirable. In this paper, we design a quality assessment metric specifically for stitched images, where ghosting and structure inconsistency are the most common visual distortions.

Specifically, to efficiently capture these distortion types, we fuse a perceptual geometric error metric and a local structure-guided metric into one. For the geometric error, we compute the local variance of optical flow field energy between the distorted and reference images. For the structure-guided metric, we compute the intensity and chrominance gradients in highly structured patches. The two metrics are content-adaptively combined based on the amount of image structure inherent in the 3D scene. Extensive experiments are conducted on our stitched image quality assessment (SIQA) dataset, which contains 408 groups of examples. Results show that the two metrics complement each other, and the fused metric achieves 94.36% precision against the mean subjective opinion. Our SIQA dataset is made publicly available as part of the submission.

1. Introduction

The recent rapid development of virtual reality (VR) technologies has led to new immersive visual experiences, rendered using head-mounted displays such as the Oculus Rift. Real-time reconstruction of panoramic images is one key enabling component, where multiple small viewpoint images captured by an arrangement of cameras on a rig are stitched together into one large wide-angle view [2, 24, 25, 11]. The stitching process can be broadly divided into two parts: i) geometric alignment, and ii) photometric correction. Geometric alignment rectifies the perspectives of the viewpoint images via homographic transformation [13], where the transform parameters (e.g., scaling, rotation, shearing) are computed by establishing correspondence between features in the two images' overlapping spatial regions. Errors in this stage are thus primarily caused by inaccuracy in the estimated homographic transform parameters, which results in the commonly observed ghosting and structure-inconsistency artifacts shown in Fig. 1.

Figure 1. Examples of typical distortions in stitched images. (a) textured scene with ghosting; (b) and (c) ghosted areas with varying intensity of distortion, with the distorted image in the red frame and the reference in green; (d) structured scene with shape breakage; (e) and (f) local areas with distorted structure.

Photometric correction targets errors due to heterogeneous imaging hardware or environmental conditions among the capturing cameras. Typical errors include vignetting and exposure unevenness, which can be removed effectively using a number of post-processing techniques in the literature, including [4, 8, 7]. We thus focus in this paper on distortions due to inaccurate estimation of homographic transform parameters.

Across the diverse stitching literature, many researchers assess stitched images either by comparing them subjectively [24, 14] or by using conventional image quality assessment (IQA) metrics [12, 1]. However, the problem of stitched image quality assessment (SIQA) differs from classical IQA in two main aspects. First, stitched image quality suffers severely from perspective, scaling and translation distortions, for which conventional IQA methods do not account. Second, instead of the globally diffused noise widely studied in previous IQA work, the quality of stitched images is more affected by local artifacts such as shape distortion and ghosting introduced by blending surrounding pixels.

Contributions: We propose to combine a perceptual geometric error metric and a local structure-guided IQA metric to form a new SIQA metric. To measure geometric errors, we compute the local variance of optical flow field energy between the distorted and reference images. To measure structure errors, we compute the intensity and chrominance gradients in highly structured patches. The two metrics are combined in a content-adaptive manner, where the amount of image structure is first estimated from the originally captured viewpoint images, as illustrated in Fig. 2. Experimental results show that the two metrics complement each other, and the fused metric achieves 94.36% precision against the mean subjective opinion. We also introduce a stitched image quality assessment (SIQA) dataset, which contains 408 groups of examples with perspective variations and is made publicly available as part of the submission.

The paper is organized as follows. Section 2 discusses previous work in stitched image quality assessment. Section 3 introduces our proposed metric. Experimental results are presented in Section 4, and Section 5 draws the conclusion.

2. Related Work

Compared with the rapid evolution of stitching algorithms over the last decade, the previous literature on SIQA is sparse and lags behind. Recent applications of stitching have also shifted its emphasis: with auto-adaptive cameras and freely assembled rigs now widespread, imaging conditions have largely improved, and photometric errors introduced at the hardware level have become less of a concern. Meanwhile, the demand for VR experiences increases the demand for high-quality, full-perspective panoramas at super resolution.

Stitching algorithm evaluations. For stitching algorithms, ghosting and structure-inconsistency artifacts that cause large perceived errors and visual discomfort are major challenges [21, 3]. To evaluate how effectively algorithms resolve such errors, many works directly compare the stitched images and judge them perceptually [24, 14]. Such illustration is straightforward but subjective, and in many cases the comparison is conducted on a limited number of examples, which makes the evaluation less convincing. Another approach is to apply classical IQA metrics to stitched images [1, 12], such as MSE (Mean Squared Error) [23], PSNR (Peak Signal-to-Noise Ratio) [17], SSIM (Structural Similarity index) [6] and VSI (Visual Saliency-Induced index) [26]. These are powerful metrics for conventional image quality evaluation and can effectively grade images degraded by global noise addition or various encoding methods, but they are not designed for the SIQA problem.

Previous SIQA metrics. Much of the previous SIQA work paid more attention to photometric error assessment [10, 13, 22] than to geometric errors. In [10] and [22], geometric error assessment is omitted and the metrics focus on color correction and intensity consistency. [13] tries to quantify the geometric error by computing the structural similarity (SSIM) of the high-frequency information of the stitched and unstitched image difference in the overlapping region. However, since the unstitched images used for testing are directly cropped from the reference and have no perspective variations, the effectiveness of the method is unproven. In [5] a full-perspective omni-directional camera system is considered, but the work pays more attention to assessing video consistency among subsequent frames and adopts only a luminance-based metric around the seam. In [16], the gradient of the intensity difference between the stitched and reference image is adopted to assess the geometric error; however, the experiments are conducted on a mere 6 stitched examples, with more experiments conducted on conventional IQA datasets, which avoids the important and dwells on the trivial.

IQA-related datasets. The absence of an SIQA dataset benchmark is further evidence that the problem is understudied, especially compared with the popularity of conventional IQA datasets such as the LIVE database [15] or JPEG 2000 [9].
Figure 2. The proposed procedure for stitched image quality assessment.

This situation is clearly a drawback for the development of stitching algorithms. Therefore, establishing a stitched image dataset of proper scale and formation is a necessary step.

3. Proposed Method

Perceptual geometric error metric. As mentioned earlier, miscalculated correspondence between the unstitched viewpoint images is a major source of distortion for stitched images, resulting in relative perspective, scale and translation errors. To estimate such errors, a perceptual geometric error metric is proposed. First, we establish a dense correspondence between the stitched and reference images to identify the transformation at the pixel level using optical flow. Given the diversity of existing stitching algorithms, the displacement between the stitched and reference images may vary across spatial dimensions. Thus large displacement optical flow (LDOF) [20] is adopted to calculate point correspondence. The dense flow field is then obtained as motion vectors at each pixel, which are later used to assemble the geometric error metric.

The magnitude of the flow field reflects the intensity of the geometric transformation from the stitched image to the reference image. However, what characterizes geometric distortions is the relative perspective, scale and translation variations, which are found in local patches. Hence, the variance of flow in an N-by-N local patch is adopted to describe the local geometric error. The error metric M_lp for each stitched image is then obtained by summing up the variance of local patches, as in Eq. (1):

    M_{lp} = \sum_{p=1}^{P} \left( \frac{1}{N^2 - 1} \sum_{i=1}^{N^2} |g_i - \mu_p|^2 \right)    (1)

where P is the number of patches, N is the patch size, i is the pixel index within each patch, g_i is the flow magnitude at pixel i, and \mu_p is the mean magnitude of patch p.

Although the distribution of geometric errors is characterized as random, how a human perceives the errors is strongly attention-based. For a stitched panorama with a broad view and rich information, human visual perception plays a more important role in how particular errors are displayed and evaluated than for normal-size images. To this end, a salient object detection model is applied to generate an attention-weighted map, characterized by S_p, for each reference image. Thus, the saliency-guided geometric error metric M_g is summarized in Eq. (2):

    M_g = \sum_{p=1}^{P} S_p \cdot \left( \frac{1}{N^2 - 1} \sum_{i=1}^{N^2} |g_i - \mu_p|^2 \right)    (2)

where S_p is the normalized saliency of the p-th patch.

Structure-guided metric. Beyond measuring miscalculated correspondence with the geometric error metric, shape and chrominance similarity are proven effective means of assessing noticeable structure distortions [16, 26]. Hence we customize a structure-guided metric for SIQA problems. First, we rectify the image perspective using the flow field obtained in the previous steps. Then, we detect and locate the structured areas as bounding boxes.
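The patch-variance computation of Eqs. (1)-(2) can be illustrated with a minimal NumPy sketch. The flow-magnitude map and saliency map would in practice come from an LDOF estimator and a salient-object detector; pooling S_p as the patch mean of the saliency map is our assumption, since the paper only states that the saliency is summed and normalized per patch.

```python
import numpy as np

def geometric_error(flow_mag, saliency, N=32):
    """Saliency-weighted local variance of flow magnitude, per Eqs. (1)-(2).

    flow_mag : 2-D array of optical-flow magnitudes g_i (stitched vs. reference).
    saliency : 2-D normalized saliency map of the reference image, same shape.
    N        : patch size (the paper uses N = 32 in its experiments).
    """
    H, W = flow_mag.shape
    m_lp, m_g = 0.0, 0.0
    for y in range(0, H - N + 1, N):
        for x in range(0, W - N + 1, N):
            patch = flow_mag[y:y + N, x:x + N]
            # unbiased variance of the N*N flow magnitudes in this patch
            var = np.sum(np.abs(patch - patch.mean()) ** 2) / (N * N - 1)
            m_lp += var                              # Eq. (1): plain sum
            s_p = saliency[y:y + N, x:x + N].mean()  # patch saliency weight (assumed pooling)
            m_g += s_p * var                         # Eq. (2): saliency-weighted sum
    return float(m_lp), float(m_g)
```

A perfectly aligned stitch (constant flow inside every patch) yields zero for both metrics, which matches the intent: only relative displacement within a patch, not global displacement, counts as geometric error.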
The visual saliency (VSI) method [26] is then applied to each bounding box. VSI is an effective metric combining visual saliency, edge similarity and chrominance consistency, which is in accordance with the desired measurement. Finally, we sum the index over the bounding boxes to form the metric.

We rectify the geometric differences by warping the stitched image to the reference image using the calculated LDOF field. The structured areas are located using the line segment detector (LSD) [19], and a bounding box is imposed around each line of sufficient length. Over all the bounding boxes representing structured areas, we sum the visual saliency score S_bbox to form the structure-guided metric M_s, as presented in Eq. (3):

    M_s = \sum_{b=1}^{B} S_{bbox}    (3)

where B is the number of detected bounding boxes in each stitched example.

Due to the diversity of content in stitched images, how structured the content is should also be considered. A scene with unregulated textures such as trees or clouds has quite different noticeable error types from a structured scene with walls and furniture. For instance, line breakage is a more noticeable error on the edge of a desk than on a flower, while ghosting is more salient on a flower, appearing as a "duplication" of the flower. As a result, it is necessary to first decide how structured a scene is before error quantification.

As discussed earlier, the geometric error metric quantifies misalignment, and hence is suitable for texture distortions such as ghosting. On the other hand, the structure-guided metric characterizes shape and color inconsistency. To combine them in a content-aware manner, we design an index that quantifies the "structureness" of a scene. In our work, a more structured scene is assumed to contain more long straight lines. The number, length and distribution of straight lines are integrated to form the structureness index. If a scene contains numerous long straight lines, the mean length \mu_l should be large. On the other hand, a large \mu_l could also indicate a scene with few but extra-long lines, so it is also necessary to divide \mu_l by the length variance \sigma. Lines are segmented using the LSD method and pooled into a 30-bin histogram according to their phase; the magnitude of each bin is computed by Eq. (4):

    B_{mag} = \exp\left( \frac{\mu_l}{\sigma} \sum_{q=1}^{Q} L_q / \gamma \right)    (4)

where Q is the number of lines and L_q is the length of the q-th line within the bin. \gamma is a rectification parameter used to convert the unnormalized value to unit range; in this paper we use one-tenth of the diagonal length of each stitched image. Here, bins with large magnitude are considered an effective representation of structure, so the structureness index \omega_{str} is described as follows:

    \omega_{str} = \sum_{i=1}^{B} B_{mag}^{(i)} + \sum_{i=1}^{B_{top}} B_{mag}^{(i)}    (5)

where B is the number of bins (30 in our experiments) and B_{top} is the number of bins with the top magnitudes (we adopt B_{top} = 5 in this paper), so that the strongest bins are counted twice. The structureness index is normalized to [0, 1] using the min-max method and then further rectified. Fig. 3 illustrates typical examples of computing structureness. Finally, the content-aware adaptive metric is composed as Eq. (6):

    M = \omega_{str} \cdot M_s + (1 - \omega_{str}) \cdot M_g    (6)

4. Experimentation

In this paper, we introduce a stitched image quality assessment benchmark called the SIQA dataset. Extensive experiments are conducted on it, including comparisons between our proposed metric and classical IQA metrics, validation of each metric component, and a contrast between the fixed-weight and content-aware adaptive combination mechanisms. To analyze the combined metric and how each component takes effect, we also study specific examples using each component alone. The results show the effectiveness of the proposed content-aware metric, which achieves 94.36% precision against the mean subjective opinion score (MOS).

4.1. SIQA Dataset Benchmark

The first version of our SIQA dataset is based on synthetic virtual scenes, since we aim to evaluate the proposed metric on various stitching algorithms under ideal photometric conditions. The images are obtained by building virtual scenes with the 3D modeling tool Unreal Engine. A synthesized 12-head panoramic camera is placed at multiple locations in each scene, covering a 360-degree surrounding view, with each camera having a 90-degree FOV (field of view). Exactly one image is taken by each of the 12 cameras at a given location simultaneously. Each camera view is used as a full reference for the stitched view of its left and right adjacent cameras, as demonstrated in Fig. 4.

The SIQA dataset uses twelve different 3D scenes, varying from wild landscapes to structured scenes. Two sets of stitched images are obtained with the popular off-the-shelf stitching tool Nuke under different parameter settings, yielding 816 stitched samples altogether; the original images are high-definition, 3k-by-2k in size. Annotations from 28 different viewers are then integrated.
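Given per-bin magnitudes from Eq. (4), the structureness pooling of Eq. (5) and the fusion of Eq. (6) reduce to a few lines. A minimal sketch follows; the min-max bounds for normalization are passed in as parameters here, on the assumption that they are computed over the whole dataset.

```python
def structureness_index(bin_mags, top=5):
    """Eq. (5): total bin magnitude, with the top-magnitude bins counted again."""
    mags = sorted(bin_mags, reverse=True)
    return sum(mags) + sum(mags[:top])

def normalize(w, w_min, w_max):
    """Min-max normalization of the structureness index to [0, 1].

    w_min / w_max are assumed to be gathered over the whole dataset.
    """
    if w_max == w_min:
        return 0.0
    return min(max((w - w_min) / (w_max - w_min), 0.0), 1.0)

def fuse(m_s, m_g, w_str):
    """Eq. (6): content-aware combination of the two component metrics."""
    return w_str * m_s + (1.0 - w_str) * m_g
```

Counting the top bins twice biases the index toward scenes whose lines concentrate in a few dominant orientations, which is exactly the behavior the text motivates for structured indoor scenes.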
Figure 4. The 12-head panoramic camera placed in a virtual scene built with the Unreal Engine, and the formation of stitched/reference image pairs for the SIQA dataset.

To decide which of the two stitched images in each group is better, more than 10,000 decisions are combined into a mean subjective opinion (MOS), which we later use as the ground truth.

The dataset is properly constructed both in formation and in scale, and to the best of our knowledge it is also the first stitched image dataset that considers perspective variations. The images from each perspective are provided separately. The use of a 12-head rig leads to closely spaced cameras, with larger overlap between the captures of adjacent cameras and smaller overlap between non-adjacent ones, providing an option with respect to overlap size.

4.2. Experimental Results

We conducted three main groups of experiments on the SIQA dataset: first, comparing our proposed metric with classical IQA metrics; second, comparing with state-of-the-art SIQA methods; and third, evaluating the effectiveness of each metric component and validating the combination mechanism.

    Metric      Precision with MOS    RMSE
    VSI         0.8701                0.3604
    SSIM        0.8162                0.4287
    FSIM        0.8162                0.4287
    GSM         0.8407                0.3991
    SR-SIM      0.8333                0.4082
    RF-SIM      0.6691                0.5752
    Proposed    0.9436                0.2374

Table 1. Comparison with classical IQA metrics; the best results for the classical IQA metrics and for all evaluated metrics are highlighted in bold text.

Figure 3. Examples of computing the structureness index \omega_{str}. (a) a natural textured scene with relatively little structure; (b) a natural scene with more structure; (c) an outdoor structured scene; (d) a structured indoor scene with a high structureness index.

Six widely adopted IQA metrics are compared against the proposed metric, as illustrated in Tab. 1. Evaluated as a single metric, VSI performs best among them, yet the overall precision is unsatisfying since these metrics are not designed for the stitched image evaluation problem. The state-of-the-art SIQA methods are also compared, as illustrated in Tab. 2. The proposed method achieves the highest precision with MOS and the lowest root-mean-square error (RMSE).

For saliency detection, we adopted a Minimum-Spanning-Tree-based (MST) method [18].
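The stitched/reference pairing of Fig. 4 can be sketched as a ring: for every camera i, the stitch of its two neighbors is judged against camera i itself as the full reference. The indexing convention below is our assumption; the paper only describes the pairing informally.

```python
def reference_pairs(n_cams=12):
    """For a ring of n_cams cameras, pair each stitched view of two adjacent
    cameras with the in-between camera used as its full reference (cf. Fig. 4)."""
    pairs = []
    for i in range(n_cams):
        left, right = (i - 1) % n_cams, (i + 1) % n_cams
        # the stitch of (left, right) is evaluated against reference camera i
        pairs.append(((left, right), i))
    return pairs
```

With 12 cameras per location, each location thus yields 12 stitched/reference pairs, one per reference view.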
    Metric                Precision with MOS    RMSE
    Qureshi et al. [13]   0.5343                0.6824
    Solh et al. [16]      0.8554                0.3803
    Proposed              0.9436                0.2374

Table 2. Comparison with state-of-the-art SIQA metrics; the best results are highlighted in bold text.

    Metric    Without    Quarter-size    Half-size    Original size
    M_g       0.7034     0.7696          0.7770       0.7868

Table 3. The fineness of the saliency map applied to M_g and the correspondingly achieved precision.

    Metric component             Precision with MOS
    M_g                          0.7868
    M_s                          0.9167
    Fixed combination            0.9216
    Content-aware combination    0.9436

Table 4. Evaluation of the individual metric components.

This method is both effective and essentially real-time. As mentioned in the previous section, the calculated saliency magnitude is summed and normalized in local patches of size N = 32 in Eq. (1), and then used as the perceptual weight. As much previous work suggests, saliency guidance plays a positive but non-dominant role in IQA-related problems. We used the default parameters suggested by the author. Meanwhile, it is observed that precision is positively correlated with the fineness of the saliency map, as illustrated in Tab. 3.

The structure-guided metric is obtained by computing the intensity and chrominance gradients around local structured patches. As mentioned earlier, the structured areas are located by line detection using the LSD method. Finally, VSI is computed in each bounding box imposed around each reserved line. Fig. 5 illustrates a typical example of this process.

Figure 5. An example of structure guidance for computing local image quality assessment. (a) the image after LSD detection, with the red lines showing the detection results; (b) the result after trivial structures are removed; (c) the image with bounding boxes of structured areas, highlighted in yellow; (d) magnified examples of bounding boxes.

In the previous section, we proposed to adaptively combine the geometric error metric and the structure-guided metric according to scene structureness. To validate this idea, contrast experiments are conducted: using the geometric error metric and the structure-guided metric alone, combining them with a fixed-weight mechanism, and using the content-adaptive combination. The fixed weights adopted in this experiment are 0.5 and 0.5. As illustrated in Tab. 4, the results show that combining the two components improves the precision of the assessment, and the best result is achieved using the content-aware adaptive combination, hitting 94.36% precision with the MOS.

Though these comparisons clearly reveal the effectiveness of the proposed method, we still need to validate that the two components are practically complementary to each other. To this end, a close observation is conducted on the examples for which one component works but the other fails. Some of these examples are illustrated in Fig. 6.
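The "precision with MOS" figures in Tabs. 1-4 are consistent with a pairwise reading: for each group, a metric is counted as correct when it ranks the two stitched images the same way as the majority subjective vote. The exact evaluation protocol is our assumption; the sketch below illustrates that reading.

```python
def pairwise_precision(metric_scores, mos_prefs):
    """Fraction of image pairs where the metric agrees with the MOS.

    metric_scores : list of (score_img1, score_img2); lower is assumed
                    better, since both component metrics measure error.
    mos_prefs     : list of 0/1, index of the image the viewers preferred.
    """
    agree = 0
    for (s1, s2), pref in zip(metric_scores, mos_prefs):
        predicted = 0 if s1 < s2 else 1   # metric's preferred image
        agree += int(predicted == pref)
    return agree / len(mos_prefs)
```

Under this reading, 94.36% precision on 408 groups means the fused metric agrees with the subjective majority on roughly 385 of the 408 pairs.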
Figure 6. Examples of the two metric components complementing each other. (a) an example where the geometric error metric scores stitched image 1 higher, while the local structure-guided metric scores image 1 lower; (b) an example where the structure-guided metric scores image 1 higher but the geometric error metric does the opposite.

We observe that in unstructured scenes like (a), when two stitched images have very similar structure and even similar distortions, the attention-based IQA metric fails while the geometric error metric correctly scores image 1 higher, since the geometric distance error between image 1 and the reference is relatively smaller. In structured scenes like (b), where diverse edge breakage and shape distortion exist, the geometric error metric fails to evaluate the differences while the structure-guided metric successfully captures the distorted areas, thus providing better decisions. Observations across such examples support our earlier conception that the two components complement each other.

5. Conclusion

We propose a quality assessment metric specifically designed for stitched images. We first analyze the different error types typically encountered in image stitching, including how the errors are generated and rendered, and arrive at the most common visual distortions in SIQA: ghosting and structure inconsistency. To effectively characterize these distortion types, we propose to adaptively fuse a perceptual geometric error metric and a structure-guided metric.

To capture perceptual ghosting, which is mostly caused by geometric misalignment, we compute the local variance of optical flow field energy between the distorted and reference images, guided by detected saliency. For structure inconsistency, a powerful intensity and chrominance gradient index, VSI, is adopted and customized around the highly structured areas of the stitched images. Based on an understanding of the different purposes of these two metrics, we propose a content-adaptive combination according to the specific scene structure. Experimental results show the effectiveness of our proposed metric and confirm the correctness of the combination mechanism. The metric can be used to optimize various stitching algorithms.

Extensive experiments are conducted using our SIQA dataset, which we introduce as a benchmark for SIQA problems. The large-scale dataset is laboriously constructed and is made publicly available to researchers in the VR community for further research.

References

[1] E. Adel, M. Elmogy, and H. Elbakry. Image stitching based on feature extraction techniques: a survey. International Journal of Computer Applications (0975-8887), 2014.
[2] M. Brown and D. G. Lowe. Automatic panoramic image stitching using invariant features. International Journal of Computer Vision, 74(1):59–73, 2007.
[3] C.-H. Chang, Y. Sato, and Y.-Y. Chuang. Shape-preserving half-projective warps for image stitching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3254–3261, 2014.
[4] M. Harville, B. Culbertson, I. Sobel, D. Gelb, A. Fitzhugh, and D. Tanguay. Practical methods for geometric and photometric correction of tiled projector. In Computer Vision and Pattern Recognition Workshop (CVPRW'06), pages 5–5. IEEE, 2006.
[5] S. Leorin, L. Lucchese, and R. G. Cutler. Quality assessment of panorama video for videoconferencing applications. In Multimedia Signal Processing, 2005 IEEE 7th Workshop on, pages 1–4. IEEE, 2005.
[6] L. Liu, H. Dong, H. Huang, and A. C. Bovik. No-reference image quality assessment in curvelet domain. Signal Processing: Image Communication, 29(4):494–505, 2014.
[7] Y. Liu and B. Zhang. Photometric alignment for surround view camera system. In Image Processing (ICIP), 2014 IEEE International Conference on, pages 1827–1831. IEEE, 2014.
[8] S. Lu and C. L. Tan. Thresholding of badly illuminated document images through photometric correction. In Proceedings of the 2007 ACM Symposium on Document Engineering, pages 3–8. ACM, 2007.
[9] A. K. Moorthy and A. C. Bovik. Blind image quality assessment: From natural scene statistics to perceptual quality.
IEEE Transactions on Image Processing, 20(12):3350–3364, 2011.
[10] P. Paalanen, J.-K. Kämäräinen, and H. Kälviäinen. Image based quantitative mosaic evaluation with artificial video. In Scandinavian Conference on Image Analysis, pages 470–479. Springer, 2009.
[11] F. Perazzi, A. Sorkine-Hornung, H. Zimmer, P. Kaufmann, O. Wang, S. Watson, and M. Gross. Panoramic video from unstructured camera arrays. In Computer Graphics Forum, volume 34, pages 57–68. Wiley Online Library, 2015.
[12] Y. Qian, D. Liao, and J. Zhou. Manifold alignment based color transfer for multiview image stitching. In Image Processing (ICIP), 2013 20th IEEE International Conference on, pages 1341–1345. IEEE, 2013.
[13] H. Qureshi, M. Khan, R. Hafiz, Y. Cho, and J. Cha. Quantitative quality assessment of stitched panoramic images. IET Image Processing, 6(9):1348–1358, 2012.
[14] C. Richardt, Y. Pritch, H. Zimmer, and A. Sorkine-Hornung. Megastereo: Constructing high-resolution stereo panoramas. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1256–1263, 2013.
[15] H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik. LIVE image quality assessment database release 2. 2005.
[16] M. Solh and G. AlRegib. MIQM: A novel multi-view images quality measure. In Quality of Multimedia Experience (QoMEX 2009), International Workshop on, pages 186–191. IEEE, 2009.
[17] A. Tanchenko. Visual-PSNR measure of image quality. Journal of Visual Communication and Image Representation, 25(5):874–878, 2014.
[18] W.-C. Tu, S. He, Q. Yang, and S.-Y. Chien. Real-time salient object detection with a minimum spanning tree. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2334–2342, 2016.
[19] R. G. von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall. LSD: A fast line segment detector with a false detection control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(4):722–732, 2010.
[20] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid. DeepFlow: Large displacement optical flow with deep matching. In Proceedings of the IEEE International Conference on Computer Vision, pages 1385–1392, 2013.
[21] T. Xiang, G.-S. Xia, and L. Zhang. Image stitching with perspective-preserving warping. arXiv preprint arXiv:1605.05019, 2016.
[22] W. Xu and J. Mulligan. Performance evaluation of color correction approaches for automatic multi-view image and video stitching. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 263–270. IEEE, 2010.
[23] W. Xue, L. Zhang, X. Mou, and A. C. Bovik. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Transactions on Image Processing, 23(2):684–695, 2014.
[24] J. Zaragoza, T.-J. Chin, M. S. Brown, and D. Suter. As-projective-as-possible image stitching with moving DLT. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2339–2346, 2013.
[25] F. Zhang and F. Liu. Parallax-tolerant image stitching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3262–3269, 2014.
[26] L. Zhang, Y. Shen, and H. Li. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Transactions on Image Processing, 23(10):4270–4281, 2014.
