Color and Geometry Texture Descriptors For Point-Cloud Quality Assessment

IEEE Signal Processing Letters, Vol. 28, 2021
Abstract—Point Clouds (PCs) have recently been adopted as the preferred data structure for representing 3D visual contents. Examples of Point Cloud (PC) applications range from 3D representations of small objects up to large scenes, both still and dynamic in time. PC adoption triggered the development of new coding, transmission, and display methodologies that culminated in new international standards for PC compression. Along with these, in the last couple of years, novel methods have been developed for evaluating the visual quality of PC contents. This paper presents a new objective full-reference visual quality assessment metric for static PC contents, named BitDance, which uses color and geometry texture descriptors. The proposed method first extracts the statistics of the color and geometry information of the reference and test PCs. Then, it compares the color and geometry statistics and combines them to estimate the perceived quality of the test PC. Using publicly available PC quality assessment datasets, we show that the proposed PC quality assessment metric performs very well when compared to state-of-the-art quality metrics. In particular, the method performs well for different types of PC datasets, including the ones where geometry and color are not degraded with similar intensities. BitDance is a low-complexity algorithm, with an optimized C++ source code that is available for download at github.com/rafael2k/bitdance-pc_metric.

Index Terms—Quality assessment, point clouds, color texture analysis, geometric texture analysis.

I. INTRODUCTION

Recent technology advancements have driven the production of plenoptic devices that capture and display visual contents, not only as texture information (as in 2D images) but also as 3D texture-geometric information. These devices represent the visual information using an approximation of the plenoptic illumination function, which can describe visible objects from any point in the 3D space [1]. Depending on the capturing device, this approximation can correspond to holograms, light fields, or PC imaging formats. Among these, PC formats have recently become one of the first choices to represent still and dynamic 3D visual contents. These formats consist of a collection of points in a 3D space, with their corresponding position and visual attribute information. To accurately describe a 3D scene, PCs require a large number of points, which limits their use in current multimedia applications. As a consequence, new technologies are being developed to capture, process, transmit, and display this type of media. For example, MPEG Immersive Media (MPEG-I) presented two standards for PC coding. One of them is V-PCC (ISO/IEC 23090-5), which relies on traditional video encoding techniques, and the other is G-PCC (ISO/IEC 23090-9), which encodes the geometry and color information as separate entities [2].

The development of coding algorithms and transmission protocols for PC contents has triggered the development of quality assessment methods specifically designed for PC contents. Some subjective quality experiments have been performed with the goal of understanding how humans perceive immersive media in 6 Degree-of-Freedom (6DoF) environments and what the impacts of different rendering and compression techniques are on the perceived visual quality [3]. Following these studies, Point Cloud Quality Assessment (PCQA) objective metrics based on point distance measurements have been proposed [4]. These point-based PCQA metrics can be divided into the following types: Point-to-Point (Po2Point), Point-to-Plane (Po2Plane), Point-to-Surface (Po2Surface), and Plane-to-Plane (Pl2Plane). These metrics establish correspondences between the reference and (possibly) degraded PCs and measure the distances between the corresponding points/surfaces/planes to estimate the PC quality. In the case of Pl2Plane metrics [5], the angular similarity between the corresponding tangent planes of the reference and distorted PCs is computed to quantify their quality differences. Point-based PCQA metrics are also known as MPEG metrics because MPEG has made their reference implementation available [6]. Until recently, point-based metrics were considered the state-of-the-art in PCQA [7].

More recently, different PCQA metric approaches have been proposed. Javaheri et al. [8], [9] proposed metrics based on the Hausdorff and Mahalanobis distances. Viola et al. proposed a metric that combines color and geometry information to obtain a global quality score; their metric takes the color statistics into account by analyzing color histograms and correlograms [10]. Meynet et al. [11] proposed a metric that also takes geometry and color features into consideration, using a logistic regression function to combine these features and produce a quality estimate. Alexiou et al. [12] also proposed a PCQA metric that extracts local color and geometry features. Yang et al. [13] use graph-based relations among points in the PC to estimate quality. Other works include 2D projection-based approaches [14] and machine learning-based approaches [15].

Manuscript received March 11, 2021; revised May 29, 2021; accepted June 3, 2021. Date of publication June 9, 2021; date of current version June 17, 2021. This work was supported by FAP-DF, CAPES, CNPq and UnB. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Saurabh Prasad. (Corresponding author: Rafael Diniz.)

Rafael Diniz and Pedro Garcia Freitas are with the Department of Computer Science, University of Brasília, Brasília 70910-900, Brazil (e-mail: [email protected]; [email protected]).

Mylène C. Q. Farias is with the Department of Electrical Engineering, University of Brasília, Brasília 70853-060, Brazil (e-mail: [email protected]).

Digital Object Identifier 10.1109/LSP.2021.3088059
Finally, our previous work used color-based descriptors to estimate PC quality [16]–[18].

In this paper, we target the issue of estimating the quality of PCs degraded by generic distortion types by proposing new texture descriptors that are able to better capture color and geometry distortions. The main contribution of this work is the design of a full-reference PCQA metric that independently extracts geometry and color PC features with these proposed descriptors. The extracted color and geometry features of the reference and distorted PCs are then compared to estimate the perceived quality. To test the proposed metric, we used four recent public PC quality datasets. Therefore, another contribution of this work is the performance evaluation not only of the proposed PCQA metric but also of other PCQA metrics, carried out over a much more diverse set of PC contents and degradations than what is currently found in the literature. The C++ source code of BitDance is available for download at github.com/rafael2k/bitdance-pc_metric.

II. PROPOSED METHOD

Considering that PCs have at least two types of information (color and geometry) per point, the main idea of the proposed PCQA method, BitDance, is to use color and geometry descriptors to independently extract PC features from the reference and test PCs. Fig. 1 shows the block diagram of the proposed method, which is divided into the following stages: (1) color feature extraction, (2) geometry feature extraction, (3) computation of feature map histograms and distances, and (4) quality model. In this section, we describe each of the stages depicted in this figure.

A. Color Feature Extraction

The color feature extraction stage includes voxelization and descriptor application steps. Voxelization is the process of spatially discretizing the original PC points from a continuous 3D space to a discrete 3D space. The elements in the discrete 3D space grid are known as voxels, which can be either 'empty' or 'occupied' by a color value. The voxel size (VS) is obtained by computing the cube edge size (ES):

ES = k \cdot \frac{1}{S} \sum_{n=1}^{S} \frac{1}{k_{nn}} \sum_{i=1}^{k_{nn}} d\left(N_i(P_n), P_n\right), \qquad (1)

where S is the number of points of the PC, k is a constant that can take different values (an ES multiplier), P_n is the n-th point of the PC, N_i(P_n) are the coordinates of the i-th point nearest to P_n, and k_nn is the total number of nearest neighbors. The function d(P_a, P_b) computes the Euclidean distance between points P_a and P_b.

In the voxelization step, eq. (1) is used to extract the PC geometry parameters and find the voxel size. As discussed in previous works [17], [18], considering that the PC rendering process performs some kind of voxelization, this voxelization step may affect the color texture descriptor and, consequently, the metric performance. Therefore, to choose the best voxelization parameters for the proposed metric, we tested different neighborhood sizes (k_nn) and multiplier values (k), and opted for the combination that produced the best and most reliable results, which is k = 6.0 and k_nn = 8. A previous analysis of the influence of voxelization parameters on color-based PC descriptors was performed in [17].

After the voxelization step, the color texture descriptor is applied to the voxelized PC. The proposed color descriptor takes into consideration the perceptual color differences between the voxel point and its neighbors. For this, we use the CIEDE2000 (CIELAB ΔE 2000) [19] color distance metric, which is more advanced than its predecessor color-difference metrics CIELAB ΔE*ab and CIE94, providing perceptually uniform color distances. For each voxel P_n, we compute the CIEDE2000 distances between this voxel and each of its N nearest neighbor voxels P_i. Then, based on these distances, we compute a label L of B bits for each PC voxel.

The label L is calculated by computing, for all N neighbors of the voxel P_n, its CIEDE2000 distance C[i] to each i-th neighbor P_i (1 ≤ i ≤ N). Initially, we set L equal to zero. Then, for each of the N neighbors, the following equation is applied iteratively:

L = \begin{cases} L \vee \left(1 \ll \left\lfloor \frac{C[i]-2.5}{2.5} \right\rfloor\right), & \text{if } 2.5 \le C[i] < 20.0;\\ L \vee (1 \ll 7), & \text{if } C[i] \ge 20, \end{cases} \qquad (2)

where the symbol ∨ is a bitwise OR and ≪ is a bitwise left shift. After all neighbors are analyzed, a final 8-bit binary label L is obtained.

This process generates binary frequency values for the color distance intervals, which indicate whether there is at least one neighboring voxel at a given distance interval.
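To make the voxelization step concrete, the edge-size computation of eq. (1) can be sketched as below. This is an illustrative sketch, not the released BitDance implementation: it uses a brute-force neighbor search (a production implementation would use a spatial structure such as a k-d tree), and all function and variable names are ours.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

using Point = std::array<double, 3>;

// Euclidean distance d(Pa, Pb) between two 3D points.
double dist(const Point& a, const Point& b) {
    double s = 0.0;
    for (int d = 0; d < 3; ++d) s += (a[d] - b[d]) * (a[d] - b[d]);
    return std::sqrt(s);
}

// Edge size per eq. (1): k times the average, over all S points, of the
// mean Euclidean distance to each point's knn nearest neighbors.
double edge_size(const std::vector<Point>& pc, int knn = 8, double k = 6.0) {
    const std::size_t S = pc.size();
    double acc = 0.0;
    for (std::size_t n = 0; n < S; ++n) {
        // Brute-force neighbor search: collect distances to all other points
        // and keep the knn smallest ones.
        std::vector<double> d;
        d.reserve(S - 1);
        for (std::size_t i = 0; i < S; ++i)
            if (i != n) d.push_back(dist(pc[i], pc[n]));
        std::partial_sort(d.begin(), d.begin() + knn, d.end());
        double mean = 0.0;
        for (int i = 0; i < knn; ++i) mean += d[i];
        acc += mean / knn;
    }
    return k * acc / S;
}
```

With the paper's choices k = 6.0 and k_nn = 8, the returned ES is used to set the voxel size.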
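Assuming the CIEDE2000 distances C[i] from a voxel to its N neighbors have already been computed (e.g., by a color-difference library), the iterative label update of eq. (2) reduces to a few shifts and ORs. The floor on the shift amount is our reading of the printed equation, and the function name is ours.

```cpp
#include <cstdint>
#include <vector>

// Color texture label per eq. (2): each neighbor's CIEDE2000 distance C[i]
// to the central voxel sets one bit of an 8-bit label. Distances below 2.5
// set no bit (L starts at zero); distances >= 20 saturate into bit 7.
std::uint8_t color_label(const std::vector<double>& C) {
    std::uint8_t L = 0;
    for (double c : C) {
        if (c >= 20.0)
            L |= std::uint8_t(1u << 7);
        else if (c >= 2.5)
            // Shift amounts 0..6 for distances in [2.5, 20.0);
            // truncation equals floor here since the argument is non-negative.
            L |= std::uint8_t(1u << int((c - 2.5) / 2.5));
    }
    return L;
}
```

Applying this over the N = 12 neighbors of a voxel yields its final 8-bit color label.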
If the label L corresponding to a particular voxel is a small number, most neighboring voxels have a color similar to that of the central voxel. On the other hand, if L is a large number, there are neighboring voxels that generate large color differences.

In this work, we use N = 12 and B = 8 bits. It is worth pointing out that we tested different N (6 to 12) and B values (8, 12, and 16 bits), but the results of these tests are not presented here for lack of space. As mentioned earlier, a previous analysis of similar parameters was performed in [17]. We chose the best combination by testing the different parameters of the PC color-based distances averaged over different variations of geometry-based distances, as detailed next.

B. Geometry Feature Extractor

The goal of the geometry feature extractor is to extract information about the geometry of the PC points. This descriptor uses the PC normal vectors, which are vectors orthogonal to the local surface where the PC point is located. Since typical PC acquisition apparatus do not capture normal vectors, with only depth-plus-color information being generally available, the method has to compute the normal values from the eigenvectors of the local neighborhood 3D coordinates. To compute the normal vectors for each PC point, the method considers a local neighborhood with at most 16 points, which are located inside a radius of 6 times the average distance of the 8 closest neighbors. To overcome the fact that each PC point has 2 normal vectors that correctly represent a tangent plane normal, we oriented all PC normals towards the direction (0, 0, 1) and normalized the normal magnitudes to 1.

For each point P_n in a PC, we define the distance between P_n's normal and each of the N nearest neighbors P_i's normals as the distance between two 3D normal vectors:

G = \sqrt{\sum_{d=1}^{3} (v_{nd} - v_{id})^2},

where v_{nd} is the normalized normal vector of point P_n, v_{id} is the normal vector of a neighbor P_i, and d represents each of the 3 dimensions (x, y, z) of a normal vector. Considering that the normalized normals range from 0 to 1, the maximum possible distance between normals is 2.

After the normal distances are computed, we create a label of B bits for each point. We adopted B = 16 bits and N = 6 in this work, after tests with different values (as mentioned earlier). For a given point P_n, its label L is computed through the iteration of the distances G[i] to each i-th nearest neighbor of P_n, as follows:

L = \begin{cases} L \vee 1, & 0.05 \le G[i] < 0.10;\\ L \vee (1 \ll 1), & 0.10 \le G[i] < 0.175;\\ L \vee (1 \ll 2), & 0.175 \le G[i] < 0.275;\\ L \vee \left(1 \ll \left(\left\lfloor \frac{G[i]-0.275}{0.125} \right\rfloor + 3\right)\right), & 0.275 \le G[i] < 1.65;\\ L \vee (1 \ll 14), & 1.65 \le G[i] < 1.80;\\ L \vee (1 \ll 15), & 1.80 \le G[i] \le 2.0. \end{cases} \qquad (3)

C. Histogram Distance Measurement

As described in the previous sections, for each target point P_n of the PC we compute both color and geometry features, obtaining two labels for each point in a PC. After computing the color and geometry labels associated with all PC points, histograms of the labels are computed independently for color and geometry, as follows:

h = \{h[l_0], h[l_1], h[l_2], h[l_3], \cdots\}, \qquad (4)

where h[l_j] corresponds to the frequency of the label l_j, which is computed as follows:

h[l_j] = \sum_{n=0}^{S-1} \delta(L(P_n), l_j), \qquad (5)

where S is the number of PC points and δ is an impulse function. The histograms are calculated for the color and geometry features, both for the reference (CH_r and GH_r) and degraded (CH_d and GH_d) PCs. Then, these histograms are compared using the Jensen-Shannon distance [20], obtaining separate color distance (C_JS) and geometry distance (G_JS) values. To obtain a single distortion measure D that represents both color and geometry PC degradations, we simply average these two distances, as shown in Fig. 1. This combined distance value represents how degraded PC_d is when compared to the reference PC_r.

D. Quality Regression Model

After obtaining the single distortion measure D by averaging the color and geometry histogram distances, we use a regression model to estimate the perceived quality. In quality assessment methodologies, a regression model is often used to adjust predictions to the subjective quality scores provided by the different quality datasets. In this work, we use least squares to fit the data to a logistic function. This function models how the human visual system perceives the different levels of distortion and, therefore, how the distance values are mapped into subjective quality scores [17].

III. EXPERIMENTAL SETUP

We used four datasets, with associated subjective scores, in our simulations [14], [21]–[23]. A description of the contents and distortions in these datasets follows.

• D1 (Torlig 2018 [14]): This dataset includes human bodies and inanimate objects. Distortions were produced using an octree-based codec, with color attributes encoded using JPEG at different quantizer levels.
• D2 (Alexiou 2019 [21]): This dataset contains objects, full bodies, and also a human head. The distortions were generated by the MPEG PC codecs, namely the video-based point cloud codec (V-PCC) and four variants of the geometry-based point cloud codec (G-PCC).
• D3 (Stuart 2020 [22]): This dataset contains human full bodies and upper bodies. Distortions were created with the MPEG encoders, using the V-PCC and G-PCC variants.
• D4 (Yang 2020 [23]): This dataset contains human full bodies, objects, and small scenes with many objects.
TABLE I
PERFORMANCE OF OUR METRIC PROPOSAL AND OTHER METRICS ON THE DIFFERENT DATASETS
Seven types of distortions were used: octree-based compression, color noise, downscaling, downscaling plus color noise, downscaling plus geometry Gaussian noise, geometry Gaussian noise, and color noise plus geometry Gaussian noise.

Notice that D2 and D3 contain only MPEG PC compression distortions, while D1 and D4 have a more diverse set of distortions, with D4 being the largest and most complete dataset in terms of different types of distortions.

For the performance analysis, we used the following PCQA metrics as benchmarks: the set of point-based MPEG metrics [24], PCQM [11], and PointSSIM [12]. The independent Y, Cb, and Cr color error components, used by some MPEG-proposed metrics, were combined using the function proposed by Ohm et al. [25]. Currently, PCQM and PointSSIM are considered state-of-the-art metrics. While the MPEG metrics reference implementation and PCQM provide single distance values between the reference and test PCs, PointSSIM provides independent distances according to the selected feature extractor (geometry or color). We compared the predicted scores with the subjective scores provided in the datasets using Spearman's Rank Correlation Coefficient (SROCC), Pearson's Correlation Coefficient (PCC), and the Root-Mean-Square Error (RMSE) as performance metrics.

IV. NUMERICAL RESULTS

Table I shows the PCC, SROCC, and RMSE results obtained for the proposed BitDance metric, the MPEG PCQA metrics [6], PCQM [11], and PointSSIM [12]. The best results are shown in bold, while the second-best results are shown in italics. In the case of dataset D1, which contains two types of distortions, BitDance and PCQM perform similarly, with BitDance having the best PCC and RMSE values and PCQM having the best SROCC value. For datasets D2 and D3, which contain only MPEG PC compression distortions, PointSSIM-Color and po2plane_MSE deliver the best PCC, respectively, while PointSSIM-Color and PCQM provide the best SROCC, respectively. For these two datasets, BitDance presents a competitive performance. In the case of dataset D4, by far the largest dataset with the most diverse types of distortions, BitDance outperforms all the other metrics.

The last column of Table I shows the average results. We can see that BitDance, PointSSIM, and PCQM are the three best-performing metrics. Among the MPEG PCQA metrics, po2plane_MSE is the best-performing one, which is in agreement with a recent study by Perry et al. [22]. BitDance provides the best PCC and PointSSIM-Color delivers the second best. PCQM has the best SROCC, with BitDance a close second. Finally, BitDance provides the lowest RMSE both on average and in most datasets.

V. CONCLUSION

In this paper, we presented a PCQA metric, named BitDance, that uses local and global statistics to assess PC quality. BitDance uses two new color and geometry texture descriptors. The statistics of the outputs of these descriptors are compared using the Jensen-Shannon distance and, then, the computed geometry and color distances are combined. Finally, a logistic function is fitted to these data to produce a quality estimate. One side effect of the way we calculate the statistics is that the geometry and color feature extractors are invariant to the PC local topology and, as a consequence, they are rotation- and scale-invariant.

BitDance was compared to other state-of-the-art PCQA metrics using four different PC quality datasets. Results showed that our strategy of using low-complexity bit-shift operations provided good and robust accuracy that outperformed all MPEG PCQA metrics and presented results similar to state-of-the-art PCQA metrics. More specifically, BitDance performed well on both types of datasets: those with only MPEG compression distortions and those with more generic distortions. The low complexity of BitDance is an important and appealing feature because PC contents carry large amounts of data and most multimedia applications cannot afford algorithms with a high computational cost. The C++ source code of BitDance is available for download at github.com/rafael2k/bitdance-pc_metric.
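As a concrete illustration, the histogram-and-comparison stage at the core of BitDance can be sketched as below. We assume 8-bit labels (as used for the color descriptor) and a base-2 logarithm in the Jensen-Shannon computation; the letter does not state the logarithm base, and all identifiers are ours.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Normalized 256-bin histogram of 8-bit labels, as in eqs. (4)-(5),
// divided by the number of points so it can be treated as a distribution.
std::vector<double> label_histogram(const std::vector<std::uint8_t>& labels) {
    std::vector<double> h(256, 0.0);
    for (auto l : labels) h[l] += 1.0;
    for (auto& v : h) v /= labels.size();
    return h;
}

// Jensen-Shannon distance (square root of the JS divergence) between two
// distributions of equal size; with log base 2 the result lies in [0, 1].
double js_distance(const std::vector<double>& p, const std::vector<double>& q) {
    double jsd = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i) {
        double m = 0.5 * (p[i] + q[i]);
        if (p[i] > 0.0) jsd += 0.5 * p[i] * std::log2(p[i] / m);
        if (q[i] > 0.0) jsd += 0.5 * q[i] * std::log2(q[i] / m);
    }
    return std::sqrt(jsd);
}
```

Averaging the color and geometry Jensen-Shannon distances obtained this way yields the combined distortion measure D.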
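The final logistic mapping from the distortion measure D to a quality estimate can likewise be sketched. The four-parameter logistic form and the parameter values below are hypothetical placeholders; in the letter, the parameters are obtained by least-squares fitting against each dataset's subjective scores.

```cpp
#include <cmath>

// Logistic mapping from the combined distance D to a predicted quality score.
// b1/b2 bound the score range, b4 locates the midpoint, and b3 sets the slope;
// these default values are illustrative, not the fitted ones from the paper.
double logistic_quality(double D, double b1 = 5.0, double b2 = 1.0,
                        double b3 = 10.0, double b4 = 0.5) {
    // Monotonically decreasing in D: a larger distance yields a lower score.
    return b2 + (b1 - b2) / (1.0 + std::exp(b3 * (D - b4)));
}
```

In practice the parameters are refit per dataset before computing PCC, SROCC, and RMSE, as is customary in quality-metric evaluation.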
REFERENCES

[1] E. H. Adelson et al., "The plenoptic function and the elements of early vision," in Computat. Models Visual Process. Cambridge, MA, USA: MIT Press, 1991, pp. 3–20.
[2] D. Graziosi, O. Nakagami, S. Kuma, A. Zaghetto, T. Suzuki, and A. Tabatabai, "An overview of ongoing point cloud compression standardization activities: Video-based (V-PCC) and geometry-based (G-PCC)," APSIPA Trans. Signal Inf. Process., vol. 9, 2020, Art. no. e13.
[3] L. Cruz et al., "Point cloud quality evaluation: Towards a definition for test conditions," in Proc. 11th Int. Conf. Qual. Multimedia Experience, 2019, pp. 1–6.
[4] F. Pereira, "Point cloud quality assessment: Reviewing objective metrics and subjective protocols," ISO/IEC JTC1/SC29/WG1 M78036, JPEG, 2018, pp. 1–8.
[5] E. Alexiou and T. Ebrahimi, "Point cloud quality assessment metric based on angular similarity," in Proc. IEEE Int. Conf. Multimedia Expo, 2018, pp. 1–6.
[6] D. Tian, H. Ochimizu, C. Feng, R. Cohen, and A. Vetro, "Updates and integration of evaluation metric software for PCC," ISO/IEC JTC1/SC29/WG11 input document MPEG2017 M40522, 2017.
[7] A. Javaheri, C. Brites, F. M. B. Pereira, and J. M. Ascenso, "Point cloud rendering after coding: Impacts on subjective and objective quality," IEEE Trans. Multimedia, to be published, doi: 10.1109/TMM.2020.3037481.
[8] A. Javaheri, C. Brites, F. Pereira, and J. Ascenso, "A generalized Hausdorff distance based quality metric for point cloud geometry," in Proc. 12th Int. Conf. Qual. Multimedia Experience, 2020, pp. 1–6.
[9] A. Javaheri, C. Brites, F. Pereira, and J. Ascenso, "Mahalanobis based point to distribution metric for point cloud geometry quality evaluation," IEEE Signal Process. Lett., vol. 27, pp. 1350–1354, 2020.
[10] I. Viola, S. Subramanyam, and P. Cesar, "A color-based objective quality metric for point cloud contents," in Proc. 12th Int. Conf. Qual. Multimedia Experience, 2020, pp. 1–6.
[11] G. Meynet, Y. Nehmé, J. Digne, and G. Lavoué, "PCQM: A full-reference quality metric for colored 3D point clouds," in Proc. 12th Int. Conf. Qual. Multimedia Experience, 2020, pp. 1–6.
[12] E. Alexiou and T. Ebrahimi, "Towards a point cloud structural similarity metric," in Proc. IEEE Int. Conf. Multimedia Expo Workshops, 2020, pp. 1–6.
[13] Q. Yang, Z. Ma, Y. Xu, Z. Li, and J. Sun, "Inferring point cloud quality via graph similarity," IEEE Trans. Pattern Anal. Mach. Intell., to be published, doi: 10.1109/TPAMI.2020.3047083.
[14] E. M. Torlig, E. Alexiou, T. A. Fonseca, R. L. de Queiroz, and T. Ebrahimi, "A novel methodology for quality assessment of voxelized point clouds," in Proc. Appl. Digital Image Process. XLI, vol. 10752, 2018, Art. no. 107520I.
[15] Y. Liu, Q. Yang, Y. Xu, and L. Yang, "Point cloud quality assessment: Large-scale dataset construction and learning-based no-reference approach," 2020, arXiv:2012.11895.
[16] R. Diniz, P. G. Freitas, and M. C. Farias, "Multi-distance point cloud quality assessment," in Proc. IEEE Int. Conf. Image Process., 2020, pp. 1–5.
[17] R. Diniz, P. G. Freitas, and M. C. Farias, "Local luminance patterns for point cloud quality assessment," in Proc. IEEE 22nd Int. Workshop Multimedia Signal Process., 2020, pp. 1–6.
[18] R. Diniz, P. G. Freitas, and M. C. Farias, "Towards a point cloud quality assessment model using local binary patterns," in Proc. 12th Int. Conf. Qual. Multimedia Experience, 2020, pp. 1–6.
[19] M. R. Luo, G. Cui, and B. Rigg, "The development of the CIE 2000 colour-difference formula: CIEDE2000," Color Res. Appl., vol. 26, no. 5, pp. 340–350, Oct. 2001.
[20] D.-D. Shi, D. Chen, and G.-J. Pan, "Characterization of network complexity by communicability sequence entropy and associated Jensen-Shannon divergence," Phys. Rev. E, vol. 101, no. 4, 2020, Art. no. 042305.
[21] E. Alexiou, I. Viola, T. M. Borges, T. A. Fonseca, R. L. de Queiroz, and T. Ebrahimi, "A comprehensive study of the rate-distortion performance in MPEG point cloud compression," APSIPA Trans. Signal Inf. Process., vol. 8, 2019, Art. no. e27.
[22] S. Perry et al., "Quality evaluation of static point clouds encoded using MPEG codecs," in Proc. IEEE Int. Conf. Image Process., 2020, pp. 3428–3432.
[23] Q. Yang, H. Chen, Z. Ma, Y. Xu, R. Tang, and J. Sun, "Predicting the perceptual quality of point cloud: A 3D-to-2D projection-based exploration," IEEE Trans. Multimedia, to be published, doi: 10.1109/TMM.2020.3033117.
[24] D. Flynn, R. Julien, D. Tian, R. Mekuria, C. Jean-Claude, and V. Valentin, "MPEG's PCC metric version 0.13.5," Mar. 2020. [Online]. Available: https://github.com/rafael2k/bitdance-pc_metric/tree/main/mpeg-pcc-dmetric-0.13.05
[25] J.-R. Ohm, G. J. Sullivan, H. Schwarz, T. K. Tan, and T. Wiegand, "Comparison of the coding efficiency of video coding standards - including high efficiency video coding (HEVC)," IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 12, pp. 1669–1684, Dec. 2012.