ZTE Communications, 2023, Vol. 21, Issue (4): 3-16. DOI: 10.12142/ZTECOM.202304002
• Special Topic •
Perceptual Quality Assessment for Point Clouds: A Survey
ZHOU Yingjie, ZHANG Zicheng, SUN Wei, MIN Xiongkuo, ZHAI Guangtao
Received: 2023-10-07
Online: 2023-12-07
Published: 2023-12-07
About author:
ZHOU Yingjie (zyj2000@sjtu.edu.cn) received his BE degree in electronics and information engineering from China University of Mining and Technology in 2023. He is currently pursuing a PhD degree at the Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, China. His current research interests include 3D quality assessment and virtual digital humans.

Cite this article: ZHOU Yingjie, ZHANG Zicheng, SUN Wei, MIN Xiongkuo, ZHAI Guangtao. Perceptual Quality Assessment for Point Clouds: A Survey [J]. ZTE Communications, 2023, 21(4): 3-16.
URL: https://zte.magtechjournal.com/EN/10.12142/ZTECOM.202304002
Related Work | Display | Interaction | Methodology |
---|---|---|---|
Work of ALEXIOU et al.[ | 2D monitor | × | DSIS |
Work of ALEXIOU and EBRAHIMI[ | 2D monitor | × | DSIS |
Work of JAVAHERI et al.[ | 2D monitor | × | DSIS |
Work of JAVAHERI et al.[ | 2D monitor | × | DSIS |
Work of JAVAHERI et al.[ | 2D monitor | × | DSIS |
Work of DA SILVA CRUZ et al.[ | 2D monitor | × | DSIS |
Work of SU et al.[ | 2D monitor | × | DSIS |
IRPC[ | 2D monitor | × | DSIS |
WPC[ | 2D monitor | × | DSIS |
SJTU-PCQA[ | 2D monitor | × | ACR |
VsenseVVDB2[ | 2D monitor | × | ACR |
Work of CAO et al.[ | 2D monitor | × | ACR |
Work of ALEXIOU and EBRAHIMI[ | 2D monitor | × | DSIS, ACR |
VsenseVVDB[ | 2D monitor | × | DSIS, PWC |
Work of ZHANG et al.[ | 2D monitor | × | - |
Work of ALEXIOU et al.[ | 2D monitor | √ | DSIS |
Work of ALEXIOU et al.[ | 2D monitor | √ | DSIS |
LS-PCQA[ | 2D monitor | √ | DSIS |
Work of TORLIG et al.[ | 2D monitor | √ | DSIS |
M-PCCD[ | 2D monitor | √ | DSIS |
Work of ALEXIOU et al.[ | 2D monitor | √ | DSIS, ACR |
Work of ALEXIOU et al.[ | 2D monitor | √ | DSIS, ACR |
Work of VIOLA et al.[ | 2D monitor | - | DSIS |
NBU-PCD 1.0[ | 2D monitor | - | - |
ICIP2020[ | 2D/3D monitor | × | DSIS |
RG-PCD[ | 2D/3D monitor | × | DSIS |
Work of ALEXIOU et al.[ | AR | √ | DSIS |
Work of NEHMÉ et al.[ | HMD | × | DSIS, ACR |
PointXR[ | HMD | √ | DSIS |
SIAT-PCQD[ | HMD | √ | DSIS |
Work of SUBRAMANYAM et al.[ | HMD | √ | ACR |
Work of GUTIÉRREZ et al.[ | HMD | √ | ACR |
Table 1 Summary of the experimental setups for subjective point cloud quality assessment
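Both the DSIS and ACR protocols summarized in Table 1 produce per-stimulus ratings from a subject panel that are subsequently averaged into mean opinion scores (MOS). As a minimal illustration of that aggregation step (not taken from any of the cited studies), the Python sketch below computes MOS and a normal-approximation 95% confidence interval; the panel size, 5-point scale, and example scores are purely hypothetical, and formal experiments additionally apply BT.500-style subject screening.

```python
import numpy as np

def mos_with_ci(ratings):
    """MOS and 95% confidence interval per stimulus.

    ratings: array of shape (num_subjects, num_stimuli), e.g. ACR scores in [1, 5]
             or DSIS impairment scores.
    """
    ratings = np.asarray(ratings, dtype=float)
    mos = ratings.mean(axis=0)
    # Standard error of the mean across subjects; 1.96 is the normal-approximation
    # 95% interval (small panels would use Student's t instead).
    sem = ratings.std(axis=0, ddof=1) / np.sqrt(ratings.shape[0])
    return mos, 1.96 * sem

# Hypothetical panel: 3 subjects rating 4 stimuli on a 5-point ACR scale.
scores = [[5, 4, 2, 1],
          [4, 4, 3, 2],
          [5, 3, 2, 1]]
print(mos_with_ci(scores))
```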
Database | Year | Attribute | Models | Distortion Type |
---|---|---|---|---|
G-PCD[ | 2017 | Colorless | 40 | Octree, Gaussian noise |
RG-PCD[ | 2018 | Colorless | 24 | Octree |
VsenseVVDB[ | 2019 | Colored | 32 | VPCC |
M-PCCD[ | 2019 | Colored | 244 | GPCC, VPCC |
IRPC[ | 2020 | Colorless | 54 & 54 | GPCC, VPCC |
ICIP2020[ | 2020 | Colored | 90 | GPCC, VPCC |
PointXR[ | 2020 | Colored | 100 | GPCC |
NBU-PCD 1.0[ | 2020 | Colored | 160 | Octree |
VsenseVVDB2[ | 2020 | Colored | 164 | Draco+JPEG, GPCC, VPCC |
SJTU-PCQA[ | 2020 | Colored | 420 | Octree, downsampling, color and geometry noise |
SIAT-PCQD[ | 2021 | Colored | 340 | VPCC |
CPCD 2.0[ | 2021 | Colored | 360 | GPCC, VPCC, Gaussian noise |
WPC[ | 2021 | Colored | 740 | Gaussian noise, downsampling, GPCC, VPCC |
WPC2.0[ | 2021 | Colored | 400 | VPCC |
WPC3.0[ | 2022 | Colored | 350 | VPCC |
LS-PCQA[ | 2023 | Colored | 1 080 | Color and geometry noise, downsampling, GPCC, VPCC, etc. |
BASICS[ | 2023 | Colored | 1 494 | VPCC, GPCC, GeoCNN[ |
Table 2 An overview of subjective PCQA databases
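The databases in Table 2 are built by degrading pristine reference clouds with the distortion types listed in the last column. Purely as an illustration (and under the assumption that the cloud is stored as an N×3 coordinate array), the sketch below applies two of the simpler distortions, Gaussian geometry noise and random downsampling; the noise level and keep ratio are arbitrary example values, and codec distortions such as G-PCC and V-PCC instead require the MPEG reference encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_geometry_noise(points, sigma):
    """Perturb xyz coordinates with zero-mean Gaussian noise (geometry distortion)."""
    return points + rng.normal(0.0, sigma, size=points.shape)

def random_downsample(points, keep_ratio):
    """Randomly drop points to simulate a downsampling distortion."""
    n_keep = int(len(points) * keep_ratio)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

# Hypothetical reference cloud: 10 000 points in a unit cube.
ref = rng.random((10_000, 3))
noisy = add_gaussian_geometry_noise(ref, sigma=0.005)
sparse = random_downsample(ref, keep_ratio=0.5)
```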
Method | Reference Type | Feature Extraction | Handcrafted/Deep Learning |
---|---|---|---|
p2point[ | FR | Model-based | Handcrafted |
p2plane[ | FR | Model-based | Handcrafted |
p2mesh[ | FR | Model-based | Handcrafted |
Plane to plane[ | FR | Model-based | Handcrafted |
PointSSIM[ | FR | Model-based | Handcrafted |
GraphSIM[ | FR | Model-based | Handcrafted |
MS-GraphSIM[ | FR | Model-based | Handcrafted |
PCQM[ | FR | Model-based | Handcrafted |
PC-MSDM[ | FR | Model-based | Handcrafted |
Proposed by VIOLA et al.[ | FR | Model-based | Handcrafted |
VQA-CPC[ | FR | Model-based | Handcrafted |
CPC-GSCT[ | FR | Model-based | Handcrafted |
Proposed by JAVAHERI et al.[ | FR | Model-based | Handcrafted |
Proposed by JAVAHERI et al.[ | FR | Model-based | Handcrafted |
Proposed by DINIZ et al.[ | FR | Model-based | Handcrafted |
Proposed by DINIZ et al.[ | FR | Model-based | Handcrafted |
Proposed by DINIZ et al.[ | FR | Model-based | Handcrafted |
Proposed by DINIZ et al.[ | FR | Model-based | Handcrafted |
Proposed by DINIZ et al.[ | FR | Model-based | Handcrafted |
EPES[ | FR | Model-based | Handcrafted |
PSNRyuv[ | FR | Projection-based | Handcrafted |
Proposed by WU et al.[ | FR | Projection-based | Handcrafted |
Proposed by HE et al.[ | FR | Projection-based | Handcrafted |
PB-PCQA[ | FR | Projection-based | Handcrafted |
TGP-PCQA[ | FR | Projection-based | Handcrafted |
Proposed by TU et al.[ | FR | Model & projection | Handcrafted |
PCMRR[ | RR | Model-based | Handcrafted |
R-PCQA[ | RR | Model-based | Handcrafted |
RR-CAP[ | RR | Projection-based | Handcrafted |
3D-NSS[ | NR | Model-based | Handcrafted |
StreamPCQ[ | NR | Model-based | Handcrafted |
Proposed by ZHOU et al.[ | NR | Model-based | Handcrafted |
ResSCNN[ | NR | Model-based | Deep learning |
PKT-PCQA[ | NR | Model-based | Deep learning |
Proposed by TU et al.[ | NR | Projection-based | Deep learning |
GPA-Net[ | NR | Projection-based | Deep learning |
PQA-Net[ | NR | Projection-based | Deep learning |
GMS-3DQA[ | NR | Projection-based | Deep learning |
D3-PCQA[ | NR | Projection-based | Deep learning |
PM-BVQA[ | NR | Projection-based | Deep learning |
IT-PCQA[ | NR | Projection-based | Deep learning |
3D-CNN-PCQA[ | NR | Projection-based | Deep learning |
VQA-PC[ | NR | Projection-based | Deep learning |
BQE-CVP[ | NR | Model & projection | Handcrafted |
MM-PCQA[ | NR | Model & projection | Deep learning |
Table 3 Summary of objective point cloud quality assessment methods
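Among the full-reference metrics in Table 3, p2point (with its MSE and Hausdorff variants) is the simplest: it pools nearest-neighbor geometric errors between the distorted and reference clouds. The sketch below, assuming N×3 coordinate arrays and using SciPy's k-d tree, only returns the raw symmetric errors; the MPEG evaluation pipeline additionally converts them into a PSNR with a peak value tied to the content's intrinsic resolution, and p2plane further projects each error onto the local surface normal.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_errors(src, dst):
    """Nearest-neighbor distance from every point in src to the cloud dst."""
    dists, _ = cKDTree(dst).query(src, k=1)
    return dists

def p2point_mse(a, b):
    """Symmetric point-to-point MSE: the worse of the two directional mean errors."""
    return max(np.mean(nn_errors(a, b) ** 2), np.mean(nn_errors(b, a) ** 2))

def p2point_hausdorff(a, b):
    """Symmetric point-to-point Hausdorff distance (maximum nearest-neighbor error)."""
    return max(nn_errors(a, b).max(), nn_errors(b, a).max())

# Hypothetical clouds: a reference and a jittered copy.
ref = np.random.rand(5_000, 3)
dis = ref + np.random.normal(0.0, 0.002, ref.shape)
print(p2point_mse(dis, ref), p2point_hausdorff(dis, ref))
```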
Reference Type | Feature Extraction | Method | SRCC (IRPC) | PLCC (IRPC) | KRCC (IRPC) | RMSE (IRPC) | SRCC (CPCD2.0) | PLCC (CPCD2.0) | KRCC (CPCD2.0) | RMSE (CPCD2.0) |
---|---|---|---|---|---|---|---|---|---|---|
FR | Model-based | p2pointHausdorff[ | 0.212 5 | 0.238 8 | 0.145 5 | 0.960 1 | 0.314 5 | 0.348 2 | 0.217 9 | 1.099 5 |
p2pointMSE[ | 0.328 1 | 0.335 7 | 0.214 6 | 0.931 3 | 0.549 1 | 0.678 4 | 0.414 2 | 0.861 7 | ||
p2planeHausdorff[ | 0.254 1 | 0.392 5 | 0.197 5 | 0.908 9 | 0.378 6 | 0.406 1 | 0.266 3 | 1.071 8 | ||
p2planeMSE[ | 0.256 4 | 0.429 6 | 0.195 7 | 0.892 8 | 0.569 2 | 0.691 4 | 0.438 5 | 0.847 4 | ||
ASMEAN[ | 0.112 3 | 0.156 9 | 0.066 9 | 0.976 4 | 0.404 4 | 0.437 6 | 0.275 2 | 1.054 6 | ||
ASRMS[ | 0.118 8 | 0.145 2 | 0.085 2 | 0.978 2 | 0.417 3 | 0.446 4 | 0.289 5 | 1.049 6 | ||
ASMSE[ | 0.118 8 | 0.153 6 | 0.085 2 | 0.990 2 | 0.417 3 | 0.447 2 | 0.289 5 | 1.049 1 | ||
PC-MSDM[ | 0.151 9 | 0.272 9 | 0.106 3 | 0.951 5 | 0.532 1 | 0.625 4 | 0.384 2 | 0.915 2 | ||
PCQM[ | 0.381 9 | 0.561 1 | 0.303 3 | 0.818 4 | 0.340 8 | 0.481 3 | 0.261 5 | 1.028 1 | |
CPC-GSCT[ | 0.862 6 | 0.870 6 | 0.689 4 | 0.482 9 | 0.906 3 | 0.904 9 | 0.745 1 | 0.502 7 | ||
Projection-based | PSNR* | 0.149 6 | 0.347 1 | 0.089 4 | 0.927 2 | 0.406 4 | 0.418 3 | 0.286 7 | 1.065 4 | |
SSIM*[ | 0.080 6 | 0.238 5 | 0.048 6 | 0.960 1 | 0.534 7 | 0.564 7 | 0.379 2 | 0.968 0 | ||
MS-SSIM*[ | 0.116 4 | 0.328 0 | 0.069 7 | 0.934 0 | 0.568 6 | 0.621 2 | 0.414 0 | 0.919 2 | ||
VIF*[ | 0.171 6 | 0.094 9 | 0.121 7 | 0.984 2 | 0.674 4 | 0.698 5 | 0.495 7 | 0.839 4 | ||
TGP-PCQA[ | 0.650 0 | 0.800 5 | 0.555 6 | 0.491 4 | 0.906 6 | 0.909 4 | 0.758 9 | 0.489 2 | ||
NR | Model-based & Projection-based | BQE-CVP[ |
Table 4 Performance comparison of different PCQA methods on IRPC and CPCD2.0. For FR and NR methods, the best performance of each metric is marked in bold and underlined bold respectively. The IQA and VQA methods are marked with * superscript
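The criteria reported in Tables 4 and 5, namely SRCC, PLCC, KRCC, and RMSE, measure how well objective predictions track the MOS of each database. A minimal SciPy sketch is given below with purely hypothetical scores; as noted after Table 5, published results normally compute PLCC and RMSE only after a nonlinear mapping of the predictions.

```python
import numpy as np
from scipy import stats

def pcqa_criteria(pred, mos):
    """SRCC, PLCC, KRCC and RMSE between objective predictions and MOS."""
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    srcc = stats.spearmanr(pred, mos)[0]   # rank-order monotonicity
    plcc = stats.pearsonr(pred, mos)[0]    # linear correlation
    krcc = stats.kendalltau(pred, mos)[0]  # pairwise ranking consistency
    rmse = float(np.sqrt(np.mean((pred - mos) ** 2)))
    return srcc, plcc, krcc, rmse

# Hypothetical example: five stimuli with predicted scores and their MOS.
print(pcqa_criteria([0.2, 0.4, 0.5, 0.7, 0.9], [1.5, 2.4, 3.1, 3.8, 4.6]))
```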
Reference Type | Feature Extraction | Method | SRCC (SJTU-PCQA) | PLCC (SJTU-PCQA) | KRCC (SJTU-PCQA) | RMSE (SJTU-PCQA) | SRCC (WPC) | PLCC (WPC) | KRCC (WPC) | RMSE (WPC) |
---|---|---|---|---|---|---|---|---|---|---|
FR | Model-based | p2pointHausdorff[ | 0.43 | 0.16 | 0.30 | 2.39 | 0.27 | 0.39 | 0.19 | 20.89 |
p2pointMSE[ | 0.40 | 0.47 | 0.28 | 2.13 | 0.45 | 0.48 | 0.31 | 19.89 | ||
p2planeHausdorff[ | 0.46 | 0.37 | 0.33 | 2.44 | 0.28 | 0.27 | 0.16 | 21.98 | ||
p2planeMSE[ | 0.49 | 0.56 | 0.35 | 2.00 | 0.32 | 0.26 | 0.22 | 22.82 | ||
ASMEAN[ | 0.51 | 0.65 | 0.36 | 1.82 | - | - | - | - | ||
ASRMS[ | 0.52 | 0.65 | 0.37 | 1.82 | - | - | - | - | ||
ASMSE[ | 0.52 | 0.65 | 0.37 | 1.82 | - | - | - | - | ||
PC-MSDM[ | 0.32 | 0.41 | 0.21 | 2.21 | - | - | - | - | ||
PCQM[ | 0.74 | 0.77 | 0.56 | 1.52 | 0.74 | 0.74 | 0.56 | 15.16 | ||
GraphSIM[ | 0.84 | 0.84 | 0.64 | 1.57 | 0.58 | 0.61 | 0.41 | 17.19 | ||
PointSSIM[ | 0.68 | 0.71 | 0.49 | 1.70 | 0.45 | 0.46 | 0.32 | 20.27 | ||
CPC-GSCT[ | 0.89 | 0.91 | 0.71 | 0.99 | - | - | - | - | ||
Projection-based | PSNRyuv[ | - | - | - | - | 0.44 | 0.53 | 0.31 | 19.31 | |
PSNR* | 0.65 | 0.63 | 0.47 | 1.87 | 0.42 | 0.48 | 0.30 | 15.81 | ||
SSIM*[ | 0.55 | 0.56 | 0.39 | 1.99 | 0.38 | 0.49 | 0.32 | 15.77 | ||
MS-SSIM*[ | 0.72 | 0.74 | 0.52 | 1.62 | - | - | - | - | ||
VIF*[ | 0.74 | 0.78 | 0.54 | 1.49 | - | - | - | - | ||
PB-PCQA[ | 0.60 | 0.60 | - | 1.86 | - | - | - | - | ||
TGP-PCQA[ | 0.83 | 0.86 | 0.65 | 1.21 | - | - | - | - | ||
RR | Model-based | R-PCQA[ | - | - | - | - | 0.88 | 0.88 | - | - |
PCMRR[ | 0.48 | 0.61 | 0.33 | 1.93 | 0.30 | 0.34 | 0.20 | 21.53 | ||
Projection-based | RR-CAP[ | 0.75 | 0.76 | 0.55 | 1.55 | 0.71 | 0.73 | 0.52 | 15.64 | |
NR | Model-based | 3D-NSS[ | 0.71 | 0.73 | 0.51 | 1.76 | 0.64 | 0.65 | 0.44 | 16.57 |
Projection-based | BRISQUE*[ | 0.20 | 0.22 | 0.11 | 2.24 | 0.37 | 0.41 | 0.24 | 22.54 | |
NIQE*[ | 0.22 | 0.37 | 0.15 | 2.26 | 0.38 | 0.39 | 0.25 | 22.55 | ||
IL-NIQE*[ | 0.08 | 0.16 | 0.05 | 2.33 | 0.09 | 0.14 | 0.08 | 24.01 | ||
VIIDEO*[ | 0.05 | 0.29 | 0.04 | 2.31 | 0.07 | 0.08 | 0.05 | 22.92 | ||
V-BLIINDS*[ | 0.68 | 0.78 | 0.48 | 1.50 | 0.46 | 0.49 | 0.30 | 19.73 | ||
TLVQM*[ | 0.52 | 0.60 | 0.34 | 1.91 | 0.03 | 0.01 | 0.20 | 22.14 | ||
VIDEVAL*[ | 0.60 | 0.74 | 0.42 | 1.50 | 0.37 | 0.26 | 0.36 | 21.09 | ||
VSFA*[ | 0.72 | 0.82 | 0.54 | 1.40 | 0.63 | 0.63 | 0.46 | 17.23 | ||
RAPIQUE*[ | 0.44 | 0.40 | 0.34 | 2.21 | 0.27 | 0.35 | 0.20 | 21.14 | ||
StairVQA*[ | 0.79 | 0.78 | 0.55 | 1.42 | 0.72 | 0.71 | 0.52 | 15.07 | ||
PQA-Net[ | - | - | - | - | 0.69 | 0.70 | 0.51 | 15.18 | ||
3D-CNN-PCQA[ | 0.83 | 0.86 | 0.60 | 1.22 | 0.75 | 0.76 | 0.56 | 13.56 | ||
ResSCNN[ | 0.81 | 0.86 | - | - | - | - | - | - | ||
IT-PCQA[ | 0.63 | 0.58 | - | - | 0.54 | 0.55 | ||||
VQA-PC[ | 0.85 | 0.86 | 0.65 | 1.13 | 0.79 | 0.79 | 0.61 | 13.62 | ||
Model-based & projection-based | BQE-CVP[ | 0.89 | 0.91 | 0.73 | 0.97 | - | - | - | - | |
MM-PCQA[ |
Table 5 Performance comparison of different PCQA methods on SJTU-PCQA and WPC. For FR, RR, and NR methods, the best performance of each metric is marked in bold, bold italics, and underlined bold (vacant metrics are not counted in the comparison) respectively. The IQA and VQA methods are marked with * superscript
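Because objective scores and MOS usually lie on different scales, PLCC and RMSE are conventionally computed after fitting a monotonic logistic mapping from predictions to MOS, in the spirit of the VQEG report[99]. The sketch below uses a four-parameter logistic and SciPy's curve_fit; the parameter initialization is a heuristic assumption rather than a prescribed choice, and some papers use a five-parameter variant instead.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def logistic4(x, b1, b2, b3, b4):
    """Monotonic four-parameter logistic used to map predictions onto the MOS scale."""
    return (b1 - b2) / (1.0 + np.exp(-(x - b3) / b4)) + b2

def plcc_rmse_after_fitting(pred, mos):
    """PLCC and RMSE after the nonlinear regression step."""
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    p0 = [mos.max(), mos.min(), pred.mean(), pred.std() or 1.0]  # heuristic start
    params, _ = curve_fit(logistic4, pred, mos, p0=p0, maxfev=10000)
    mapped = logistic4(pred, *params)
    plcc = stats.pearsonr(mapped, mos)[0]
    rmse = float(np.sqrt(np.mean((mapped - mos) ** 2)))
    return plcc, rmse

# Hypothetical scores reused from the previous sketch.
print(plcc_rmse_after_fitting([0.2, 0.4, 0.5, 0.7, 0.9], [1.5, 2.4, 3.1, 3.8, 4.6]))
```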
1 | CAO C, PREDA M, ZAHARIA T. 3D point cloud compression: a survey [C]//24th International Conference on 3D Web Technology. ACM, 2019: 1–9. DOI: 10.1145/3329714.3338130 |
2 | CHARLES R Q, HAO S, MO K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation [C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017: 77–85. DOI: 10.1109/CVPR.2017.16 |
3 | GUO Y L, WANG H Y, HU Q Y, et al. Deep learning for 3D point clouds: a survey [J]. IEEE transactions on pattern analysis and machine intelligence, 2021, 43(12): 4338–4364. DOI: 10.1109/TPAMI.2020.3005434 |
4 | LIU Y P, YANG Q, XU Y L, et al. Point cloud quality assessment: dataset construction and learning-based no-reference metric [J]. ACM transactions on multimedia computing, communications, and applications, 2023, 19(2s): No.80. DOI: 10.1145/3550274 |
5 | LIU Q, SU H L, DUANMU Z F, et al. Perceptual quality assessment of colored 3D point clouds [J]. IEEE transactions on visualization and computer graphics, 2023, 29(8): 3642–3655. DOI: 10.1109/TVCG.2022.3167151 |
6 | YANG Q, CHEN H, MA Z, et al. Predicting the perceptual quality of point cloud: a 3D-to-2D projection-based exploration [J]. IEEE transactions on multimedia, 2021, 23: 3877–3891. DOI: 10.1109/TMM.2020.3033117 |
7 | LAZZAROTTO D, TESTOLINA M, EBRAHIMI T. On the impact of spatial rendering on point cloud subjective visual quality assessment [C]//14th International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2022: 1–6. DOI: 10.1109/QoMEX55416.2022.9900898 |
8 | ALEXIOU E, EBRAHIMI T. On the performance of metrics to predict quality in point cloud representations [C]//Proc. SPIE 10396, Applications of Digital Image Processing XL. SPIE, 2017: 282–297. DOI: 10.1117/12.2275142 |
9 | NEHMÉ Y, FARRUGIA J P, DUPONT F, et al. Comparison of subjective methods, with and without explicit reference, for quality assessment of 3D graphics [C]//ACM Symposium on Applied Perception. ACM, 2019: 1–9. DOI: 10.1145/3343036.3352493 |
10 | EBRAHIMI T, ALEXIOU E, FONSECA T A, et al. A novel methodology for quality assessment of voxelized point clouds [C]//Proc. Applications of Digital Image Processing XLI. SPIE, 2018. DOI: 10.1117/12.2322741 |
11 | ALEXIOU E, VIOLA I, BORGES T M, et al. A comprehensive study of the rate-distortion performance in MPEG point cloud compression [J]. APSIPA transactions on signal and information processing, 2019, 8(1): e27. DOI: 10.1017/atsip.2019.20 |
12 | JAVAHERI A, BRITES C, PEREIRA F, et al. Subjective and objective quality evaluation of 3D point cloud denoising algorithms [C]//2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). IEEE, 2017: 1–6. DOI: 10.1109/ICMEW.2017.8026263 |
13 | ALEXIOU E, EBRAHIMI T. Impact of visualisation strategy for subjective quality assessment of point clouds [C]//2018 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). IEEE, 2018: 1–6. DOI: 10.1109/ICMEW.2018.8551498 |
14 | ALEXIOU E, PINHEIRO A M G, DUARTE C, et al. Point cloud subjective evaluation methodology based on reconstructed surfaces [C]//Proc. SPIE 10752, Applications of Digital Image Processing XLI. SPIE, 2018, 10752: 160–173. DOI: 10.1117/12.2321518 |
15 | JAVAHERI A, BRITES C, PEREIRA F, et al. Subjective and objective quality evaluation of compressed point clouds [C]//IEEE 19th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2017: 1–6. DOI: 10.1109/MMSP.2017.8122239 |
16 | SILVA CRUZ L A DA, DUMIĆ E, ALEXIOU E, et al. Point cloud quality evaluation: towards a definition for test conditions [C]//2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2019: 1–6. DOI: 10.1109/QoMEX.2019.8743258 |
17 | ZERMAN E, GAO P, OZCINAR C, et al. Subjective and objective quality assessment for volumetric video compression [J]. Electronic imaging, 2019, 31(10): 323–1. DOI: 10.2352/issn.2470-1173.2019.10.iqsp-323 |
18 | SU H L, DUANMU Z F, LIU W T, et al. Perceptual quality assessment of 3d point clouds [C]//2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019: 3182–3186. DOI: 10.1109/ICIP.2019.8803298 |
19 | JAVAHERI A, BRITES C, PEREIRA F, et al. Point cloud rendering after coding: impacts on subjective and objective quality [J]. IEEE transactions on multimedia, 2021, 23: 4049–4064. DOI: 10.1109/TMM.2020.3037481 |
20 | ZERMAN E, OZCINAR C, GAO P, et al. Textured mesh vs coloured point cloud: a subjective study for volumetric video compression [C]//Twelfth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2020: 1–6. DOI: 10.1109/QoMEX48832.2020.9123137 |
21 | CAO K M, XU Y, COSMAN P. Visual quality of compressed mesh and point cloud sequences [J]. IEEE access, 2020, 8: 171203–171217. DOI: 10.1109/ACCESS.2020.3024633 |
22 | ALEXIOU E, EBRAHIMI T. On subjective and objective quality evaluation of point cloud geometry [C]//Ninth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2017: 1–3. DOI: 10.1109/QoMEX.2017.7965681 |
23 | ALEXIOU E, EBRAHIMI T. Point cloud quality assessment metric based on angular similarity [C]//2018 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2018: 1–6. DOI: 10.1109/ICME.2018.8486512 |
24 | ALEXIOU E, EBRAHIMI T. Benchmarking of objective quality metrics for colorless point clouds [C]//2018 Picture Coding Symposium (PCS). IEEE, 2018: 51–55. DOI: 10.1109/PCS.2018.8456252 |
25 | ALEXIOU E, EBRAHIMI T. Exploiting user interactivity in quality assessment of point cloud imaging [C]//Eleventh International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2019: 1–6. DOI: 10.1109/QoMEX.2019.8743277 |
26 | VIOLA I, SUBRAMANYAM S, CESAR P. A color-based objective quality metric for point cloud contents [C]//Twelfth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2020: 1–6. DOI: 10.1109/QoMEX48832.2020.9123089 |
27 | JAVAHERI A, BRITES C, PEREIRA F, et al. Improving PSNR-based quality metrics performance for point cloud geometry [C]//2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020: 3438–3442. DOI: 10.1109/ICIP40778.2020.9191233 |
28 | HUA L, YU M, JIANG G Y, et al. VQA-CPC: a novel visual quality assessment metric of color point clouds [C]//Proc. SPIE 11550, Optoelectronic Imaging and Multimedia Technology VII. SPIE, 2020, 11550: 244–252. DOI: 10.1117/12.2573686 |
29 | ALEXIOU E, UPENIK E, EBRAHIMI T. Towards subjective quality assessment of point cloud imaging in augmented reality [C]//IEEE 19th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2017: 1–6. DOI: 10.1109/MMSP.2017.8122237 |
30 | ALEXIOU E, EBRAHIMI T, BERNARDO M V, et al. Point cloud subjective evaluation methodology based on 2D rendering [C]//Tenth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2018: 1–6. DOI: 10.1109/QoMEX.2018.8463406 |
31 | PERRY S, CONG H P, SILVA CRUZ L A DA, et al. Quality evaluation of static point clouds encoded using MPEG codecs [C]//2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020: 3428–3432. DOI: 10.1109/ICIP40778.2020.9191308 |
32 | SOLIMINI A G. Are there side effects to watching 3D movies? A prospective crossover observational study on visually induced motion sickness [J]. PLoS one, 2013, 8(2): e56160. DOI: 10.1371/journal.pone.0056160 |
33 | SHARPLES S, COBB S, MOODY A, et al. Virtual reality induced symptoms and effects (VRISE): comparison of head mounted display (HMD), desktop and projection display systems [J]. Displays, 2008, 29(2): 58–69. DOI: 10.1016/j.displa.2007.09.005 |
34 | SUBRAMANYAM S, LI J, VIOLA I, et al. Comparing the quality of highly realistic digital humans in 3DoF and 6DoF: a volumetric video case study [C]//2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2020: 127–136. DOI: 10.1109/VR46266.2020.00031 |
35 | ALEXIOU E, YANG N Y, EBRAHIMI T. PointXR: a toolbox for visualization and subjective evaluation of point clouds in virtual reality [C]//Twelfth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2020: 1–6. DOI: 10.1109/QoMEX48832.2020.9123121 |
36 | WU X J, ZHANG Y, FAN C L, et al. Subjective quality database and objective study of compressed point clouds with 6DoF head-mounted display [J]. IEEE transactions on circuits and systems for video technology, 2021, 31(12): 4630–4644. DOI: 10.1109/TCSVT.2021.3101484 |
37 | ZHANG J, HUANG W B, ZHU X Q, et al. A subjective quality evaluation for 3D point cloud models [C]//2014 International Conference on Audio, Language and Image Processing. IEEE, 2015: 827–831. DOI: 10.1109/ICALIP.2014.7009910 |
38 | GUTIÉRREZ J, VIGIER T, LE CALLET P. Quality evaluation of 3D objects in mixed reality for different lighting conditions [J]. Electronic imaging, 2020, 32(11): no. 128. DOI: 10.2352/issn.2470-1173.2020.11.hvei-128 |
39 | TURK G, LEVOY M. Zippered polygon meshes from range images [C]//21st Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1994: 311–318. DOI: 10.1145/192161.192241 |
40 | MPEG-PCC. MPEG point cloud datasets [DB/OL]. (2017-01-15)[2018-05-26]. |
41 | JPEG. JPEG pleno database [DB/OL]. (2016-11-04)[2018-04-12]. |
42 | HUA L, YU M, HE Z Y, et al. CPC-GSCT: visual quality assessment for coloured point cloud based on geometric segmentation and colour transformation [J]. IET image processing, 2022, 16(4): 1083–1095. DOI: 10.1049/ipr2.12211 |
43 | AK A, ZERMAN E, QUACH M, et al. BASICS: broad quality assessment of static point clouds in compression scenarios [EB/OL]. (2023-02-09)[2023-05-06]. |
44 | LIU Q, YUAN H, HAMZAOUI R, et al. Reduced reference perceptual quality model with application to rate control for video-based point cloud compression [J]. IEEE transactions on image processing, 2021, 30: 6623–6636. DOI: 10.1109/TIP.2021.3096060 |
45 | LIU Q, SU H L, CHEN T X, et al. No-reference bitstream-layer model for perceptual quality assessment of V-PCC encoded point clouds [J]. IEEE transactions on multimedia, 2023, 25: 4533–4546. DOI: 10.1109/TMM.2022.3177926 |
46 | QUACH M, VALENZISE G, DUFAUX F. Improved deep point cloud geometry compression [C]//IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2020: 1–6. DOI: 10.1109/MMSP48831.2020.9287077 |
47 | TIAN D, OCHIMIZU H, FENG C, et al. Geometric distortion metrics for point cloud compression [C]//2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2018: 3460–3464. DOI: 10.1109/ICIP.2017.8296925 |
48 | MEKURIA R, BLOM K, CESAR P. Design, implementation, and evaluation of a point cloud codec for tele-immersive video [J]. IEEE transactions on circuits and systems for video technology, 2017, 27(4): 828–842. DOI: 10.1109/TCSVT.2016.2543039 |
49 | MEKURIA R, LI Z, TULVAN C, et al. Evaluation criteria for PCC (point cloud compression): MPEG: MPEG-I 2016/n16332 [S]. 2016 |
50 | MEYNET G, DIGNE J, LAVOUÉ G. A quality metric for 3D point clouds [C]//Eleventh International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2019: 1–3. DOI: 10.1109/QoMEX.2019.8743313 |
51 | JAVAHERI A, BRITES C, PEREIRA F, et al. A generalized Hausdorff distance based quality metric for point cloud geometry [C]//Twelfth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2020: 1–6. DOI: 10.1109/QoMEX48832.2020.9123087 |
52 | ALEXIOU E, EBRAHIMI T. Towards a point cloud structural similarity metric [C]//2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). IEEE, 2020: 1–6. DOI: 10.1109/ICMEW46912.2020.9106005 |
53 | MEYNET G, NEHMÉ Y, DIGNE J, et al. PCQM: A full-reference quality metric for colored 3D point clouds [C]//Twelfth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2020: 1–6. DOI: 10.1109/QoMEX48832.2020.9123147 |
54 | JAVAHERI A, BRITES C, PEREIRA F, et al. Mahalanobis based point to distribution metric for point cloud geometry quality evaluation [J]. IEEE signal processing letters, 2020, 27: 1350–1354. DOI: 10.1109/LSP.2020.3010128 |
55 | YANG Q, MA Z, XU Y L, et al. Inferring point cloud quality via graph similarity [J]. IEEE transactions on pattern analysis and machine intelligence, 2022, 44(6): 3015–3029. DOI: 10.1109/TPAMI.2020.3047083 |
56 | DINIZ R, FREITAS P G, FARIAS M C Q. Multi-distance point cloud quality assessment [C]//2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020: 3443–3447. DOI: 10.1109/ICIP40778.2020.9190956 |
57 | DINIZ R, FREITAS P G, FARIAS M C Q. Towards a point cloud quality assessment model using local binary patterns [C]//Twelfth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2020: 1–6. DOI: 10.1109/QoMEX48832.2020.9123076 |
58 | DINIZ R, FREITAS P G, FARIAS M. A novel point cloud quality assessment metric based on perceptual color distance patterns [C]//IS&T International Symposium on Electronic Imaging Science and Technology 2021, Image Quality and System Performance XVIII. IS&T, 2021. DOI: 10.2352/issn.2470-1173.2021.9.iqsp-256 |
59 | DINIZ R, FREITAS P G, FARIAS M C Q. Local luminance patterns for point cloud quality assessment [C]//IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2020: 1–6. DOI: 10.1109/MMSP48831.2020.9287154 |
60 | DINIZ R, FREITAS P G, FARIAS M C Q. Color and geometry texture descriptors for point-cloud quality assessment [J]. IEEE signal processing letters, 2021, 28: 1150–1154. DOI: 10.1109/LSP.2021.3088059 |
61 | HUA L, JIANG G Y, YU M, et al. BQE-CVP: blind quality evaluator for colored point cloud based on visual perception [C]//2021 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB). IEEE, 2021: 1–6. DOI: 10.1109/BMSB53066.2021.9547070 |
62 | XU Y L, YANG Q, YANG L, et al. EPES: point cloud quality modeling using elastic potential energy similarity [J]. IEEE transactions on broadcasting, 2022, 68(1): 33–42. DOI: 10.1109/TBC.2021.3114510 |
63 | ZHANG Y J, YANG Q, XU Y L. MS-GraphSIM: inferring point cloud quality via multiscale graph similarity [C]//29th ACM International Conference on Multimedia. ACM, 2021: 1230–1238. DOI: 10.1145/3474085.3475294 |
64 | ZHANG Z C, SUN W, MIN X K, et al. No-reference quality assessment for 3D colored point cloud and mesh models [J]. IEEE transactions on circuits and systems for video technology, 2022, 32(11): 7618–7631. DOI: 10.1109/TCSVT.2022.3186894 |
65 | HE Z Y, JIANG G Y, JIANG Z D, et al. Towards a colored point cloud quality assessment method using colored texture and curvature projection [C]//2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021: 1444–1448. DOI: 10.1109/ICIP42928.2021.9506762 |
66 | TAO W X, JIANG G Y, JIANG Z D, et al. Point cloud projection and multi-scale feature fusion network based blind quality assessment for colored point clouds [C]//29th ACM International Conference on Multimedia. ACM, 2021: 5266–5272. DOI: 10.1145/3474085.3475645 |
67 | LIU Q, YUAN H, SU H L, et al. PQA-net: Deep no reference point cloud quality assessment via multi-view projection [J]. IEEE transactions on circuits and systems for video technology, 2021, 31(12): 4645–4660. DOI: 10.1109/TCSVT.2021.3100282 |
68 | CIGNONI P, ROCCHINI C, SCOPIGNO R. Metro: measuring error on simplified surfaces [J]. Computer graphics forum, 1998, 17(2): 167–174. DOI: 10.1111/1467-8659.00236 |
69 | TIAN D, OCHIMIZU H, FENG C, et al. Evaluation metrics for point cloud compression: ISO/IEC m74008 [S]. 2017 |
70 | HE Z Y, JIANG G Y, YU M, et al. TGP-PCQA: texture and geometry projection based quality assessment for colored point clouds [J]. Journal of visual communication and image representation, 2022, 83: 103449. DOI: 10.1016/j.jvcir.2022.103449 |
71 | TU R W, JIANG G Y, YU M, et al. Pseudo-reference point cloud quality measurement based on joint 2-D and 3-D distortion description [J]. IEEE transactions on instrumentation and measurement, 2023, 72: No.5019314. DOI: 10.1109/TIM.2023.3290291 |
72 | VIOLA I, CESAR P. A reduced reference metric for visual quality evaluation of point cloud contents [J]. IEEE signal processing letters, 2020, 27: 1660–1664. DOI: 10.1109/LSP.2020.3024065 |
73 | LIU Y P, YANG Q, XU Y L. Reduced reference quality assessment for point cloud compression [C]//2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). IEEE, 2023: 1–5. DOI: 10.1109/VCIP56404.2022.10008813 |
74 | ZHOU W, YUE G H, ZHANG R Z, et al. Reduced-reference quality assessment of point clouds via content-oriented saliency projection [J]. IEEE signal processing letters, 2023, 30: 354–358. DOI: 10.1109/LSP.2023.3264105 |
75 | SU H L, LIU Q, LIU Y X, et al. Bitstream-based perceptual quality assessment of compressed 3D point clouds [J]. IEEE transactions on image processing, 2023, 32: 1815–1828. DOI: 10.1109/TIP.2023.3253252 |
76 | ZHOU W, YANG Q, JIANG Q P, et al. Blind quality assessment of 3D dense point clouds with structure guided resampling [EB/OL]. (2022-08-31)[2022-09-05]. |
77 | LIU Q, LIU Y Y, SU H L, et al. Progressive knowledge transfer based on human visual perception mechanism for perceptual quality assessment of point clouds [EB/OL]. (2022-11-30)[2022-12-04]. |
78 | TU R W, JIANG G Y, YU M, et al. V-PCC projection based blind point cloud quality assessment for compression distortion [J]. IEEE transactions on emerging topics in computational intelligence, 2023, 7(2): 462–473. DOI: 10.1109/TETCI.2022.3201619 |
79 | SHAN Z Y, YANG Q, YE R, et al. GPA-Net: no-reference point cloud quality assessment with multi-task graph convolutional network [J]. IEEE transactions on visualization and computer graphics, 2023, PP(99): 1–13. DOI: 10.1109/TVCG.2023.3282802 |
80 | ZHANG Z C, SUN W, WU H N, et al. GMS-3DQA: projection-based grid mini-patch sampling for 3D model quality assessment [EB/OL]. (2023-06-09)[2023-07-03]. |
81 | LIU Y, YANG Q, ZHANG Y, et al. Once-training-all-fine: no-reference point cloud quality assessment via domain-relevance degradation description [EB/OL]. (2023-07-04)[2023-07-12]. |
82 | YANG Q, LIU Y P, CHEN S H, et al. No-reference point cloud quality assessment via domain adaptation [C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022: 21147–21156. DOI: 10.1109/CVPR52688.2022.02050 |
83 | FAN Y, ZHANG Z C, SUN W, et al. A No-reference quality assessment metric for point cloud based on captured video sequences [C]//24th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2022: 1–5. DOI: 10.1109/MMSP55362.2022.9949359 |
84 | ZHANG Z C, SUN W, ZHU Y C, et al. Treating point cloud as moving camera videos: a no-reference quality assessment metric [EB/OL]. (2022-09-11)[2022-10-12]. |
85 | ZHANG Z C, SUN W, MIN X K, et al. MM-PCQA: multi-modal learning for no-reference point cloud quality assessment [EB/OL]. (2022-09-01)[2022-09-27]. |
86 | LAVOUÉ G, GELASCA E D, DUPONT F, et al. Perceptually driven 3D distance metrics with application to watermarking [C]//Proc. SPIE 6312, Applications of Digital Image Processing XXIX. SPIE, 2006, 6312: 150–161. DOI: 10.1117/12.686964 |
87 | LAVOUÉ G. A multiscale metric for 3D mesh visual quality assessment [J]. Computer graphics forum, 2011, 30(5): 1427–1437. DOI: 10.1111/j.1467-8659.2011.02017.x |
88 | WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity [J]. IEEE transactions on image processing, 2004, 13(4): 600–612. DOI: 10.1109/TIP.2003.819861 |
89 | WANG Z, BOVIK A C. Mean squared error: Love it or leave it? A new look at Signal Fidelity Measures [J]. IEEE signal processing magazine, 2009, 26(1): 98–117. DOI: 10.1109/MSP.2008.930649 |
90 | SUN W, MIN X K, TU D Y, et al. Blind quality assessment for in-the-wild images via hierarchical feature fusion and iterative mixed database training [J]. IEEE journal of selected topics in signal processing, 2023, PP(99): 1–15. DOI: 10.1109/JSTSP.2023.3270621 |
91 | MITTAL A, MOORTHY A K, BOVIK A C. No-reference image quality assessment in the spatial domain [J]. IEEE transactions on image processing, 2012, 21(12): 4695–4708. DOI: 10.1109/TIP.2012.2214050 |
92 | NARVEKAR N D, KARAM L J. A no-reference perceptual image sharpness metric based on a cumulative probability of blur detection [C]//2009 International Workshop on Quality of Multimedia Experience. IEEE, 2009: 87–91. DOI: 10.1109/QOMEX.2009.5246972 |
93 | ZHANG L, ZHANG L, BOVIK A C. A feature-enriched completely blind image quality evaluator [J]. IEEE transactions on image processing, 2015, 24(8): 2579–2591. DOI: 10.1109/TIP.2015.2426416 |
94 | GU K, ZHAI G T, YANG X K, et al. Using free energy principle for blind image quality assessment [J]. IEEE transactions on multimedia, 2015, 17(1): 50–63. DOI: 10.1109/TMM.2014.2373812 |
95 | GU K, ZHAI G T, YANG X K, et al. No-reference image quality assessment metric by combining free energy theory and structural degradation model [C]//2013 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2013: 1–6. DOI: 10.1109/ICME.2013.6607462 |
96 | MITTAL A, SOUNDARARAJAN R, BOVIK A C. Making a “completely blind” image quality analyzer [J]. IEEE signal processing letters, 2013, 20(3): 209–212. DOI: 10.1109/LSP.2012.2227726 |
97 | ZHANG W X, MA K D, YAN J, et al. Blind image quality assessment using a deep bilinear convolutional neural network [J]. IEEE transactions on circuits and systems for video technology, 2020, 30(1): 36–47. DOI: 10.1109/TCSVT.2018.2886771 |
98 | HARA K, KATAOKA H, SATOH Y. Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet? [C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 2018: 6546–6555. DOI: 10.1109/CVPR.2018.0068 |
99 | VQEG. Final report from the video quality experts group on the validation of objective models of video quality assessment: PHASE II [R]. 2003 |
100 | WANG Z, SIMONCELLI E P, BOVIK A C. Multiscale structural similarity for image quality assessment [C]//Thrity-Seventh Asilomar Conference on Signals, Systems & Computers. IEEE, 2004: 1398–1402. DOI: 10.1109/ACSSC.2003.1292216 |
101 | SHEIKH H R, BOVIK A C. Image information and visual quality [J]. IEEE transactions on image processing, 2006, 15(2): 430–444. DOI: 10.1109/TIP.2005.859378 |
102 | MITTAL A, SAAD M A, BOVIK A C. A completely blind video integrity oracle [J]. IEEE transactions on image processing, 2016, 25(1): 289–300. DOI: 10.1109/TIP.2015.2502725 |
103 | SAAD M A, BOVIK A C, CHARRIER C. Blind prediction of natural video quality [J]. IEEE transactions on image processing, 2014, 23(3): 1352–1365. DOI: 10.1109/TIP.2014.2299154 |
104 | KORHONEN J. Two-level approach for no-reference consumer video quality assessment [J]. IEEE transactions on image processing, 2019, 28(12): 5923–5938. DOI: 10.1109/TIP.2019.2923051 |
105 | TU Z Z, WANG Y L, BIRKBECK N, et al. UGC-VQA: benchmarking blind video quality assessment for user generated content [J]. IEEE transactions on image processing, 2021, 30: 4449–4464. DOI: 10.1109/TIP.2021.3072221 |
106 | LI D Q, JIANG T T, JIANG M. Quality assessment of in-the-wild videos [C]//27th ACM International Conference on Multimedia. New York: ACM, 2019: 2351–2359. DOI: 10.1145/3343031.3351028 |
107 | TU Z Z, YU X X, WANG Y L, et al. RAPIQUE: rapid and accurate video quality prediction of user generated content [J]. IEEE open journal of signal processing, 2021, 2: 425–440. DOI: 10.1109/OJSP.2021.3090333 |
108 | SUN W, WANG T, MIN X K, et al. Deep learning based full-reference and no-reference quality assessment models for compressed UGC videos [C]//2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). IEEE, 2021: 1–6. DOI: 10.1109/ICMEW53276.2021.9455999 |