Abstract

Fast, efficient extraction of unknown objects from cluttered 3D scenes plays a significant role in robotics tasks such as object search, grasping, and manipulation. This paper describes a geometry-based unsupervised approach for segmenting cluttered scenes into objects. The proposed method first over-segments the raw point cloud into supervoxels, which provide a more natural representation of 3D data and reduce the computational cost with minimal loss of geometric information. A fully connected local-area linkage graph is then used to distinguish planar from nonplanar adjacent patches, and an initial segmentation is obtained from geometric features and local surface convexities. This initial segmentation produces many subgraphs, each representing an individual object or a part of one. Finally, planes extracted from the scene are used to refine the initial result within a global energy optimization framework. Experiments on the Object Cluttered Indoor Dataset (OCID) indicate that the proposed method outperforms representative segmentation algorithms in terms of weighted overlap and accuracy, while exhibiting good robustness and real-time performance.
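The merge step sketched above hinges on a local-convexity test between adjacent supervoxel patches. A minimal illustration, assuming each patch is summarized by a centroid and an outward unit normal: the widely used criterion classifies the connection as convex when the normals "open away" from each other, i.e. when (n1 − n2) · (c1 − c2) > 0. The function names here are hypothetical, and the paper's exact rule may add angle thresholds; this is only a sketch of the idea.

```python
# Hedged sketch of a local-convexity test between two adjacent patches.
# Each patch is (centroid, outward normal); the criterion
# (n1 - n2) . (c1 - c2) > 0 is the common local-convexity formulation,
# not necessarily the paper's exact rule.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def is_convex_connection(c1, n1, c2, n2, eps=0.0):
    """True if the edge between patches (c1, n1) and (c2, n2) is convex,
    i.e. their outward normals open away from each other."""
    return dot(sub(n1, n2), sub(c1, c2)) > eps

# Two outer faces of a box meet at a convex edge ...
top  = ((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))   # centroid, normal
side = ((1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
print(is_convex_connection(*top, *side))     # True

# ... while a floor meeting a wall (inside corner) is concave,
# so the two patches would not be merged into one object.
floor = ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
wall  = ((1.0, 0.0, 1.0), (-1.0, 0.0, 0.0))
print(is_convex_connection(*floor, *wall))   # False
```

In a full pipeline this predicate would label each edge of the adjacency graph, and connected components of convex edges would form the initial object hypotheses.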

© 2020 Optical Society of America


2020 (1)

A. Saglam, H. B. Makineci, N. A. Baykan, and Ö. K. Baykan, “Boundary constrained voxel segmentation for 3D point clouds using local geometric differences,” Expert. Syst. Appl. 157, 113439 (2020).
[Crossref]

2019 (1)

W. Ao, L. Wang, and J. Shan, “Point cloud classification by fusing supervoxel segmentation with multi-scale features,” Int. Arch. Photogramm. Remote. Sens. Spatial Inf. Sci. XLII-2/W13, 919–925 (2019).
[Crossref]

2018 (3)

Y. Ben-Shabat, T. Avraham, M. Lindenbaum, and A. Fischer, “Graph based over-segmentation methods for 3D point clouds,” Comput. Vis. Image Underst. 174, 12–23 (2018).
[Crossref]

T. Czerniawski, B. Sankaran, M. Nahangi, C. Haas, and F. Leite, “6D DBSCAN-based segmentation of building point clouds for planar object classification,” Automat. Constr. 88, 44–58 (2018).
[Crossref]

S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” Int. J. Robot. Res. 37, 421–436 (2018).
[Crossref]

2017 (4)

G. Vosselman, M. Coenen, and F. Rottensteiner, “Contextual segment-based classification of airborne laser scanner data,” ISPRS. J. Photogramm. 128, 354–371 (2017).
[Crossref]

Y. Xu, L. Hoegner, S. Tuttas, and U. Stilla, “Voxel-and graph-based point cloud segmentation of 3D scenes using perceptual grouping laws,” ISPRS Ann. Photogram. Remote Sens. Spatial Inf. Sci. IV-1/W1, 43–50 (2017).
[Crossref]

B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, “Yale-CMU-Berkeley dataset for robotic manipulation research,” Int. J. Robot. Res. 36, 261–268 (2017).
[Crossref]

U. Asif, M. Bennamoun, and F. A. Sohel, “RGB-D object recognition and grasp detection using hierarchical cascaded forests,” IEEE Trans. Robot. 33, 547–564 (2017).
[Crossref]

2016 (1)

C. Kim, A. Habib, M. Pyeon, G.-R. Kwon, J. Jung, and J. Heo, “Segmentation of planar surfaces from laser scanning data using the magnitude of normal position vector for adaptive neighborhoods,” Sensors 16, 140 (2016).
[Crossref]

2015 (2)

B. Yang, Z. Dong, G. Zhao, and W. Dai, “Hierarchical extraction of urban objects from mobile laser scanning data,” ISPRS. J. Photogramm. 99, 45–57 (2015).
[Crossref]

A.-V. Vo, L. Truong-Hong, D. F. Laefer, and M. Bertolotto, “Octree-based region growing for point cloud segmentation,” ISPRS. J. Photogramm. 104, 88–100 (2015).
[Crossref]

2014 (2)

M. Awrangjeb and C. S. Fraser, “Automatic segmentation of raw LiDAR data for extraction of building roofs,” Remote Sens. 6, 3716–3751 (2014).
[Crossref]

A. Dutta, J. Engels, and M. Hahn, “A distance-weighted graph-cut method for the segmentation of laser point clouds,” Int. Arch. Photogramm. Remote Sens. Spatial. Inf. Sci. XL-3, 81–88 (2014).
[Crossref]

2012 (3)

S. Ural and J. Shan, “Min-cut based segmentation of airborne LiDAR point clouds,” Int. Arch. Photogramm. Remote Sens. Spatial. Inf. Sci. XXXIX-B3, 167–172 (2012).
[Crossref]

H. Isack and Y. Boykov, “Energy-based geometric multi-model fitting,” Int. J. Comput. Vis. 97, 123–147 (2012).
[Crossref]

A. Delong, A. Osokin, H. N. Isack, and Y. Boykov, “Fast approximate energy minimization with label costs,” Int. J. Comput. Vis. 96, 1–27 (2012).
[Crossref]

2011 (1)

M. Wang and Y. H. Tseng, “Incremental segmentation of lidar point clouds with an octree-structured voxel space,” Photogramm. Rec. 26, 32–57 (2011).
[Crossref]

2007 (1)

R. Schnabel, R. Wahl, and R. Klein, “Efficient RANSAC for point-cloud shape detection,” Comput. Graph. Forum 26, 214–226 (2007).
[Crossref]

2006 (2)

T. Rabbani, F. Van Den Heuvel, and G. Vosselman, “Segmentation of point clouds using smoothness constraint,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 36, 248–253 (2006).

Y. Boykov and G. Funka-Lea, “Graph cuts and efficient ND image segmentation,” Int. J. Comput. Vis. 70, 109–131 (2006).
[Crossref]

2004 (2)

P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient graph-based image segmentation,” Int. J. Comput. Vis. 59, 167–181 (2004).
[Crossref]

C. Rother, V. Kolmogorov, and A. Blake, “’‘GrabCut’ interactive foreground extraction using iterated graph cuts,” ACM Trans. Graph. 23, 309–314 (2004).
[Crossref]

2001 (1)

Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE. Trans. Pattern. Anal. 23, 1222–1239 (2001).
[Crossref]

Abbeel, P.

B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, “Yale-CMU-Berkeley dataset for robotic manipulation research,” Int. J. Robot. Res. 36, 261–268 (2017).
[Crossref]

Abramov, A.

J. Papon, A. Abramov, M. Schoeler, and F. Worgotter, “Voxel cloud connectivity segmentation-supervoxels for point clouds,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2027–2034.

Albanie, S.

D. Novotny, S. Albanie, D. Larlus, and A. Vedaldi, “Semi-convolutional operators for instance segmentation,” in European Conference on Computer Vision (ECCV) (2018), pp. 86–102.

Aldoma, A.

A. Aldoma, T. Mörwald, J. Prankl, and M. Vincze, “Segmentation of depth data in piece-wise smooth parametric surfaces,” in Computer Vision Winter Workshop (CVWW) (2015).

Ao, W.

W. Ao, L. Wang, and J. Shan, “Point cloud classification by fusing supervoxel segmentation with multi-scale features,” Int. Arch. Photogramm. Remote. Sens. Spatial Inf. Sci. XLII-2/W13, 919–925 (2019).
[Crossref]

Asif, U.

U. Asif, M. Bennamoun, and F. A. Sohel, “RGB-D object recognition and grasp detection using hierarchical cascaded forests,” IEEE Trans. Robot. 33, 547–564 (2017).
[Crossref]

Avraham, T.

Y. Ben-Shabat, T. Avraham, M. Lindenbaum, and A. Fischer, “Graph based over-segmentation methods for 3D point clouds,” Comput. Vis. Image Underst. 174, 12–23 (2018).
[Crossref]

Awrangjeb, M.

M. Awrangjeb and C. S. Fraser, “Automatic segmentation of raw LiDAR data for extraction of building roofs,” Remote Sens. 6, 3716–3751 (2014).
[Crossref]

Baykan, N. A.

A. Saglam, H. B. Makineci, N. A. Baykan, and Ö. K. Baykan, “Boundary constrained voxel segmentation for 3D point clouds using local geometric differences,” Expert. Syst. Appl. 157, 113439 (2020).
[Crossref]

Baykan, Ö. K.

A. Saglam, H. B. Makineci, N. A. Baykan, and Ö. K. Baykan, “Boundary constrained voxel segmentation for 3D point clouds using local geometric differences,” Expert. Syst. Appl. 157, 113439 (2020).
[Crossref]

Beetz, M.

R. B. Rusu, Z. C. Marton, N. Blodow, A. Holzbach, and M. Beetz, “Model-based and learned semantic object labeling in 3D point cloud maps of kitchen environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2009), pp. 3601–3608.

Bennamoun, M.

U. Asif, M. Bennamoun, and F. A. Sohel, “RGB-D object recognition and grasp detection using hierarchical cascaded forests,” IEEE Trans. Robot. 33, 547–564 (2017).
[Crossref]

Ben-Shabat, Y.

Y. Ben-Shabat, T. Avraham, M. Lindenbaum, and A. Fischer, “Graph based over-segmentation methods for 3D point clouds,” Comput. Vis. Image Underst. 174, 12–23 (2018).
[Crossref]

Bergström, N.

G. Kootstra, N. Bergström, and D. Kragic, “Fast and automatic detection and segmentation of unknown objects,” in 10th IEEE-RAS International Conference on Humanoid Robots (2010), pp. 442–447.

Bertolotto, M.

A.-V. Vo, L. Truong-Hong, D. F. Laefer, and M. Bertolotto, “Octree-based region growing for point cloud segmentation,” ISPRS. J. Photogramm. 104, 88–100 (2015).
[Crossref]

Bischof, H.

M. Werlberger, T. Pock, M. Unger, and H. Bischof, “A variational model for interactive shape prior segmentation and real-time tracking,” in International Conference on Scale Space and Variational Methods in Computer Vision (Springer, 2009), pp. 200–211.

Björkman, M.

M. Johnson-Roberson, J. Bohg, M. Björkman, and D. Kragic, “Attention-based active 3D point cloud segmentation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2010), pp. 1165–1170.

Blake, A.

C. Rother, V. Kolmogorov, and A. Blake, “’‘GrabCut’ interactive foreground extraction using iterated graph cuts,” ACM Trans. Graph. 23, 309–314 (2004).
[Crossref]

Blodow, N.

R. B. Rusu, Z. C. Marton, N. Blodow, A. Holzbach, and M. Beetz, “Model-based and learned semantic object labeling in 3D point cloud maps of kitchen environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2009), pp. 3601–3608.

Bo, L.

K. Lai, L. Bo, X. Ren, and D. Fox, “A large-scale hierarchical multi-view RGB-D object dataset,” in International Conference on Robotics and Automation (ICRA) (2011), pp. 1817–1824.

Bohg, J.

L. Shao, Y. Tian, and J. Bohg, “ClusterNet: 3D instance segmentation in RGB-D images,” arXiv:1807.08894 (2018).

M. Johnson-Roberson, J. Bohg, M. Björkman, and D. Kragic, “Attention-based active 3D point cloud segmentation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2010), pp. 1165–1170.

Boyer, K. L.

K. L. Boyer and S. Sarkar, “Guest editors’ introduction: perceptual organization in computer vision: status, challenges, and potential,” in Computer Vision and Image Understanding (Academic, 1999), vol. 76, pp. 1–5.

Boykov, Y.

H. Isack and Y. Boykov, “Energy-based geometric multi-model fitting,” Int. J. Comput. Vis. 97, 123–147 (2012).
[Crossref]

A. Delong, A. Osokin, H. N. Isack, and Y. Boykov, “Fast approximate energy minimization with label costs,” Int. J. Comput. Vis. 96, 1–27 (2012).
[Crossref]

Y. Boykov and G. Funka-Lea, “Graph cuts and efficient ND image segmentation,” Int. J. Comput. Vis. 70, 109–131 (2006).
[Crossref]

Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE. Trans. Pattern. Anal. 23, 1222–1239 (2001).
[Crossref]

Brabandere, B. D.

D. Neven, B. D. Brabandere, M. Proesmans, and L. V. Gool, “Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 8837–8845.

Bruce, J.

B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, “Yale-CMU-Berkeley dataset for robotic manipulation research,” Int. J. Robot. Res. 36, 261–268 (2017).
[Crossref]

Calli, B.

B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, “Yale-CMU-Berkeley dataset for robotic manipulation research,” Int. J. Robot. Res. 36, 261–268 (2017).
[Crossref]

Caputo, B.

M. R. Loghmani, B. Caputo, and M. Vincze, “Recognizing objects in-the-wild: where do we stand?” in International Conference on Robotics and Automation (ICRA) (2018), pp. 2170–2177.

Choi, S.

S. Choi, Q.-Y. Zhou, and V. Koltun, “Robust reconstruction of indoor scenes,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 5556–5565.

Christoph Stein, S.

S. Christoph Stein, M. Schoeler, J. Papon, and F. Worgotter, “Object partitioning using local convexity,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014), pp. 304–311.

Coenen, M.

G. Vosselman, M. Coenen, and F. Rottensteiner, “Contextual segment-based classification of airborne laser scanner data,” ISPRS. J. Photogramm. 128, 354–371 (2017).
[Crossref]

Collobert, R.

P. O. Pinheiro, R. Collobert, and P. Dollár, “Learning to segment object candidates,” in Advances in Neural Information Processing Systems (2015), pp. 1990–1998.

P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár, “Learning to refine object segments,” in European Conference on Computer Vision (ECCV) (2016), pp. 75–91.

Cremers, D.

E. Strekalovskiy and D. Cremers, “Real-time minimization of the piecewise smooth Mumford-Shah functional,” in European Conference on Computer Vision (ECCV) (2014), pp. 127–141.

Czerniawski, T.

T. Czerniawski, B. Sankaran, M. Nahangi, C. Haas, and F. Leite, “6D DBSCAN-based segmentation of building point clouds for planar object classification,” Automat. Constr. 88, 44–58 (2018).
[Crossref]

Dai, W.

B. Yang, Z. Dong, G. Zhao, and W. Dai, “Hierarchical extraction of urban objects from mobile laser scanning data,” ISPRS. J. Photogramm. 99, 45–57 (2015).
[Crossref]

Danielczuk, M.

M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler, and K. Goldberg, “Segmenting unknown 3D objects from real depth images using mask R-CNN trained on synthetic data,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 7283–7290.

Darrell, T.

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 3431–3440.

Delong, A.

A. Delong, A. Osokin, H. N. Isack, and Y. Boykov, “Fast approximate energy minimization with label costs,” Int. J. Comput. Vis. 96, 1–27 (2012).
[Crossref]

Do, T. T.

T. Pham, T. T. Do, N. Sunderhauf, and I. Reid, “SceneCut: joint geometric and object segmentation for indoor scenes,” in IEEE International Conference on Robotics and Automation (ICRA) (2018), pp. 3213–3220.

Dollar, A. M.

B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, “Yale-CMU-Berkeley dataset for robotic manipulation research,” Int. J. Robot. Res. 36, 261–268 (2017).
[Crossref]

Dollár, P.

P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár, “Learning to refine object segments,” in European Conference on Computer Vision (ECCV) (2016), pp. 75–91.

P. O. Pinheiro, R. Collobert, and P. Dollár, “Learning to segment object candidates,” in Advances in Neural Information Processing Systems (2015), pp. 1990–1998.

Dong, Z.

B. Yang, Z. Dong, G. Zhao, and W. Dai, “Hierarchical extraction of urban objects from mobile laser scanning data,” ISPRS. J. Photogramm. 99, 45–57 (2015).
[Crossref]

Dutta, A.

A. Dutta, J. Engels, and M. Hahn, “A distance-weighted graph-cut method for the segmentation of laser point clouds,” Int. Arch. Photogramm. Remote Sens. Spatial. Inf. Sci. XL-3, 81–88 (2014).
[Crossref]

Engels, J.

A. Dutta, J. Engels, and M. Hahn, “A distance-weighted graph-cut method for the segmentation of laser point clouds,” Int. Arch. Photogramm. Remote Sens. Spatial. Inf. Sci. XL-3, 81–88 (2014).
[Crossref]

Erskine, J.

A. Milan, T. Pham, K. Vijay, D. Morrison, A. W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, and T. Hunn, “Semantic segmentation from limited training data,” in International Conference on Robotics and Automation (ICRA) (2018), pp. 1908–1915.

Felzenszwalb, P. F.

P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient graph-based image segmentation,” Int. J. Comput. Vis. 59, 167–181 (2004).
[Crossref]

Fergus, R.

N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor segmentation and support inference from rgbd images,” in European Conference on Computer Vision (ECCV) (2012), pp. 746–760.

Fischer, A.

Y. Ben-Shabat, T. Avraham, M. Lindenbaum, and A. Fischer, “Graph based over-segmentation methods for 3D point clouds,” Comput. Vis. Image Underst. 174, 12–23 (2018).
[Crossref]

Fischinger, D.

M. Suchi, T. Patten, D. Fischinger, and M. Vincze, “EasyLabel: a semi-automatic pixel-wise object annotation tool for creating robotic RGB-D datasets,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 6678–6684.

Fox, D.

K. Lai, L. Bo, X. Ren, and D. Fox, “A large-scale hierarchical multi-view RGB-D object dataset,” in International Conference on Robotics and Automation (ICRA) (2011), pp. 1817–1824.

C. Xie, Y. Xiang, A. Mousavian, and D. Fox, “The best of both modes: separately leveraging RGB and depth for unseen object instance segmentation,” in Conference on robot learning (PMLR) (2020), pp. 1369–1378.

Y. Xiang, C. Xie, A. Mousavian, and D. Fox, “Learning RGB-D feature embeddings for unseen object instance segmentation,” arXiv:2007.15157 (2020).

Fraser, C. S.

M. Awrangjeb and C. S. Fraser, “Automatic segmentation of raw LiDAR data for extraction of building roofs,” Remote Sens. 6, 3716–3751 (2014).
[Crossref]

Funka-Lea, G.

Y. Boykov and G. Funka-Lea, “Graph cuts and efficient ND image segmentation,” Int. J. Comput. Vis. 70, 109–131 (2006).
[Crossref]

Funkhouser, T.

A. Golovinskiy and T. Funkhouser, “Min-cut based segmentation of point clouds,” in 12th International Conference on Computer Vision (ICCV) (2009), pp. 39–46.

Goldberg, K.

M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler, and K. Goldberg, “Segmenting unknown 3D objects from real depth images using mask R-CNN trained on synthetic data,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 7283–7290.

Golovinskiy, A.

A. Golovinskiy and T. Funkhouser, “Min-cut based segmentation of point clouds,” in 12th International Conference on Computer Vision (ICCV) (2009), pp. 39–46.

Gool, L. V.

D. Neven, B. D. Brabandere, M. Proesmans, and L. V. Gool, “Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 8837–8845.

Gould, S.

T. T. Pham, I. Reid, Y. Latif, and S. Gould, “Hierarchical higher-order regression forest fields: an application to 3D indoor scene labelling,” in IEEE International Conference on Computer Vision (ICCV) (2015), pp. 2246–2254.

Greenspan, M.

Y. Ioannou, B. Taati, R. Harrap, and M. Greenspan, “Difference of normals as a multi-scale operator in unorganized point clouds,” in 2nd International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission (2012), pp. 501–508.

Grinover, R.

A. Milan, T. Pham, K. Vijay, D. Morrison, A. W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, and T. Hunn, “Semantic segmentation from limited training data,” in International Conference on Robotics and Automation (ICRA) (2018), pp. 1908–1915.

Gupta, S.

M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler, and K. Goldberg, “Segmenting unknown 3D objects from real depth images using mask R-CNN trained on synthetic data,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 7283–7290.

Gurman, A.

A. Milan, T. Pham, K. Vijay, D. Morrison, A. W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, and T. Hunn, “Semantic segmentation from limited training data,” in International Conference on Robotics and Automation (ICRA) (2018), pp. 1908–1915.

Haas, C.

T. Czerniawski, B. Sankaran, M. Nahangi, C. Haas, and F. Leite, “6D DBSCAN-based segmentation of building point clouds for planar object classification,” Automat. Constr. 88, 44–58 (2018).
[Crossref]

Habib, A.

C. Kim, A. Habib, M. Pyeon, G.-R. Kwon, J. Jung, and J. Heo, “Segmentation of planar surfaces from laser scanning data using the magnitude of normal position vector for adaptive neighborhoods,” Sensors 16, 140 (2016).
[Crossref]

Hahn, M.

A. Dutta, J. Engels, and M. Hahn, “A distance-weighted graph-cut method for the segmentation of laser point clouds,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XL-3, 81–88 (2014).
[Crossref]

Harrap, R.

Y. Ioannou, B. Taati, R. Harrap, and M. Greenspan, “Difference of normals as a multi-scale operator in unorganized point clouds,” in 2nd International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission (2012), pp. 501–508.

Haschke, R.

A. Ückermann, R. Haschke, and H. Ritter, “Real-time 3D segmentation of cluttered scenes for robot grasping,” in 12th IEEE-RAS International Conference on Humanoid Robots (2012), pp. 198–203.

Heo, J.

C. Kim, A. Habib, M. Pyeon, G.-R. Kwon, J. Jung, and J. Heo, “Segmentation of planar surfaces from laser scanning data using the magnitude of normal position vector for adaptive neighborhoods,” Sensors 16, 140 (2016).
[Crossref]

Hoegner, L.

Y. Xu, L. Hoegner, S. Tuttas, and U. Stilla, “Voxel- and graph-based point cloud segmentation of 3D scenes using perceptual grouping laws,” ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. IV-1/W1, 43–50 (2017).
[Crossref]

Hoiem, D.

N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor segmentation and support inference from RGB-D images,” in European Conference on Computer Vision (ECCV) (2012), pp. 746–760.

Holzbach, A.

R. B. Rusu, Z. C. Marton, N. Blodow, A. Holzbach, and M. Beetz, “Model-based and learned semantic object labeling in 3D point cloud maps of kitchen environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2009), pp. 3601–3608.

Huang, G.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 4700–4708.

Hunn, T.

A. Milan, T. Pham, K. Vijay, D. Morrison, A. W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, and T. Hunn, “Semantic segmentation from limited training data,” in International Conference on Robotics and Automation (ICRA) (2018), pp. 1908–1915.

Huttenlocher, D. P.

P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient graph-based image segmentation,” Int. J. Comput. Vis. 59, 167–181 (2004).
[Crossref]

Ibarz, J.

S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” Int. J. Robot. Res. 37, 421–436 (2018).
[Crossref]

Ioannou, Y.

Y. Ioannou, B. Taati, R. Harrap, and M. Greenspan, “Difference of normals as a multi-scale operator in unorganized point clouds,” in 2nd International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission (2012), pp. 501–508.

Isack, H.

H. Isack and Y. Boykov, “Energy-based geometric multi-model fitting,” Int. J. Comput. Vis. 97, 123–147 (2012).
[Crossref]

Isack, H. N.

A. Delong, A. Osokin, H. N. Isack, and Y. Boykov, “Fast approximate energy minimization with label costs,” Int. J. Comput. Vis. 96, 1–27 (2012).
[Crossref]

Jiaojiao, T.

Y. Xie, T. Jiaojiao, and X. Zhu, “Linking points with labels in 3D: a review of point cloud semantic segmentation,” IEEE Geosci. Remote Sens. Mag. (2020).
[Crossref]

Johnson-Roberson, M.

M. Johnson-Roberson, J. Bohg, M. Björkman, and D. Kragic, “Attention-based active 3D point cloud segmentation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2010), pp. 1165–1170.

Jung, J.

C. Kim, A. Habib, M. Pyeon, G.-R. Kwon, J. Jung, and J. Heo, “Segmentation of planar surfaces from laser scanning data using the magnitude of normal position vector for adaptive neighborhoods,” Sensors 16, 140 (2016).
[Crossref]

Kanan, C.

S. Kumra and C. Kanan, “Robotic grasp detection using deep convolutional neural networks,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2017), pp. 769–776.

Kim, C.

C. Kim, A. Habib, M. Pyeon, G.-R. Kwon, J. Jung, and J. Heo, “Segmentation of planar surfaces from laser scanning data using the magnitude of normal position vector for adaptive neighborhoods,” Sensors 16, 140 (2016).
[Crossref]

Klein, R.

R. Schnabel, R. Wahl, and R. Klein, “Efficient RANSAC for point-cloud shape detection,” Comput. Graph. Forum 26, 214–226 (2007).
[Crossref]

Kohli, P.

N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor segmentation and support inference from RGB-D images,” in European Conference on Computer Vision (ECCV) (2012), pp. 746–760.

Kolmogorov, V.

C. Rother, V. Kolmogorov, and A. Blake, “‘GrabCut’: interactive foreground extraction using iterated graph cuts,” ACM Trans. Graph. 23, 309–314 (2004).
[Crossref]

S. Vicente, V. Kolmogorov, and C. Rother, “Joint optimization of segmentation and appearance models,” in 12th International Conference on Computer Vision (ICCV) (2009), pp. 755–762.

Koltun, V.

S. Choi, Q.-Y. Zhou, and V. Koltun, “Robust reconstruction of indoor scenes,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 5556–5565.

Konolige, K.

B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, “Yale-CMU-Berkeley dataset for robotic manipulation research,” Int. J. Robot. Res. 36, 261–268 (2017).
[Crossref]

Kootstra, G.

G. Kootstra, N. Bergström, and D. Kragic, “Fast and automatic detection and segmentation of unknown objects,” in 10th IEEE-RAS International Conference on Humanoid Robots (2010), pp. 442–447.

Kragic, D.

G. Kootstra, N. Bergström, and D. Kragic, “Fast and automatic detection and segmentation of unknown objects,” in 10th IEEE-RAS International Conference on Humanoid Robots (2010), pp. 442–447.

M. Johnson-Roberson, J. Bohg, M. Björkman, and D. Kragic, “Attention-based active 3D point cloud segmentation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2010), pp. 1165–1170.

Krizhevsky, A.

S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” Int. J. Robot. Res. 37, 421–436 (2018).
[Crossref]

Kulvicius, T.

S. C. Stein, F. Wörgötter, M. Schoeler, J. Papon, and T. Kulvicius, “Convexity based object partitioning for robot applications,” in IEEE International Conference on Robotics and Automation (ICRA) (2014), pp. 3213–3220.

Kumra, S.

S. Kumra and C. Kanan, “Robotic grasp detection using deep convolutional neural networks,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2017), pp. 769–776.

Kwon, G.-R.

C. Kim, A. Habib, M. Pyeon, G.-R. Kwon, J. Jung, and J. Heo, “Segmentation of planar surfaces from laser scanning data using the magnitude of normal position vector for adaptive neighborhoods,” Sensors 16, 140 (2016).
[Crossref]

Laefer, D. F.

A.-V. Vo, L. Truong-Hong, D. F. Laefer, and M. Bertolotto, “Octree-based region growing for point cloud segmentation,” ISPRS J. Photogramm. Remote Sens. 104, 88–100 (2015).
[Crossref]

Lai, K.

K. Lai, L. Bo, X. Ren, and D. Fox, “A large-scale hierarchical multi-view RGB-D object dataset,” in International Conference on Robotics and Automation (ICRA) (2011), pp. 1817–1824.

Landrieu, L.

L. Landrieu and M. Simonovsky, “Large-scale point cloud semantic segmentation with superpoint graphs,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 4558–4567.

Larlus, D.

D. Novotny, S. Albanie, D. Larlus, and A. Vedaldi, “Semi-convolutional operators for instance segmentation,” in European Conference on Computer Vision (ECCV) (2018), pp. 86–102.

Latif, Y.

T. T. Pham, I. Reid, Y. Latif, and S. Gould, “Hierarchical higher-order regression forest fields: an application to 3D indoor scene labelling,” in IEEE International Conference on Computer Vision (ICCV) (2015), pp. 2246–2254.

Le, B.

A. Nguyen and B. Le, “3D point cloud segmentation: a survey,” in 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM) (2013), pp. 225–230.

Lee, A.

M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler, and K. Goldberg, “Segmenting unknown 3D objects from real depth images using mask R-CNN trained on synthetic data,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 7283–7290.

Leite, F.

T. Czerniawski, B. Sankaran, M. Nahangi, C. Haas, and F. Leite, “6D DBSCAN-based segmentation of building point clouds for planar object classification,” Automat. Constr. 88, 44–58 (2018).
[Crossref]

Levine, S.

S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” Int. J. Robot. Res. 37, 421–436 (2018).
[Crossref]

Li, A.

M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler, and K. Goldberg, “Segmenting unknown 3D objects from real depth images using mask R-CNN trained on synthetic data,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 7283–7290.

Lin, T.-Y.

P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár, “Learning to refine object segments,” in European Conference on Computer Vision (ECCV) (2016), pp. 75–91.

Lindenbaum, M.

Y. Ben-Shabat, T. Avraham, M. Lindenbaum, and A. Fischer, “Graph based over-segmentation methods for 3D point clouds,” Comput. Vis. Image Underst. 174, 12–23 (2018).
[Crossref]

Liu, L.

A. Milan, T. Pham, K. Vijay, D. Morrison, A. W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, and T. Hunn, “Semantic segmentation from limited training data,” in International Conference on Robotics and Automation (ICRA) (2018), pp. 1908–1915.

Liu, Z.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 4700–4708.

Loghmani, M. R.

M. R. Loghmani, B. Caputo, and M. Vincze, “Recognizing objects in-the-wild: where do we stand?” in International Conference on Robotics and Automation (ICRA) (2018), pp. 2170–2177.

Long, J.

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 3431–3440.

Mahler, J.

M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler, and K. Goldberg, “Segmenting unknown 3D objects from real depth images using mask R-CNN trained on synthetic data,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 7283–7290.

Makineci, H. B.

A. Saglam, H. B. Makineci, N. A. Baykan, and Ö. K. Baykan, “Boundary constrained voxel segmentation for 3D point clouds using local geometric differences,” Expert. Syst. Appl. 157, 113439 (2020).
[Crossref]

Marton, Z. C.

R. B. Rusu, Z. C. Marton, N. Blodow, A. Holzbach, and M. Beetz, “Model-based and learned semantic object labeling in 3D point cloud maps of kitchen environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2009), pp. 3601–3608.

Matl, M.

M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler, and K. Goldberg, “Segmenting unknown 3D objects from real depth images using mask R-CNN trained on synthetic data,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 7283–7290.

Milan, A.

A. Milan, T. Pham, K. Vijay, D. Morrison, A. W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, and T. Hunn, “Semantic segmentation from limited training data,” in International Conference on Robotics and Automation (ICRA) (2018), pp. 1908–1915.

Morrison, D.

A. Milan, T. Pham, K. Vijay, D. Morrison, A. W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, and T. Hunn, “Semantic segmentation from limited training data,” in International Conference on Robotics and Automation (ICRA) (2018), pp. 1908–1915.

Mörwald, T.

A. Aldoma, T. Mörwald, J. Prankl, and M. Vincze, “Segmentation of depth data in piece-wise smooth parametric surfaces,” in Computer Vision Winter Workshop (CVWW) (2015).

A. Richtsfeld, T. Mörwald, J. Prankl, M. Zillich, and M. Vincze, “Segmentation of unknown objects in indoor environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2012), pp. 4791–4796.

T. Mörwald, A. Richtsfeld, J. Prankl, M. Zillich, and M. Vincze, “Geometric data abstraction using B-splines for range image segmentation,” in International Conference on Robotics and Automation (ICRA) (2013), pp. 148–153.

Mousavian, A.

Y. Xiang, C. Xie, A. Mousavian, and D. Fox, “Learning RGB-D feature embeddings for unseen object instance segmentation,” arXiv:2007.15157 (2020).

C. Xie, Y. Xiang, A. Mousavian, and D. Fox, “The best of both modes: separately leveraging RGB and depth for unseen object instance segmentation,” in Conference on Robot Learning (PMLR) (2020), pp. 1369–1378.

Nahangi, M.

T. Czerniawski, B. Sankaran, M. Nahangi, C. Haas, and F. Leite, “6D DBSCAN-based segmentation of building point clouds for planar object classification,” Automat. Constr. 88, 44–58 (2018).
[Crossref]

Neubeck, A.

A. Neubeck and L. Van Gool, “Efficient non-maximum suppression,” in 18th International Conference on Pattern Recognition (ICPR) (2006), pp. 850–855.

Neven, D.

D. Neven, B. D. Brabandere, M. Proesmans, and L. V. Gool, “Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 8837–8845.

Nguyen, A.

A. Nguyen and B. Le, “3D point cloud segmentation: a survey,” in 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM) (2013), pp. 225–230.

Novotny, D.

D. Novotny, S. Albanie, D. Larlus, and A. Vedaldi, “Semi-convolutional operators for instance segmentation,” in European Conference on Computer Vision (ECCV) (2018), pp. 86–102.

Osokin, A.

A. Delong, A. Osokin, H. N. Isack, and Y. Boykov, “Fast approximate energy minimization with label costs,” Int. J. Comput. Vis. 96, 1–27 (2012).
[Crossref]

Papon, J.

S. C. Stein, F. Wörgötter, M. Schoeler, J. Papon, and T. Kulvicius, “Convexity based object partitioning for robot applications,” in IEEE International Conference on Robotics and Automation (ICRA) (2014), pp. 3213–3220.

S. Christoph Stein, M. Schoeler, J. Papon, and F. Worgotter, “Object partitioning using local convexity,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014), pp. 304–311.

J. Papon, A. Abramov, M. Schoeler, and F. Worgotter, “Voxel cloud connectivity segmentation-supervoxels for point clouds,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2027–2034.

Pastor, P.

S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” Int. J. Robot. Res. 37, 421–436 (2018).
[Crossref]

Patten, T.

M. Suchi, T. Patten, D. Fischinger, and M. Vincze, “EasyLabel: a semi-automatic pixel-wise object annotation tool for creating robotic RGB-D datasets,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 6678–6684.

Pham, T.

A. Milan, T. Pham, K. Vijay, D. Morrison, A. W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, and T. Hunn, “Semantic segmentation from limited training data,” in International Conference on Robotics and Automation (ICRA) (2018), pp. 1908–1915.

T. Pham, T. T. Do, N. Sunderhauf, and I. Reid, “SceneCut: joint geometric and object segmentation for indoor scenes,” in IEEE International Conference on Robotics and Automation (ICRA) (2018), pp. 3213–3220.

Pham, T. T.

T. T. Pham, I. Reid, Y. Latif, and S. Gould, “Hierarchical higher-order regression forest fields: an application to 3D indoor scene labelling,” in IEEE International Conference on Computer Vision (ICCV) (2015), pp. 2246–2254.

Pinheiro, P. O.

P. O. Pinheiro, R. Collobert, and P. Dollár, “Learning to segment object candidates,” in Advances in Neural Information Processing Systems (2015), pp. 1990–1998.

P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár, “Learning to refine object segments,” in European Conference on Computer Vision (ECCV) (2016), pp. 75–91.

Pock, T.

M. Werlberger, T. Pock, M. Unger, and H. Bischof, “A variational model for interactive shape prior segmentation and real-time tracking,” in International Conference on Scale Space and Variational Methods in Computer Vision (Springer, 2009), pp. 200–211.

Potapova, E.

E. Potapova, A. Richtsfeld, M. Zillich, and M. Vincze, “Incremental attention-driven object segmentation,” in IEEE-RAS International Conference on Humanoid Robots (2014), pp. 252–258.

Prankl, J.

A. Aldoma, T. Mörwald, J. Prankl, and M. Vincze, “Segmentation of depth data in piece-wise smooth parametric surfaces,” in Computer Vision Winter Workshop (CVWW) (2015).

A. Richtsfeld, T. Mörwald, J. Prankl, M. Zillich, and M. Vincze, “Segmentation of unknown objects in indoor environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2012), pp. 4791–4796.

T. Mörwald, A. Richtsfeld, J. Prankl, M. Zillich, and M. Vincze, “Geometric data abstraction using B-splines for range image segmentation,” in International Conference on Robotics and Automation (ICRA) (2013), pp. 148–153.

Proesmans, M.

D. Neven, B. D. Brabandere, M. Proesmans, and L. V. Gool, “Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 8837–8845.

Pyeon, M.

C. Kim, A. Habib, M. Pyeon, G.-R. Kwon, J. Jung, and J. Heo, “Segmentation of planar surfaces from laser scanning data using the magnitude of normal position vector for adaptive neighborhoods,” Sensors 16, 140 (2016).
[Crossref]

Quillen, D.

S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” Int. J. Robot. Res. 37, 421–436 (2018).
[Crossref]

Rabbani, T.

T. Rabbani, F. Van Den Heuvel, and G. Vosselman, “Segmentation of point clouds using smoothness constraint,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 36, 248–253 (2006).

Reid, I.

T. T. Pham, I. Reid, Y. Latif, and S. Gould, “Hierarchical higher-order regression forest fields: an application to 3D indoor scene labelling,” in IEEE International Conference on Computer Vision (ICCV) (2015), pp. 2246–2254.

T. Pham, T. T. Do, N. Sunderhauf, and I. Reid, “SceneCut: joint geometric and object segmentation for indoor scenes,” in IEEE International Conference on Robotics and Automation (ICRA) (2018), pp. 3213–3220.

Ren, X.

K. Lai, L. Bo, X. Ren, and D. Fox, “A large-scale hierarchical multi-view RGB-D object dataset,” in International Conference on Robotics and Automation (ICRA) (2011), pp. 1817–1824.

Richtsfeld, A.

A. Richtsfeld, T. Mörwald, J. Prankl, M. Zillich, and M. Vincze, “Segmentation of unknown objects in indoor environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2012), pp. 4791–4796.

T. Mörwald, A. Richtsfeld, J. Prankl, M. Zillich, and M. Vincze, “Geometric data abstraction using B-splines for range image segmentation,” in International Conference on Robotics and Automation (ICRA) (2013), pp. 148–153.

E. Potapova, A. Richtsfeld, M. Zillich, and M. Vincze, “Incremental attention-driven object segmentation,” in IEEE-RAS International Conference on Humanoid Robots (2014), pp. 252–258.

Ritter, H.

A. Ückermann, R. Haschke, and H. Ritter, “Real-time 3D segmentation of cluttered scenes for robot grasping,” in 12th IEEE-RAS International Conference on Humanoid Robots (2012), pp. 198–203.

Rother, C.

C. Rother, V. Kolmogorov, and A. Blake, “‘GrabCut’: interactive foreground extraction using iterated graph cuts,” ACM Trans. Graph. 23, 309–314 (2004).
[Crossref]

S. Vicente, V. Kolmogorov, and C. Rother, “Joint optimization of segmentation and appearance models,” in 12th International Conference on Computer Vision (ICCV) (2009), pp. 755–762.

Rottensteiner, F.

G. Vosselman, M. Coenen, and F. Rottensteiner, “Contextual segment-based classification of airborne laser scanner data,” ISPRS J. Photogramm. Remote Sens. 128, 354–371 (2017).
[Crossref]

Rusu, R. B.

R. B. Rusu, Z. C. Marton, N. Blodow, A. Holzbach, and M. Beetz, “Model-based and learned semantic object labeling in 3D point cloud maps of kitchen environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2009), pp. 3601–3608.

Saglam, A.

A. Saglam, H. B. Makineci, N. A. Baykan, and Ö. K. Baykan, “Boundary constrained voxel segmentation for 3D point clouds using local geometric differences,” Expert. Syst. Appl. 157, 113439 (2020).
[Crossref]

Sankaran, B.

T. Czerniawski, B. Sankaran, M. Nahangi, C. Haas, and F. Leite, “6D DBSCAN-based segmentation of building point clouds for planar object classification,” Automat. Constr. 88, 44–58 (2018).
[Crossref]

Sarkar, S.

K. L. Boyer and S. Sarkar, “Guest editors’ introduction: perceptual organization in computer vision: status, challenges, and potential,” Comput. Vis. Image Underst. 76, 1–5 (1999).

Schnabel, R.

R. Schnabel, R. Wahl, and R. Klein, “Efficient RANSAC for point-cloud shape detection,” Comput. Graph. Forum 26, 214–226 (2007).
[Crossref]

Schoeler, M.

S. C. Stein, F. Wörgötter, M. Schoeler, J. Papon, and T. Kulvicius, “Convexity based object partitioning for robot applications,” in IEEE International Conference on Robotics and Automation (ICRA) (2014), pp. 3213–3220.

S. Christoph Stein, M. Schoeler, J. Papon, and F. Worgotter, “Object partitioning using local convexity,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014), pp. 304–311.

J. Papon, A. Abramov, M. Schoeler, and F. Worgotter, “Voxel cloud connectivity segmentation-supervoxels for point clouds,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2027–2034.

Shan, J.

W. Ao, L. Wang, and J. Shan, “Point cloud classification by fusing supervoxel segmentation with multi-scale features,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII-2/W13, 919–925 (2019).
[Crossref]

S. Ural and J. Shan, “Min-cut based segmentation of airborne LiDAR point clouds,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XXXIX-B3, 167–172 (2012).
[Crossref]

Shao, L.

L. Shao, Y. Tian, and J. Bohg, “ClusterNet: 3D instance segmentation in RGB-D images,” arXiv:1807.08894 (2018).

Shelhamer, E.

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 3431–3440.

Silberman, N.

N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor segmentation and support inference from RGB-D images,” in European Conference on Computer Vision (ECCV) (2012), pp. 746–760.

Simonovsky, M.

L. Landrieu and M. Simonovsky, “Large-scale point cloud semantic segmentation with superpoint graphs,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 4558–4567.

Singh, A.

B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, “Yale-CMU-Berkeley dataset for robotic manipulation research,” Int. J. Robot. Res. 36, 261–268 (2017).
[Crossref]

Sohel, F. A.

U. Asif, M. Bennamoun, and F. A. Sohel, “RGB-D object recognition and grasp detection using hierarchical cascaded forests,” IEEE Trans. Robot. 33, 547–564 (2017).
[Crossref]

Srinivasa, S.

B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, “Yale-CMU-Berkeley dataset for robotic manipulation research,” Int. J. Robot. Res. 36, 261–268 (2017).
[Crossref]

Stein, S. C.

S. C. Stein, F. Wörgötter, M. Schoeler, J. Papon, and T. Kulvicius, “Convexity based object partitioning for robot applications,” in IEEE International Conference on Robotics and Automation (ICRA) (2014), pp. 3213–3220.

Stilla, U.

Y. Xu, L. Hoegner, S. Tuttas, and U. Stilla, “Voxel- and graph-based point cloud segmentation of 3D scenes using perceptual grouping laws,” ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. IV-1/W1, 43–50 (2017).
[Crossref]

Strekalovskiy, E.

E. Strekalovskiy and D. Cremers, “Real-time minimization of the piecewise smooth Mumford-Shah functional,” in European Conference on Computer Vision (ECCV) (2014), pp. 127–141.

Suchi, M.

M. Suchi, T. Patten, D. Fischinger, and M. Vincze, “EasyLabel: a semi-automatic pixel-wise object annotation tool for creating robotic RGB-D datasets,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 6678–6684.

Sugimoto, A.

F. Verdoja, D. Thomas, and A. Sugimoto, “Fast 3D point cloud segmentation using supervoxels with geometry and color for 3D scene understanding,” in International Conference on Multimedia and Expo (ICME) (2017), pp. 1285–1290.

Sunderhauf, N.

T. Pham, T. T. Do, N. Sunderhauf, and I. Reid, “SceneCut: joint geometric and object segmentation for indoor scenes,” in IEEE International Conference on Robotics and Automation (ICRA) (2018), pp. 3213–3220.

Taati, B.

Y. Ioannou, B. Taati, R. Harrap, and M. Greenspan, “Difference of normals as a multi-scale operator in unorganized point clouds,” in 2nd International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission (2012), pp. 501–508.

Thomas, D.

F. Verdoja, D. Thomas, and A. Sugimoto, “Fast 3D point cloud segmentation using supervoxels with geometry and color for 3D scene understanding,” in International Conference on Multimedia and Expo (ICME) (2017), pp. 1285–1290.

Tian, Y.

L. Shao, Y. Tian, and J. Bohg, “ClusterNet: 3D instance segmentation in RGB-D images,” arXiv:1807.08894 (2018).

Tow, A. W.

A. Milan, T. Pham, K. Vijay, D. Morrison, A. W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, and T. Hunn, “Semantic segmentation from limited training data,” in International Conference on Robotics and Automation (ICRA) (2018), pp. 1908–1915.

Truong-Hong, L.

A.-V. Vo, L. Truong-Hong, D. F. Laefer, and M. Bertolotto, “Octree-based region growing for point cloud segmentation,” ISPRS J. Photogramm. Remote Sens. 104, 88–100 (2015).
[Crossref]

Tseng, Y. H.

M. Wang and Y. H. Tseng, “Incremental segmentation of lidar point clouds with an octree-structured voxel space,” Photogramm. Rec. 26, 32–57 (2011).
[Crossref]

Tuttas, S.

Y. Xu, L. Hoegner, S. Tuttas, and U. Stilla, “Voxel- and graph-based point cloud segmentation of 3D scenes using perceptual grouping laws,” ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. IV-1/W1, 43–50 (2017).
[Crossref]

Ückermann, A.

A. Ückermann, R. Haschke, and H. Ritter, “Real-time 3D segmentation of cluttered scenes for robot grasping,” in 12th IEEE-RAS International Conference on Humanoid Robots (2012), pp. 198–203.

Unger, M.

M. Werlberger, T. Pock, M. Unger, and H. Bischof, “A variational model for interactive shape prior segmentation and real-time tracking,” in International Conference on Scale Space and Variational Methods in Computer Vision (Springer, 2009), pp. 200–211.

Ural, S.

S. Ural and J. Shan, “Min-cut based segmentation of airborne LiDAR point clouds,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XXXIX-B3, 167–172 (2012).
[Crossref]

Van Den Heuvel, F.

T. Rabbani, F. Van Den Heuvel, and G. Vosselman, “Segmentation of point clouds using smoothness constraint,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 36, 248–253 (2006).

Van Der Maaten, L.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 4700–4708.

Van Gool, L.

A. Neubeck and L. Van Gool, “Efficient non-maximum suppression,” in 18th International Conference on Pattern Recognition (ICPR) (2006), pp. 850–855.

Vedaldi, A.

D. Novotny, S. Albanie, D. Larlus, and A. Vedaldi, “Semi-convolutional operators for instance segmentation,” in European Conference on Computer Vision (ECCV) (2018), pp. 86–102.

Veksler, O.

Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 1222–1239 (2001).
[Crossref]

Verdoja, F.

F. Verdoja, D. Thomas, and A. Sugimoto, “Fast 3D point cloud segmentation using supervoxels with geometry and color for 3D scene understanding,” in International Conference on Multimedia and Expo (ICME) (2017), pp. 1285–1290.

Vicente, S.

S. Vicente, V. Kolmogorov, and C. Rother, “Joint optimization of segmentation and appearance models,” in 12th International Conference on Computer Vision (ICCV) (2009), pp. 755–762.

Vijay, K.

A. Milan, T. Pham, K. Vijay, D. Morrison, A. W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, and T. Hunn, “Semantic segmentation from limited training data,” in International Conference on Robotics and Automation (ICRA) (2018), pp. 1908–1915.

Vincze, M.

M. R. Loghmani, B. Caputo, and M. Vincze, “Recognizing objects in-the-wild: where do we stand?” in International Conference on Robotics and Automation (ICRA) (2018), pp. 2170–2177.

E. Potapova, A. Richtsfeld, M. Zillich, and M. Vincze, “Incremental attention-driven object segmentation,” in IEEE-RAS International Conference on Humanoid Robots (2014), pp. 252–258.

M. Suchi, T. Patten, D. Fischinger, and M. Vincze, “EasyLabel: a semi-automatic pixel-wise object annotation tool for creating robotic RGB-D datasets,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 6678–6684.

A. Aldoma, T. Mörwald, J. Prankl, and M. Vincze, “Segmentation of depth data in piece-wise smooth parametric surfaces,” in Computer Vision Winter Workshop (CVWW) (2015).

A. Richtsfeld, T. Mörwald, J. Prankl, M. Zillich, and M. Vincze, “Segmentation of unknown objects in indoor environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2012), pp. 4791–4796.

T. Mörwald, A. Richtsfeld, J. Prankl, M. Zillich, and M. Vincze, “Geometric data abstraction using B-splines for range image segmentation,” in International Conference on Robotics and Automation (ICRA) (2013), pp. 148–153.

Vo, A.-V.

A.-V. Vo, L. Truong-Hong, D. F. Laefer, and M. Bertolotto, “Octree-based region growing for point cloud segmentation,” ISPRS J. Photogramm. Remote Sens. 104, 88–100 (2015).
[Crossref]

Vosselman, G.

G. Vosselman, M. Coenen, and F. Rottensteiner, “Contextual segment-based classification of airborne laser scanner data,” ISPRS J. Photogramm. Remote Sens. 128, 354–371 (2017).
[Crossref]

T. Rabbani, F. Van Den Heuvel, and G. Vosselman, “Segmentation of point clouds using smoothness constraint,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 36, 248–253 (2006).

Wahl, R.

R. Schnabel, R. Wahl, and R. Klein, “Efficient RANSAC for point-cloud shape detection,” Comput. Graph. Forum 26, 214–226 (2007).
[Crossref]

Walsman, A.

B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, “Yale-CMU-Berkeley dataset for robotic manipulation research,” Int. J. Robot. Res. 36, 261–268 (2017).
[Crossref]

Wang, L.

W. Ao, L. Wang, and J. Shan, “Point cloud classification by fusing supervoxel segmentation with multi-scale features,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII-2/W13, 919–925 (2019).
[Crossref]

Wang, M.

M. Wang and Y. H. Tseng, “Incremental segmentation of lidar point clouds with an octree-structured voxel space,” Photogramm. Rec. 26, 32–57 (2011).
[Crossref]

Weinberger, K. Q.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 4700–4708.

Werlberger, M.

M. Werlberger, T. Pock, M. Unger, and H. Bischof, “A variational model for interactive shape prior segmentation and real-time tracking,” in International Conference on Scale Space and Variational Methods in Computer Vision (Springer, 2009), pp. 200–211.

Worgotter, F.

J. Papon, A. Abramov, M. Schoeler, and F. Worgotter, “Voxel cloud connectivity segmentation-supervoxels for point clouds,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2027–2034.

S. Christoph Stein, M. Schoeler, J. Papon, and F. Worgotter, “Object partitioning using local convexity,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014), pp. 304–311.

Wörgötter, F.

S. C. Stein, F. Wörgötter, M. Schoeler, J. Papon, and T. Kulvicius, “Convexity based object partitioning for robot applications,” in IEEE International Conference on Robotics and Automation (ICRA) (2014), pp. 3213–3220.

Xiang, Y.

C. Xie, Y. Xiang, A. Mousavian, and D. Fox, “The best of both modes: separately leveraging RGB and depth for unseen object instance segmentation,” in Conference on Robot Learning (PMLR) (2020), pp. 1369–1378.

Y. Xiang, C. Xie, A. Mousavian, and D. Fox, “Learning RGB-D feature embeddings for unseen object instance segmentation,” arXiv:2007.15157 (2020).

Xie, C.

Y. Xiang, C. Xie, A. Mousavian, and D. Fox, “Learning RGB-D feature embeddings for unseen object instance segmentation,” arXiv:2007.15157 (2020).

C. Xie, Y. Xiang, A. Mousavian, and D. Fox, “The best of both modes: separately leveraging RGB and depth for unseen object instance segmentation,” in Conference on Robot Learning (PMLR) (2020), pp. 1369–1378.

Xie, Y.

Y. Xie, J. Tian, and X. Zhu, “Linking points with labels in 3D: a review of point cloud semantic segmentation,” IEEE Geosci. Remote Sens. Mag. (2020).
[Crossref]

Xu, Y.

Y. Xu, L. Hoegner, S. Tuttas, and U. Stilla, “Voxel-and graph-based point cloud segmentation of 3D scenes using perceptual grouping laws,” ISPRS Ann. Photogram. Remote Sens. Spatial Inf. Sci. IV-1/W1, 43–50 (2017).
[Crossref]

Yang, B.

B. Yang, Z. Dong, G. Zhao, and W. Dai, “Hierarchical extraction of urban objects from mobile laser scanning data,” ISPRS J. Photogramm. Remote Sens. 99, 45–57 (2015).
[Crossref]

Zabih, R.

Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 1222–1239 (2001).
[Crossref]

Zhao, G.

B. Yang, Z. Dong, G. Zhao, and W. Dai, “Hierarchical extraction of urban objects from mobile laser scanning data,” ISPRS J. Photogramm. Remote Sens. 99, 45–57 (2015).
[Crossref]

Zhou, Q.-Y.

S. Choi, Q.-Y. Zhou, and V. Koltun, “Robust reconstruction of indoor scenes,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 5556–5565.

Zhu, X.

Y. Xie, J. Tian, and X. Zhu, “Linking points with labels in 3D: a review of point cloud semantic segmentation,” IEEE Geosci. Remote Sens. Mag. (2020).
[Crossref]

Zillich, M.

T. Mörwald, A. Richtsfeld, J. Prankl, M. Zillich, and M. Vincze, “Geometric data abstraction using B-splines for range image segmentation,” in International Conference on Robotics and Automation (ICRA) (2013), pp. 148–153.

A. Richtsfeld, T. Mörwald, J. Prankl, M. Zillich, and M. Vincze, “Segmentation of unknown objects in indoor environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2012), pp. 4791–4796.

E. Potapova, A. Richtsfeld, M. Zillich, and M. Vincze, “Incremental attention-driven object segmentation,” in IEEE-RAS International Conference on Humanoid Robots (2014), pp. 252–258.

ACM Trans. Graph. (1)

C. Rother, V. Kolmogorov, and A. Blake, “‘GrabCut’: interactive foreground extraction using iterated graph cuts,” ACM Trans. Graph. 23, 309–314 (2004).
[Crossref]

Automat. Constr. (1)

T. Czerniawski, B. Sankaran, M. Nahangi, C. Haas, and F. Leite, “6D DBSCAN-based segmentation of building point clouds for planar object classification,” Automat. Constr. 88, 44–58 (2018).
[Crossref]

Comput. Graph. Forum (1)

R. Schnabel, R. Wahl, and R. Klein, “Efficient RANSAC for point-cloud shape detection,” Comput. Graph. Forum 26, 214–226 (2007).
[Crossref]

Comput. Vis. Image Underst. (1)

Y. Ben-Shabat, T. Avraham, M. Lindenbaum, and A. Fischer, “Graph based over-segmentation methods for 3D point clouds,” Comput. Vis. Image Underst. 174, 12–23 (2018).
[Crossref]

Expert Syst. Appl. (1)

A. Saglam, H. B. Makineci, N. A. Baykan, and Ö. K. Baykan, “Boundary constrained voxel segmentation for 3D point clouds using local geometric differences,” Expert Syst. Appl. 157, 113439 (2020).
[Crossref]

IEEE Trans. Robot. (1)

U. Asif, M. Bennamoun, and F. A. Sohel, “RGB-D object recognition and grasp detection using hierarchical cascaded forests,” IEEE Trans. Robot. 33, 547–564 (2017).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 1222–1239 (2001).
[Crossref]

Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. (4)

T. Rabbani, F. Van Den Heuvel, and G. Vosselman, “Segmentation of point clouds using smoothness constraint,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 36, 248–253 (2006).

S. Ural and J. Shan, “Min-cut based segmentation of airborne LiDAR point clouds,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XXXIX-B3, 167–172 (2012).
[Crossref]

A. Dutta, J. Engels, and M. Hahn, “A distance-weighted graph-cut method for the segmentation of laser point clouds,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XL-3, 81–88 (2014).
[Crossref]

W. Ao, L. Wang, and J. Shan, “Point cloud classification by fusing supervoxel segmentation with multi-scale features,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII-2/W13, 919–925 (2019).
[Crossref]

Int. J. Comput. Vis. (4)

P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient graph-based image segmentation,” Int. J. Comput. Vis. 59, 167–181 (2004).
[Crossref]

Y. Boykov and G. Funka-Lea, “Graph cuts and efficient ND image segmentation,” Int. J. Comput. Vis. 70, 109–131 (2006).
[Crossref]

H. Isack and Y. Boykov, “Energy-based geometric multi-model fitting,” Int. J. Comput. Vis. 97, 123–147 (2012).
[Crossref]

A. Delong, A. Osokin, H. N. Isack, and Y. Boykov, “Fast approximate energy minimization with label costs,” Int. J. Comput. Vis. 96, 1–27 (2012).
[Crossref]

Int. J. Robot. Res. (2)

S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” Int. J. Robot. Res. 37, 421–436 (2018).
[Crossref]

B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, “Yale-CMU-Berkeley dataset for robotic manipulation research,” Int. J. Robot. Res. 36, 261–268 (2017).
[Crossref]

ISPRS Ann. Photogram. Remote Sens. Spatial Inf. Sci. (1)

Y. Xu, L. Hoegner, S. Tuttas, and U. Stilla, “Voxel-and graph-based point cloud segmentation of 3D scenes using perceptual grouping laws,” ISPRS Ann. Photogram. Remote Sens. Spatial Inf. Sci. IV-1/W1, 43–50 (2017).
[Crossref]

ISPRS J. Photogramm. Remote Sens. (3)

A.-V. Vo, L. Truong-Hong, D. F. Laefer, and M. Bertolotto, “Octree-based region growing for point cloud segmentation,” ISPRS J. Photogramm. Remote Sens. 104, 88–100 (2015).
[Crossref]

B. Yang, Z. Dong, G. Zhao, and W. Dai, “Hierarchical extraction of urban objects from mobile laser scanning data,” ISPRS J. Photogramm. Remote Sens. 99, 45–57 (2015).
[Crossref]

G. Vosselman, M. Coenen, and F. Rottensteiner, “Contextual segment-based classification of airborne laser scanner data,” ISPRS J. Photogramm. Remote Sens. 128, 354–371 (2017).

Photogramm. Rec. (1)

M. Wang and Y. H. Tseng, “Incremental segmentation of lidar point clouds with an octree-structured voxel space,” Photogramm. Rec. 26, 32–57 (2011).
[Crossref]

Remote Sens. (1)

M. Awrangjeb and C. S. Fraser, “Automatic segmentation of raw LiDAR data for extraction of building roofs,” Remote Sens. 6, 3716–3751 (2014).
[Crossref]

Sensors (1)

C. Kim, A. Habib, M. Pyeon, G.-R. Kwon, J. Jung, and J. Heo, “Segmentation of planar surfaces from laser scanning data using the magnitude of normal position vector for adaptive neighborhoods,” Sensors 16, 140 (2016).
[Crossref]

Other (42)

T. T. Pham, I. Reid, Y. Latif, and S. Gould, “Hierarchical higher-order regression forest fields: an application to 3D indoor scene labelling,” in IEEE International Conference on Computer Vision (ICCV) (2015), pp. 2246–2254.

L. Landrieu and M. Simonovsky, “Large-scale point cloud semantic segmentation with superpoint graphs,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 4558–4567.

S. Kumra and C. Kanan, “Robotic grasp detection using deep convolutional neural networks,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2017), pp. 769–776.

A. Neubeck and L. Van Gool, “Efficient non-maximum suppression,” in 18th International Conference on Pattern Recognition (ICPR) (2006), pp. 850–855.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 4700–4708.

A. Milan, T. Pham, K. Vijay, D. Morrison, A. W. Tow, L. Liu, J. Erskine, R. Grinover, A. Gurman, and T. Hunn, “Semantic segmentation from limited training data,” in International Conference on Robotics and Automation (ICRA) (2018), pp. 1908–1915.

D. Neven, B. D. Brabandere, M. Proesmans, and L. V. Gool, “Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 8837–8845.

D. Novotny, S. Albanie, D. Larlus, and A. Vedaldi, “Semi-convolutional operators for instance segmentation,” in European Conference on Computer Vision (ECCV) (2018), pp. 86–102.

P. O. Pinheiro, R. Collobert, and P. Dollár, “Learning to segment object candidates,” in Advances in Neural Information Processing Systems (2015), pp. 1990–1998.

P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár, “Learning to refine object segments,” in European Conference on Computer Vision (ECCV) (2016), pp. 75–91.

N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor segmentation and support inference from RGBD images,” in European Conference on Computer Vision (ECCV) (2012), pp. 746–760.

S. Choi, Q.-Y. Zhou, and V. Koltun, “Robust reconstruction of indoor scenes,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 5556–5565.

F. Verdoja, D. Thomas, and A. Sugimoto, “Fast 3D point cloud segmentation using supervoxels with geometry and color for 3D scene understanding,” in International Conference on Multimedia and Expo (ICME) (2017), pp. 1285–1290.

T. Pham, T. T. Do, N. Sunderhauf, and I. Reid, “SceneCut: joint geometric and object segmentation for indoor scenes,” in IEEE International Conference on Robotics and Automation (ICRA) (2018), pp. 3213–3220.

E. Potapova, A. Richtsfeld, M. Zillich, and M. Vincze, “Incremental attention-driven object segmentation,” in IEEE-RAS International Conference on Humanoid Robots (2014), pp. 252–258.

M. R. Loghmani, B. Caputo, and M. Vincze, “Recognizing objects in-the-wild: where do we stand?” in International Conference on Robotics and Automation (ICRA) (2018), pp. 2170–2177.

C. Xie, Y. Xiang, A. Mousavian, and D. Fox, “The best of both modes: separately leveraging RGB and depth for unseen object instance segmentation,” in Conference on Robot Learning (PMLR) (2020), pp. 1369–1378.

Y. Xiang, C. Xie, A. Mousavian, and D. Fox, “Learning RGB-D feature embeddings for unseen object instance segmentation,” arXiv:2007.15157 (2020).

L. Shao, Y. Tian, and J. Bohg, “ClusterNet: 3D instance segmentation in RGB-D images,” arXiv:1807.08894 (2018).

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 3431–3440.

K. Lai, L. Bo, X. Ren, and D. Fox, “A large-scale hierarchical multi-view RGB-D object dataset,” in International Conference on Robotics and Automation (ICRA) (2011), pp. 1817–1824.

M. Suchi, T. Patten, D. Fischinger, and M. Vincze, “EasyLabel: a semi-automatic pixel-wise object annotation tool for creating robotic RGB-D datasets,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 6678–6684.

A. Aldoma, T. Mörwald, J. Prankl, and M. Vincze, “Segmentation of depth data in piece-wise smooth parametric surfaces,” in Computer Vision Winter Workshop (CVWW) (2015).

K. L. Boyer and S. Sarkar, “Guest editors’ introduction: perceptual organization in computer vision: status, challenges, and potential,” in Computer Vision and Image Understanding (Academic, 1999), vol. 76, pp. 1–5.

S. Vicente, V. Kolmogorov, and C. Rother, “Joint optimization of segmentation and appearance models,” in 12th International Conference on Computer Vision (ICCV) (2009), pp. 755–762.

M. Werlberger, T. Pock, M. Unger, and H. Bischof, “A variational model for interactive shape prior segmentation and real-time tracking,” in International Conference on Scale Space and Variational Methods in Computer Vision (Springer, 2009), pp. 200–211.

E. Strekalovskiy and D. Cremers, “Real-time minimization of the piecewise smooth Mumford-Shah functional,” in European Conference on Computer Vision (ECCV) (2014), pp. 127–141.

G. Kootstra, N. Bergström, and D. Kragic, “Fast and automatic detection and segmentation of unknown objects,” in 10th IEEE-RAS International Conference on Humanoid Robots (2010), pp. 442–447.

A. Ückermann, R. Haschke, and H. Ritter, “Real-time 3D segmentation of cluttered scenes for robot grasping,” in 12th IEEE-RAS International Conference on Humanoid Robots (2012), pp. 198–203.

A. Richtsfeld, T. Mörwald, J. Prankl, M. Zillich, and M. Vincze, “Segmentation of unknown objects in indoor environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2012), pp. 4791–4796.

T. Mörwald, A. Richtsfeld, J. Prankl, M. Zillich, and M. Vincze, “Geometric data abstraction using B-splines for range image segmentation,” in International Conference on Robotics and Automation (ICRA) (2013), pp. 148–153.

Y. Xie, J. Tian, and X. Zhu, “Linking points with labels in 3D: a review of point cloud semantic segmentation,” IEEE Geosci. Remote Sens. Mag. (2020).
[Crossref]

A. Golovinskiy and T. Funkhouser, “Min-cut based segmentation of point clouds,” in 12th International Conference on Computer Vision (ICCV) (2009), pp. 39–46.

S. Christoph Stein, M. Schoeler, J. Papon, and F. Worgotter, “Object partitioning using local convexity,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014), pp. 304–311.

F. Verdoja, D. Thomas, and A. Sugimoto, “Fast 3D point cloud segmentation using supervoxels with geometry and color for 3D scene understanding,” in International Conference on Multimedia and Expo (ICME) (2017), pp. 1285–1290.

M. Johnson-Roberson, J. Bohg, M. Björkman, and D. Kragic, “Attention-based active 3D point cloud segmentation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2010), pp. 1165–1170.

R. B. Rusu, Z. C. Marton, N. Blodow, A. Holzbach, and M. Beetz, “Model-based and learned semantic object labeling in 3D point cloud maps of kitchen environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (2009), pp. 3601–3608.

J. Papon, A. Abramov, M. Schoeler, and F. Worgotter, “Voxel cloud connectivity segmentation-supervoxels for point clouds,” in IEEE Conference on Computer Vision and Pattern Recognition (2013), pp. 2027–2034.

M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler, and K. Goldberg, “Segmenting unknown 3D objects from real depth images using mask R-CNN trained on synthetic data,” in International Conference on Robotics and Automation (ICRA) (2019), pp. 7283–7290.

A. Nguyen and B. Le, “3D point cloud segmentation: a survey,” in 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM) (2013), pp. 225–230.

S. C. Stein, F. Wörgötter, M. Schoeler, J. Papon, and T. Kulvicius, “Convexity based object partitioning for robot applications,” in IEEE International Conference on Robotics and Automation (ICRA) (2014), pp. 3213–3220.

Y. Ioannou, B. Taati, R. Harrap, and M. Greenspan, “Difference of normals as a multi-scale operator in unorganized point clouds,” in 2nd International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission (2012), pp. 501–508.



Figures (9)

Fig. 1. Framework diagram of the proposed segmentation method.
Fig. 2. Definition of the local adjacency area for a patch; marks of the same color have the same geometric attributes.
Fig. 3. Graph-based scene segmentation. Level 0 is the fully connected local affinity graph; Level 1 shows the subgraphs generated after the initial segmentation using local area linkage; Level 2 shows the segmentation result after refinement.
Fig. 4. Qualitative results of segmentation methods on the OCID dataset.
Fig. 5. Visual segmentation results: (a) FPCS; (b) ours.
Fig. 6. Visual segmentation results: (a) LCCP; (b) ours.
Fig. 7. Precision-recall curves.
Fig. 8. Execution time comparison.
Fig. 9. Results of three experiments using an increasing percentage p of outlier samples.

Tables (3)

Table 1. Characteristics of the Object Segmentation Implementations Used in the Evaluation

Table 2. Evaluation of Segmentation Results on the OCID Dataset

Table 3. Results for the Noise Handling Experiment

Equations (18)


\[
\begin{cases}
C_{3\times 3} = \dfrac{1}{k}\sum_{i=1}^{k}(p_i - \bar{p})(p_i - \bar{p})^{T}\\[4pt]
\bar{p} = \dfrac{1}{k}\sum_{i=1}^{k} p_i,
\end{cases}
\]

\[
\begin{cases}
\alpha_{1D} = \dfrac{\lambda_1(C) - \lambda_2(C)}{\lambda_1(C)}\\[4pt]
\alpha_{2D} = \dfrac{\lambda_2(C) - \lambda_3(C)}{\lambda_1(C)}\\[4pt]
\alpha_{3D} = \dfrac{\lambda_3(C)}{\lambda_1(C)},
\end{cases}
\]

\[
P_f =
\begin{cases}
\text{planar\_patch}, & \text{if } (\alpha_{2D} > \alpha_{1D}) \,\&\, (\alpha_{2D} > \alpha_{3D})\\
\text{nonplanar\_patch}, & \text{otherwise},
\end{cases}
\]

\[
M_{ij}^{\pm} =
\begin{cases}
M_{ij}^{+}, & \text{if } P_{f_i} = P_{f_j} \,\&\, \mathrm{convex}(v_i, v_j)\\
M_{ij}^{-}, & \text{otherwise},
\end{cases}
\]

\[
\begin{cases}
\bar{p}_h = \arg\min_{p_h} \sum_{c_i \in I} \mathrm{Dist}^2(c_i, p_h)\\[4pt]
\mathrm{Dist}(c_i, p_h) = \dfrac{|a_h x_i + b_h y_i + c_h z_i + d_h|}{\sqrt{a_h^2 + b_h^2 + c_h^2}},
\end{cases}
\]

\[
T = \arg\min_{p \in P_h} E(p),
\]

\[
E(p) = \underbrace{\sum_{c_i \in v_i,\, p_c \in P_h} \mathrm{Dist}(c_i, p_c)}_{\text{data cost}}
+ \underbrace{\sum_{v_i, v_j \in V} V(v_i, v_j)}_{\text{smooth cost}}
+ \underbrace{\sum_{v_i \in V} \Psi_{v_i}}_{\text{feature cost}},
\]

\[
\mathrm{Dist}(c_i, p_c) =
\begin{cases}
\dfrac{|a_c x_i + b_c y_i + c_c z_i + d_c|}{\sqrt{a_c^2 + b_c^2 + c_c^2}}, & l_c \neq 0\\[4pt]
2\rho, & l_c = 0,
\end{cases}
\]

\[
V(v_i, v_j) =
\begin{cases}
0, & \text{if } l_{v_i} = l_{v_j}\\
1, & \text{if } l_{v_i} \neq l_{v_j},
\end{cases}
\]

\[
\Psi_{v_i} = \kappa\, \alpha_{2D}^{v_i}.
\]
\[
O_{v_i} = \max_{s_j} \left\{ \frac{|g_i \cap s_j|}{|g_i \cup s_j|} \right\}.
\]

\[
WOv = \frac{1}{\sum_i |g_i|} \sum_i |g_i|\, O_{v_i}.
\]

\[
TP = \frac{1}{M} \sum_i \frac{|TP_i|}{|g_i|},
\]

\[
FP = \frac{1}{M} \sum_i \frac{|FP_i|}{|s_i|},
\]

\[
FN = \frac{1}{M} \sum_i \frac{|FN_i|}{|g_i|}.
\]

\[
\mathrm{precision} = \frac{|TP|}{|TP| + |FP|},
\]

\[
\mathrm{recall} = \frac{|TP|}{|TP| + |FN|},
\]

\[
F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}.
\]
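The evaluation equations above (per-object overlap, weighted overlap, and precision/recall/F1) can be computed from a pair of integer label images. This is a hedged sketch: pairing each ground-truth segment with its best-IoU predicted segment is an assumption about the matching step, and the function name is illustrative.

```python
import numpy as np

def segmentation_metrics(gt, pred):
    """Weighted overlap (WOv) and precision/recall/F1 from label images.

    gt, pred: integer label arrays of the same shape; label 0 = background.
    Each ground-truth segment g_i is matched to the predicted segment s_j
    maximizing IoU, per the overlap equation above.
    """
    gt_ids = [g for g in np.unique(gt) if g != 0]
    total, weighted = 0, 0.0
    tp = fp = fn = 0.0
    for g in gt_ids:
        g_mask = gt == g
        best_iou, best_s = 0.0, None
        for s in np.unique(pred):
            if s == 0:
                continue
            s_mask = pred == s
            inter = np.logical_and(g_mask, s_mask).sum()
            union = np.logical_or(g_mask, s_mask).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_iou, best_s = iou, s
        weighted += g_mask.sum() * best_iou      # |g_i| * O_{v_i}
        total += g_mask.sum()
        if best_s is not None:                   # per-object TP/FP/FN rates
            s_mask = pred == best_s
            tp_i = np.logical_and(g_mask, s_mask).sum()
            tp += tp_i / g_mask.sum()
            fp += (s_mask.sum() - tp_i) / s_mask.sum()
            fn += (g_mask.sum() - tp_i) / g_mask.sum()
    M = len(gt_ids)
    wov = weighted / total
    tp, fp, fn = tp / M, fp / M, fn / M
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return wov, precision, recall, f1

# A perfect prediction scores WOv = 1 and F1 = 1:
gt = np.array([[1, 1, 0], [2, 2, 0]])
wov, p, r, f1 = segmentation_metrics(gt, gt)
print(wov == 1.0 and f1 == 1.0)  # expected: True
```

Over- or under-segmenting a scene lowers the per-object IoU of the matched segment, which drags down both WOv and F1 exactly as Table 2 reports for the compared methods.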
