
Robustifying semantic cognition of traversability across wearable RGB-depth cameras


Abstract

Semantic segmentation represents a promising means to unify different detection tasks, especially pixel-wise traversability perception, which is the fundamental enabler in robotic vision systems aiding upper-level navigational applications. However, major research efforts are devoted to earning marginal accuracy increments on semantic segmentation benchmarks, without ensuring the robustness of real-time segmenters to be deployed in assistive cognition systems for the visually impaired. In this paper, we conduct a comparative study across four perception systems, including a pair of commercial smart glasses, a customized wearable prototype, and two portable red–green–blue-depth (RGB-D) cameras that are being integrated into the next generation of navigation assistance devices. More concretely, we analyze the gap between the concepts of “accuracy” and “robustness” in critical traversability-related semantic scene understanding. A cluster of efficient deep architectures is proposed, built using spatial factorizations, hierarchical dilations, and pyramidal representations. Based on these architectures, this research demonstrates the augmented robustness of semantic traversable-area parsing against variations of environmental conditions in diverse RGB-D observations, and against sensorial factors such as illumination, imaging quality, field of view, and detectable depth range.
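
To illustrate the kind of building block the abstract refers to, the following is a minimal, hypothetical sketch (not the authors' released code) of a residual block combining spatial factorization with hierarchical dilations, in the spirit of efficient real-time segmenters. It assumes a PyTorch-style implementation; names such as FactorizedDilatedBlock are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's implementation) of a residual block
# that combines spatial factorization (3x1 followed by 1x3 convolutions) with a
# configurable dilation rate; stacking blocks with growing rates approximates a
# pyramidal, multi-scale context representation.
import torch
import torch.nn as nn


class FactorizedDilatedBlock(nn.Module):
    """Residual block with factorized (asymmetric) convolutions and dilation."""

    def __init__(self, channels: int, dilation: int = 1, dropout: float = 0.1):
        super().__init__()
        self.conv = nn.Sequential(
            # Spatial factorization: a 3x3 convolution is split into 3x1 and 1x3.
            nn.Conv2d(channels, channels, (3, 1), padding=(1, 0), bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, 1), bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            # Dilated factorized pair enlarges the receptive field cheaply.
            nn.Conv2d(channels, channels, (3, 1),
                      padding=(dilation, 0), dilation=(dilation, 1), bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3),
                      padding=(0, dilation), dilation=(1, dilation), bias=False),
            nn.BatchNorm2d(channels),
            nn.Dropout2d(dropout),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection keeps the deep stack easy to optimize.
        return self.act(x + self.conv(x))


# Hierarchical dilations: increasing rates (1, 2, 4, 8) across stacked blocks.
encoder_tail = nn.Sequential(*[FactorizedDilatedBlock(128, d) for d in (1, 2, 4, 8)])
x = torch.randn(1, 128, 60, 80)   # feature map from an encoder stem (illustrative size)
print(encoder_tail(x).shape)      # torch.Size([1, 128, 60, 80])
```

Factorizing each 3x3 convolution into a 3x1/1x3 pair reduces parameters and multiply-adds, which is the usual motivation for such blocks in real-time segmentation networks.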

© 2019 Optical Society of America


