Rotational Invariant Object Recognition for Robotic Vision
Vilar, Cristian; Krug, Silvia; Thörnberg, Benny
Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
2019 (English). In: ICACR 2019: Proceedings of the 2019 3rd International Conference on Automation, Control and Robots, ACM Digital Library, 2019, p. 1-6. Conference paper, published paper (refereed).
Abstract [en]

Depth cameras have significantly enhanced environment perception for robotic applications. They measure true distances and thus enable a 3D measurement of the robot's surroundings. To enable robust robot vision, object recognition has to handle rotated data, because objects are viewed from changing perspectives while the robot is moving. Therefore, the 3D descriptors used for object recognition in robotic applications have to be rotation invariant and implementable on embedded systems with limited memory and computing resources. With the popularization of depth cameras, the Histogram of Gradients (HOG) descriptor has been extended to also recognize 3D volumetric objects (3DVHOG). Unfortunately, neither version is rotation invariant. There are different methods to achieve rotation invariance for 3DVHOG, but they significantly increase the computational cost of the overall data processing. Hence, they are infeasible to implement on a low-cost processor for real-time operation. In this paper, we propose an object pose normalization method that achieves 3DVHOG rotation invariance while reducing the number of processing operations as much as possible. Our method is based on Principal Component Analysis (PCA) normalization. We tested our method using the Princeton Modelnet10 dataset.

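The PCA-based pose normalization named in the abstract can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation: the function name, the use of NumPy, and the assumption that the object is given as an (N, 3) point cloud are all choices made here for the example.

# Minimal sketch of PCA-based pose normalization for a 3D point cloud,
# used as a pre-processing step before computing a rotation-sensitive
# descriptor such as 3DVHOG. Illustrative only; not the paper's code.
import numpy as np

def pca_pose_normalize(points: np.ndarray) -> np.ndarray:
    """Rotate an (N, 3) point cloud into its principal-axis frame.

    The cloud is centered at its centroid and rotated so that the
    eigenvectors of its covariance matrix align with the coordinate
    axes (largest variance first). This removes the dependence on the
    viewing angle, which is what makes a downstream descriptor
    effectively rotation invariant.
    """
    centered = points - points.mean(axis=0)      # move centroid to the origin
    cov = np.cov(centered, rowvar=False)         # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    axes = eigvecs[:, np.argsort(eigvals)[::-1]] # sort axes by variance, descending
    if np.linalg.det(axes) < 0:                  # enforce a right-handed frame
        axes[:, -1] *= -1
    return centered @ axes                       # coordinates in the PCA frame

if __name__ == "__main__":
    # The normalized cloud is the same (up to axis sign flips) regardless of
    # an arbitrary rotation applied to the input.
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.3])
    theta = np.deg2rad(40.0)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    a = pca_pose_normalize(cloud)
    b = pca_pose_normalize(cloud @ rot.T)
    print(np.allclose(a.std(axis=0), b.std(axis=0)))  # True: pose removed

Note that plain PCA leaves a sign ambiguity on each principal axis; the paper's normalization addresses how such ambiguities are resolved, which this sketch does not attempt to reproduce.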
Place, publisher, year, edition, pages
ACM Digital Library, 2019. p. 1-6
Keywords [en]
3D Object Recognition, Histogram of Gradients, Princeton Modelnet10, Principal Component Analysis, Pose Normalization, Image Processing, Depth Camera
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-37973
DOI: 10.1145/3365265.3365273
Scopus ID: 2-s2.0-85076833711
ISBN: 978-1-4503-7288-6 (electronic)
OAI: oai:DiVA.org:miun-37973
DiVA, id: diva2:1377512
Conference
2019 3rd International Conference on Automation, Control and Robots, Prague, Czech Republic, 11-13 October, 2019
Available from: 2019-12-12. Created: 2019-12-12. Last updated: 2020-01-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Vilar, Cristian; Krug, Silvia; Thörnberg, Benny
