miun.se Publications
1 - 2 of 2
  • 1.
    Vilar, Cristian; Krug, Silvia; Thörnberg, Benny (all: Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design)
    Rotational Invariant Object Recognition for Robotic Vision. 2019. In: ICACR 2019 Proceedings of the 2019 3rd International Conference on Automation, Control and Robots, ACM Digital Library, 2019, p. 1-6. Conference paper (Refereed)
    Abstract [en]

    Depth cameras have significantly enhanced environment perception for robotic applications. They allow true distances to be measured and thus enable 3D measurement of the robot's surroundings. To enable robust robot vision, object recognition has to handle rotated data, because objects can be viewed from different dynamic perspectives while the robot is moving. Therefore, the 3D descriptors used for object recognition in robotic applications have to be rotation invariant and implementable on embedded systems with limited memory and computing resources. With the popularization of depth cameras, the Histogram of Gradients (HOG) descriptor has been extended to also recognize 3D volumetric objects (3DVHOG). Unfortunately, neither version is rotation invariant. There are different methods to achieve rotation invariance for 3DVHOG, but they significantly increase the computational cost of the overall data processing, which makes them infeasible to implement on a low-cost processor for real-time operation. In this paper, we propose an object pose normalization method that achieves 3DVHOG rotation invariance while reducing the number of processing operations as much as possible. Our method is based on Principal Component Analysis (PCA) normalization. We tested our method on the Princeton ModelNet10 dataset.
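    The abstract names PCA-based pose normalization but the listing carries no implementation. The sketch below illustrates the general idea only, assuming the object is available as an (N, 3) point cloud; all function and variable names are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def pca_normalize_pose(points: np.ndarray) -> np.ndarray:
        """Align a 3D point cloud to its principal axes.

        points: (N, 3) array of object surface/voxel coordinates.
        Returns the zero-centered cloud expressed in its principal-axis
        frame, so a subsequent descriptor (e.g. 3DVHOG) sees the object
        in a canonical pose regardless of its original orientation.
        """
        centered = points - points.mean(axis=0)      # remove translation
        cov = np.cov(centered, rowvar=False)         # 3x3 covariance
        eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
        # Sort axes by decreasing variance so the first axis is the
        # dominant object direction.
        order = np.argsort(eigvals)[::-1]
        rotation = eigvecs[:, order]
        # Enforce a right-handed frame to avoid reflections.
        if np.linalg.det(rotation) < 0:
            rotation[:, -1] *= -1
        return centered @ rotation

    # Example: a randomly rotated box normalizes to an axis-aligned pose.
    rng = np.random.default_rng(0)
    box = rng.uniform([-2, -1, -0.5], [2, 1, 0.5], size=(1000, 3))
    theta = 0.7
    rot_z = np.array([[np.cos(theta), -np.sin(theta), 0],
                      [np.sin(theta),  np.cos(theta), 0],
                      [0, 0, 1]])
    normalized = pca_normalize_pose(box @ rot_z.T)
    ```

    Note that plain PCA leaves the sign of each principal axis ambiguous; the paper's normalization method presumably resolves such ambiguities, which this sketch only partially handles by enforcing a right-handed frame.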

  • 2.
    Vilar, Cristian; Thörnberg, Benny; Krug, Silvia (all: Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design)
    Evaluation of embedded camera systems for autonomous wheelchairs. 2019. In: VEHITS 2019 - Proceedings of the 5th International Conference on Vehicle Technology and Intelligent Transport Systems, SciTePress, 2019, p. 76-85. Conference paper (Refereed)
    Abstract [en]

    Autonomously driving Power Wheelchairs (PWCs) are valuable tools for enhancing their users' quality of life. In order to enable truly autonomous PWCs, camera systems are essential. Image processing enables the development of applications for both autonomous driving and obstacle avoidance. This paper explores the challenges that arise when selecting a suitable embedded camera system for these applications. Our analysis is based on a comparison of two well-known camera principles, Stereo Cameras (STCs) and Time-of-Flight (ToF) cameras, using the standard deviation of the ground plane under various lighting conditions as a key quality measure. In addition, we consider other metrics related to both the image processing task and the embedded system constraints. We believe that this assessment is valuable when choosing between STC and ToF cameras for PWCs.
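    The abstract names the standard deviation of the ground plane as its key quality measure but does not spell out the computation. A plausible minimal version, assuming floor points have already been segmented from one STC or ToF depth frame, is to fit a plane by least squares and take the standard deviation of the perpendicular residuals; the names below are illustrative, not from the paper.

    ```python
    import numpy as np

    def ground_plane_std(points: np.ndarray) -> float:
        """Standard deviation of depth points around a fitted ground plane.

        points: (N, 3) array of 3D points assumed to belong to the floor.
        Fits z = a*x + b*y + c by least squares and returns the standard
        deviation of the out-of-plane residuals, a flatness/noise metric
        of the kind the abstract uses to compare cameras.
        """
        A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
        coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        residuals = points[:, 2] - A @ coeffs
        # Divide by the plane-normal length so residuals are true
        # perpendicular distances, not just vertical offsets.
        normal_norm = np.sqrt(coeffs[0] ** 2 + coeffs[1] ** 2 + 1.0)
        return float(np.std(residuals / normal_norm))

    # Example: a slightly tilted floor patch with ~5 mm synthetic noise.
    rng = np.random.default_rng(1)
    xy = rng.uniform(-1.0, 1.0, size=(5000, 2))
    z = 0.05 * xy[:, 0] - 0.02 * xy[:, 1] + rng.normal(0, 0.005, 5000)
    floor = np.column_stack([xy, z])
    print(f"ground-plane std: {ground_plane_std(floor) * 1000:.1f} mm")
    ```

    A lower value indicates a flatter, less noisy depth reconstruction of the floor, which is why it serves as a direct proxy for depth-sensing quality under a given lighting condition.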
