1 - 2 of 2
  • 1.
    Vilar, Cristian
    Mittuniversitetet, Fakulteten för naturvetenskap, teknik och medier, Institutionen för elektronikkonstruktion.
    Krug, Silvia
    Mittuniversitetet, Fakulteten för naturvetenskap, teknik och medier, Institutionen för elektronikkonstruktion.
    Thörnberg, Benny
    Mittuniversitetet, Fakulteten för naturvetenskap, teknik och medier, Institutionen för elektronikkonstruktion.
    Rotational Invariant Object Recognition for Robotic Vision (2019). In: ICACR 2019 Proceedings of the 2019 3rd International Conference on Automation, Control and Robots, ACM Digital Library, 2019, pp. 1-6. Conference paper (Refereed)
    Abstract [en]

    Depth cameras have significantly enhanced environment perception for robotic applications. They make it possible to measure true distances and thus enable a 3D measurement of the robot's surroundings. To enable robust robot vision, object recognition has to handle rotated data, because objects can be viewed from different dynamic perspectives while the robot is moving. Therefore, the 3D descriptors used for object recognition in robotic applications have to be rotation invariant and implementable on embedded systems with limited memory and computing resources. With the popularization of depth cameras, the Histogram of Gradients (HOG) descriptor has been extended to also recognize 3D volumetric objects (3DVHOG). Unfortunately, neither version is rotation invariant. There are different methods to achieve rotation invariance for 3DVHOG, but they significantly increase the computational cost of the overall data processing, which makes them infeasible to implement on a low-cost processor for real-time operation. In this paper, we propose an object pose normalization method that achieves 3DVHOG rotation invariance while reducing the number of processing operations as much as possible. Our method is based on Principal Component Analysis (PCA) normalization. We tested our method using the Princeton ModelNet10 dataset. (See the sketch of the PCA normalization step after this list.)

  • 2.
    Vilar, Cristian
    Mittuniversitetet, Fakulteten för naturvetenskap, teknik och medier, Institutionen för elektronikkonstruktion.
    Thörnberg, Benny
    Mittuniversitetet, Fakulteten för naturvetenskap, teknik och medier, Institutionen för elektronikkonstruktion.
    Krug, Silvia
    Mittuniversitetet, Fakulteten för naturvetenskap, teknik och medier, Institutionen för elektronikkonstruktion.
    Evaluation of embedded camera systems for autonomous wheelchairs (2019). In: VEHITS 2019 - Proceedings of the 5th International Conference on Vehicle Technology and Intelligent Transport Systems, SciTePress, 2019, pp. 76-85. Conference paper (Refereed)
    Abstract [en]

    Autonomously driving Power Wheelchairs (PWCs) are valuable tools for enhancing their users' quality of life. In order to enable truly autonomous PWCs, camera systems are essential. Image processing enables the development of applications for both autonomous driving and obstacle avoidance. This paper explores the challenges that arise when selecting a suitable embedded camera system for these applications. Our analysis is based on a comparison of two well-known camera principles, Stereo-Cameras (STCs) and Time-of-Flight (ToF) cameras, using the standard deviation of the ground plane under various lighting conditions as a key quality measure. In addition, we also consider other metrics related to both the image processing task and the embedded system constraints. We believe that this assessment is valuable when choosing between STC and ToF cameras for PWCs. (See the sketch of the ground-plane standard-deviation metric after this list.)

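The pose normalization described in the first entry can be illustrated with a short sketch: before computing a descriptor such as 3DVHOG, the point cloud is rotated into its principal-axis frame found by PCA, so rotated views of the same object end up in approximately the same canonical orientation. The following is a minimal NumPy sketch under the assumption that the input is an (N, 3) point cloud; the function name pca_pose_normalize and the sign-disambiguation heuristic are hypothetical and not taken from the paper.

# Illustrative sketch (not the authors' implementation): PCA-based pose
# normalization of a 3D point cloud, as a preprocessing step before
# computing a rotation-invariant descriptor.
import numpy as np

def pca_pose_normalize(points: np.ndarray) -> np.ndarray:
    """Rotate an (N, 3) point cloud into its principal-axis frame.

    After this step, a rotated copy of the same object maps to
    approximately the same canonical orientation, so a descriptor
    computed on the normalized cloud becomes rotation invariant.
    """
    # Center the cloud at its centroid.
    centered = points - points.mean(axis=0)

    # 3 x 3 covariance matrix of the coordinates.
    cov = np.cov(centered, rowvar=False)

    # Eigen-decomposition; the eigenvectors are the principal axes.
    eigvals, eigvecs = np.linalg.eigh(cov)

    # Sort axes by decreasing variance so the axis order is deterministic.
    order = np.argsort(eigvals)[::-1]
    axes = eigvecs[:, order]

    # Resolve the sign ambiguity of each eigenvector (hypothetical
    # heuristic) so mirrored solutions do not flip the normalized pose.
    for i in range(3):
        if np.sum(centered @ axes[:, i] > 0) < len(centered) / 2:
            axes[:, i] = -axes[:, i]

    # Project the centered points onto the principal axes.
    return centered @ axes

# Example: a randomly rotated copy of a cloud normalizes to the same pose.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([3.0, 2.0, 1.0])
theta = np.pi / 5
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
a = pca_pose_normalize(cloud)
b = pca_pose_normalize(cloud @ rot.T)
print(np.allclose(np.abs(a).mean(axis=0), np.abs(b).mean(axis=0), atol=1e-6))

The sign resolution matters because PCA axes are only defined up to sign; without it, mirrored solutions of the eigen-decomposition would map the same object to different canonical orientations.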
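The second entry's key quality measure, the standard deviation of the ground plane, can be sketched as follows: fit a plane to depth-camera points that are assumed to lie on flat ground and report the spread of their point-to-plane distances, so a noisier camera yields a larger value. The function ground_plane_std and the synthetic test data below are hypothetical illustrations, not the paper's evaluation code.

# Illustrative sketch: standard deviation of ground-plane points around a
# best-fit plane, used as a depth-noise metric for comparing cameras
# (e.g. stereo vs. time-of-flight) on the same scene.
import numpy as np

def ground_plane_std(points: np.ndarray) -> float:
    """Return the standard deviation of point-to-plane distances (in the
    same unit as the input, e.g. metres) for points assumed to lie on a
    flat ground plane.

    points: (N, 3) array of 3D points measured by the depth camera.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid

    # The plane normal is the direction of least variance: the right
    # singular vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]

    # Signed distance of every point to the best-fit plane.
    distances = centered @ normal
    return float(distances.std())

# Example: a synthetic ground plane with 5 mm of Gaussian depth noise.
rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, size=(2, 2000))
z = 0.005 * rng.normal(size=2000)           # noise around z = 0
plane_points = np.column_stack([x, y, z])
print(f"ground-plane std: {ground_plane_std(plane_points) * 1000:.2f} mm")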