Thörnberg, Benny
Publications (10 of 56)
Amir, Y. M. & Thörnberg, B. (2017). High Precision Laser Scanning of Metallic Surfaces. International Journal of Optics, Article ID 4134205.
High Precision Laser Scanning of Metallic Surfaces
2017 (English). In: International Journal of Optics, ISSN 1687-9384, E-ISSN 1687-9392, article id 4134205. Article in journal (Refereed). Published.
Abstract [en]

Speckle noise, dynamic range of light intensity, and spurious reflections are major challenges when laser scanners are used for 3D surface acquisition. In this work, a series of image processing operations, that is, Spatial Compound Imaging, High Dynamic Range Extension, Gray Level Transformation, and Most Similar Nearest Neighbor, are proposed to overcome the challenges coming from the target surface. A prototype scanner for metallic surfaces is designed to explore combinations of these image processing operations. The main goal is to find the combination of operations that will lead to the highest possible robustness and measurement precision at the lowest possible computational load. Inspection of metallic tools, where the surface of the tool's edge must be measured at micrometer precision, is our test case. The precision of heights measured without the proposed image processing is first analyzed to be ±7.6 µm at a 68% confidence level. The best achieved height precision was ±4.2 µm. This improvement comes at the cost of 24 times longer processing time and five times longer scanning time. Dynamic range extension of the image capture improves robustness, since the number of saturated or underexposed pixels is substantially reduced. Using a high dynamic range (HDR) camera offers a compromise between processing time, robustness, and precision.
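
The precision figures quoted at a 68% confidence level correspond to one standard deviation of repeated height measurements. A minimal sketch of that computation (the measurement values and function name are hypothetical, not from the paper):

    import numpy as np

    def height_precision(heights_um):
        """Return the +/- precision at a 68% confidence level, i.e. one
        sample standard deviation of repeated height measurements (um)."""
        return np.std(heights_um, ddof=1)

    # Hypothetical repeated height measurements of one surface point, in um.
    heights = np.array([102.1, 98.4, 105.0, 95.9, 101.3, 99.7])
    print(f"precision: +/- {height_precision(heights):.1f} um at 68% confidence")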

National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-31363 (URN) 10.1155/2017/4134205 (DOI) 000405332500001 () 2-s2.0-85024488897 (Scopus ID)
Available from: 2017-08-10 Created: 2017-08-10 Last updated: 2017-11-29. Bibliographically approved.
Rydblom, S. & Thörnberg, B. (2016). Droplet Imaging Instrument - Metrology Instrument for Icing Condition Detection. In: 2016 IEEE International Conference on Imaging Systems and Techniques (IST). Paper presented at the IEEE International Conference on Imaging Systems and Techniques (IST) / IEEE International School on Imaging, October 4-6, 2016, Chania, Greece (pp. 66-71). IEEE, Article ID 7738200.
Droplet Imaging Instrument - Metrology Instrument for Icing Condition Detection
2016 (English). In: 2016 IEEE International Conference on Imaging Systems and Techniques (IST), IEEE, 2016, p. 66-71, article id 7738200. Conference paper, Published paper (Refereed).
Abstract [en]

An instrument for measuring water droplets is described and constructed. It is designed to measure the volume concentration and the size distribution of droplets in order to detect icing conditions in a natural fog. The instrument works by shadowgraph imaging, with a collimated blue LED as background illumination. We show how to use a reference object to obtain a calibration of the droplet size and the measurement volume. These properties are derived from a measurement of the object's shadow intensity and its edge second derivative. From the size of every measured droplet and its expected detection volume, a measure of the liquid water content (LWC) and the median volume diameter (MVD) can be estimated. The instrument can be used for continuous measurement in a remote weather-exposed location and is tested in a small environment chamber. We also describe this chamber and how we can change the LWC using an ultrasonic fog generator and a fan.
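
As a rough sketch of the LWC estimation described above, each droplet's water mass can be divided by the volume in which a droplet of its size is detectable, then summed. All names and numbers below are hypothetical, and the per-size detection volumes are taken as given rather than derived from shadow intensity as in the paper:

    import numpy as np

    def lwc_g_per_m3(diameters_um, detection_volumes_cm3):
        """Estimate liquid water content: sum each droplet's water mass
        divided by its expected detection volume (water density 1 g/cm^3)."""
        d_cm = np.asarray(diameters_um) * 1e-4             # um -> cm
        mass_g = (np.pi / 6.0) * d_cm**3                   # sphere volume x density
        conc = mass_g / np.asarray(detection_volumes_cm3)  # g/cm^3 per droplet
        return conc.sum() * 1e6                            # g/cm^3 -> g/m^3

    # Hypothetical droplet diameters (um) and detection volumes (cm^3).
    print(lwc_g_per_m3([12.0, 18.5, 25.0], [0.04, 0.07, 0.11]))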

Place, publisher, year, edition, pages
IEEE, 2016
Series
IEEE International Conference on Imaging Systems and Techniques, ISSN 2471-6162
Keywords
atmospheric measurements, fog chamber, image analysis, liquid water content, machine vision
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-29765 (URN) 10.1109/IST.2016.7738200 (DOI) 000388735200012 () 2-s2.0-85004010273 (Scopus ID) STC (Local ID) 978-1-5090-1817-8 (ISBN) STC (Archive number) STC (OAI)
Conference
IEEE International Conference on Imaging Systems and Techniques (IST) / IEEE International School on Imaging, October 4-6, 2016, Chania, Greece
Available from: 2016-12-22 Created: 2016-12-22 Last updated: 2017-10-12. Bibliographically approved.
Rydblom, S. & Thörnberg, B. (2016). Liquid Water Content and Droplet Sizing Shadowgraph Measuring System for Wind Turbine Icing Detection. IEEE Sensors Journal, 16(8), 2714-2725, Article ID 7384444.
Liquid Water Content and Droplet Sizing Shadowgraph Measuring System for Wind Turbine Icing Detection
2016 (English). In: IEEE Sensors Journal, ISSN 1530-437X, E-ISSN 1558-1748, Vol. 16, no 8, p. 2714-2725, article id 7384444. Article in journal (Refereed). Published.
Abstract [en]

This study shows that the liquid water content (LWC) and the median volume diameter (MVD) can be derived from images of water droplets using a shadowgraph imaging system with incoherent LED illumination.

Icing on structures such as wind turbines results from a combination of LWC and MVD together with other parameters such as temperature, humidity and wind speed. Today, LWC and MVD are not commonly measured at wind turbines. Systems for measuring these properties are often expensive or impractical in terms of location or remote reading. The aim of this study is to gain knowledge about how to design a single imaging-based instrument able to measure these properties with enough precision and accuracy to detect icing conditions for wind turbines.

A method to calculate both the LWC and the MVD from the same images is described in this paper. The size of one droplet is determined by measuring the shadow created by the droplet in background illumination. The concentration is calculated by counting the measured droplets and estimating the volumes in which these droplets can be observed.

In the described study, the observation volume is shown to depend on the particle size and the signal-to-noise ratio (SNR) of each measured particle. The expected coefficient of variation of the LWC, which depends on the droplet size, is shown to be 2.4 percent for droplets 10 µm in diameter and 1.6 percent for 25 µm droplets. This is based on an error estimation of the laboratory measurements, calibrated using a micrometer dot scale.
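
The MVD mentioned above is the diameter below which half of the total liquid water volume is contained. A minimal sketch of that definition (droplet diameters are hypothetical; interpolation between droplets is omitted):

    import numpy as np

    def mvd_um(diameters_um):
        """Median volume diameter: the droplet diameter below which half of
        the total liquid water volume is contained."""
        d = np.sort(np.asarray(diameters_um, dtype=float))
        cum = np.cumsum(d**3) / np.sum(d**3)   # droplet volume scales with d^3
        return d[np.searchsorted(cum, 0.5)]

    # Hypothetical measured droplet diameters in micrometres.
    print(mvd_um([8, 10, 10, 12, 15, 18, 22, 25, 30]))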

Place, publisher, year, edition, pages
IEEE Sensors Council, 2016
Keywords
LWC, MVD, Icing, Clouds, Image processing, Machine vision, Meteorology, Optical microscopy, Wind power generation
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-27321 (URN) 10.1109/JSEN.2016.2518653 (DOI) 000372419100061 () 2-s2.0-84962128668 (Scopus ID) STC (Local ID) STC (Archive number) STC (OAI)
Funder
Swedish Energy Agency
Available from: 2016-03-22 Created: 2016-03-22 Last updated: 2017-10-12. Bibliographically approved.
Imran, M., O'Nils, M., Munir, H. & Thörnberg, B. (2015). Low complexity FPGA based background subtraction technique for thermal imagery. In: ACM International Conference Proceeding Series. Paper presented at the 9th International Conference on Distributed Smart Cameras, ICDSC 2015, Seville, Spain, 8-11 September 2015 (pp. 1-6). Association for Computing Machinery (ACM).
Low complexity FPGA based background subtraction technique for thermal imagery
2015 (English). In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2015, p. 1-6. Conference paper, Published paper (Refereed).
Abstract [en]

Embedded smart camera systems are gaining popularity for a number of real-world surveillance applications. However, challenges such as variation in illumination, shadows, occlusion, and weather conditions remain when employing vision algorithms in outdoor environments. For safety-critical surveillance applications, the visual sensors can be complemented with beyond-visual-range sensors. This in turn requires analysis, development and modification of existing imaging techniques. In this work, a low complexity background modelling and subtraction technique is proposed for thermal imagery. The proposed technique has been implemented on Field Programmable Gate Arrays (FPGAs) after in-depth analysis of different sets of images characterizing poor signal-to-noise ratio challenges, e.g. motion of high-frequency background objects, temperature variation and camera jitter. The proposed technique dynamically updates the background at pixel level and requires storage of only a single frame, as opposed to existing techniques. Comparison of this approach with two other approaches shows that it performs better in different environmental conditions. The proposed technique has been modelled at Register Transfer Level (RTL), and implementation on recent FPGAs shows that the design requires less than 1 percent of the logic resources and 47 percent of the block RAMs, and consumes 91 mW on an Artix-7 100T FPGA.
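
The abstract does not give the exact update rule; one common scheme that matches the single-frame-storage property is a per-pixel running average with a power-of-two learning rate (cheap shifts in hardware). A minimal sketch under that assumption, with made-up constants, not necessarily the technique proposed in the paper:

    import numpy as np

    ALPHA = 1.0 / 16.0   # learning rate; a power of two maps to shifts in hardware
    THRESH = 12          # foreground threshold in grey levels

    def background_update(background, frame):
        """One frame of running-average background subtraction. Only the
        background frame is stored between frames (single-frame storage)."""
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        foreground = diff > THRESH
        background = ((1.0 - ALPHA) * background + ALPHA * frame).astype(np.uint8)
        return foreground, background

    bg = np.full((480, 640), 128, dtype=np.uint8)
    frame = bg.copy(); frame[100:120, 200:240] = 180   # a warm object appears
    fg, bg = background_update(bg, frame)
    print(fg.sum(), "foreground pixels")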

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2015
Keywords
Background modelling, subtraction, FPGA, architecture, smart camera, thermal imaging
National Category
Embedded Systems
Identifiers
urn:nbn:se:miun:diva-25997 (URN) 10.1145/2789116.2789121 (DOI) 2-s2.0-84958251961 (Scopus ID) STC (Local ID) 978-145033681-9 (ISBN) STC (Archive number) STC (OAI)
Conference
9th International Conference on Distributed Smart Cameras, ICDSC 2015, Seville, Spain, 8-11 September 2015
Available from: 2015-09-28 Created: 2015-09-28 Last updated: 2016-12-23. Bibliographically approved.
Malik, A. W., Thörnberg, B., Anwar, Q., Johansen, T. A. & Shahzad, K. (2015). Real Time Decoding of Color Symbol for Optical Positioning System. International Journal of Advanced Robotic Systems, 12(5)
Real Time Decoding of Color Symbol for Optical Positioning System
2015 (English). In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 12, no 5. Article in journal (Refereed). Published.
Abstract [en]

This paper presents the design and real-time decoding of a color symbol that can be used as a reference marker for optical navigation. The designed symbol has a circular shape and is printed on paper using two distinct colors. This pair of colors is selected based on the highest achievable signal-to-noise ratio. The symbol is designed to carry eight bits of information. Real-time decoding of this symbol is performed using a heterogeneous combination of a Field Programmable Gate Array (FPGA) and a microcontroller. An image sensor with a resolution of 1600 by 1200 pixels is used to capture images of symbols in complex backgrounds. Dynamic image segmentation, component labeling and feature extraction were performed on the FPGA. The region of interest was then computed from the extracted features. Feature data belonging to the symbol was sent from the FPGA to the microcontroller. Image processing tasks are partitioned between the FPGA and the microcontroller based on data intensity. Experiments were performed to verify the rotational independence of the symbols. The maximum distance between camera and symbol allowing correct detection and decoding was analyzed. Experiments were also performed to analyze the number of generated image components and the sub-pixel precision versus different light sources and intensities. The proposed hardware architecture can process up to 55 frames per second for accurate detection and decoding of symbols at two-megapixel resolution. The power consumption of the complete system is 342 mW.
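
For the labeling and feature-extraction step described above, a software equivalent of what the FPGA side produces might look as follows. This is a minimal sketch using scipy rather than the paper's hardware design, and the test image is made up:

    import numpy as np
    from scipy import ndimage

    def component_features(binary):
        """Label connected components and return (bounding-box slices,
        centroid, area) per component -- the kind of feature data that
        would be handed from the FPGA to the microcontroller."""
        labels, n = ndimage.label(binary)
        boxes = ndimage.find_objects(labels)
        centroids = ndimage.center_of_mass(binary, labels, list(range(1, n + 1)))
        areas = ndimage.sum(binary, labels, list(range(1, n + 1)))
        return list(zip(boxes, centroids, areas))

    img = np.zeros((8, 8), dtype=np.uint8)
    img[1:3, 1:4] = 1; img[5:7, 5:8] = 1   # two hypothetical components
    for box, centroid, area in component_features(img):
        print(box, centroid, area)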

Keywords
Indoor navigation, Reference symbol, Robotic vision
National Category
Robotics
Identifiers
urn:nbn:se:miun:diva-23168 (URN) 10.5772/59680 (DOI) 000350647600001 () 2-s2.0-84923346270 (Scopus ID) STC (Local ID) STC (Archive number) STC (OAI)
Funder
Knowledge Foundation
Available from: 2014-10-08 Created: 2014-10-08 Last updated: 2017-10-27. Bibliographically approved.
Jonsson, P., Thörnberg, B., Dobslaw, F. & Vaa, T. (2015). Road Condition Imaging: Model Development. Paper presented at the Transportation Research Board 2015 Annual Meeting.
Road Condition Imaging: Model Development
2015 (English). Conference paper, Published paper (Refereed).
Abstract [en]

It is important to classify road conditions in order to plan winter road maintenance, carry out proper actions and issue warnings to road users. Existing sensor systems only cover parts of the road surface, and manual observations can vary depending on who classifies them. One challenge is to classify road conditions with automatic monitoring systems. To address that challenge, an innovative and cost-effective road condition imaging system, capable of classifying individual pixels of an image as dry, wet, icy or snowy, is evaluated. This paper presents a model based on data from winter 2013-2014, retrieved from two installations in Sweden and Norway. The system uses a near-infrared image detector and optical wavelength filters. By combining data from images taken through different wavelength filters, it is possible to determine the road status using multiclass classifiers. One classifier was developed for each road condition, which implies that a pixel can be assigned two or more road conditions at the same time. This multiclass problem is solved by developing a Bayesian Network that uses road weather information system data to calculate the probabilities of the different road conditions.
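
One way to resolve the overlapping one-vs-rest decisions mentioned above is to weight each per-condition classifier score by a prior probability and pick the largest posterior. A minimal sketch; the scores, priors and function name are illustrative, not the paper's actual model:

    CONDITIONS = ["dry", "wet", "icy", "snowy"]

    def classify_pixel(scores, priors):
        """scores: per-condition outputs of the one-vs-rest classifiers;
        priors: per-condition probabilities, here assumed to come from a
        Bayesian Network fed with road weather information system data."""
        posterior = {c: scores[c] * priors[c] for c in CONDITIONS}
        total = sum(posterior.values())
        normalized = {c: p / total for c, p in posterior.items()}
        return max(normalized, key=normalized.get), normalized

    scores = {"dry": 0.2, "wet": 0.7, "icy": 0.6, "snowy": 0.1}    # made-up pixel
    priors = {"dry": 0.5, "wet": 0.3, "icy": 0.15, "snowy": 0.05}  # made-up priors
    print(classify_pixel(scores, priors))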

Keywords
Road condition, near infrared, classification, remote sensing, Bayesian networks
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-24250 (URN) STC (Local ID) STC (Archive number) STC (OAI)
Conference
Transportation Research Board 2015 Annual Meeting
Note

Paper number: 15-0885

Presented at the conference in Washington.

Available from: 2015-01-29 Created: 2015-01-29 Last updated: 2016-12-23. Bibliographically approved.
Jonsson, P., Thörnberg, B. & Casselgren, J. (2015). Road surface status classification using spectral analysis of NIR camera images. IEEE Sensors Journal, 15(3), 1641-1656
Road surface status classification using spectral analysis of NIR camera images
2015 (English). In: IEEE Sensors Journal, ISSN 1530-437X, E-ISSN 1558-1748, Vol. 15, no 3, p. 1641-1656. Article in journal (Refereed). Published.
Abstract [en]

Considering the vast number of weather-related accidents that occur every winter, there is a need for an automated road status classification system. Previous research has shown that it is possible to detect hazardous road conditions, including, for example, icy pavements, using single-point infrared illumination and infrared detectors. In this paper, we extend this research to camera surveillance of a road section, allowing for classification of area segments of weather-related road surface conditions such as wet, snow covered, or icy. Infrared images have been obtained using an infrared camera equipped with a set of optical wavelength filters. The images have primarily been used to develop multivariate data models and also to classify road conditions in each pixel. This system is a vast improvement on existing single-spot road status classification systems. The resulting imaging system can reliably distinguish between dry, wet, icy, or snow-covered sections of road surfaces.
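
A per-pixel feature vector for such multivariate models can be formed by ratioing the per-filter images against a reference band, which suppresses common illumination. The specific ratio choice below is an assumption, not the paper's model:

    import numpy as np

    def band_ratio_features(filter_images):
        """Stack images taken through different wavelength filters and
        return one ratio per non-reference band for every pixel."""
        stack = np.stack([img.astype(np.float32) for img in filter_images], axis=-1)
        reference = stack[..., :1] + 1e-6   # first band; avoid division by zero
        return stack[..., 1:] / reference

    # Three hypothetical 4x4 filter images -> a (4, 4, 2) feature array.
    bands = [np.random.randint(1, 255, (4, 4)) for _ in range(3)]
    print(band_ratio_features(bands).shape)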

Keywords
Remote sensing, Infrared imaging, Spectral analysis, Image classification
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-24249 (URN) 10.1109/JSEN.2014.2364854 (DOI) 000348858300008 () 2-s2.0-84921047416 (Scopus ID) STC (Local ID) STC (Archive number) STC (OAI)
Available from: 2015-01-29 Created: 2015-01-29 Last updated: 2017-12-05. Bibliographically approved.
Malik, A. W., Thörnberg, B., Imran, M. & Lawal, N. (2014). Hardware Architecture for Real-time Computation of Image Component Feature Descriptors on a FPGA. International Journal of Distributed Sensor Networks, Article ID 815378.
Hardware Architecture for Real-time Computation of Image Component Feature Descriptors on a FPGA
2014 (English). In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, article id 815378. Article in journal (Refereed). Published.
Abstract [en]

This paper describes a hardware architecture for real-time image component labeling and the computation of image component feature descriptors. These descriptors are object-related properties used to describe each image component. Embedded machine vision systems demand robust performance, power efficiency as well as minimum area utilization, depending on the deployed application. In the proposed architecture, the hardware modules for component labeling and feature calculation run in parallel. A CMOS image sensor (MT9V032), operating at a maximum clock frequency of 27 MHz, was used to capture the images. The architecture was synthesized and implemented on a Xilinx Spartan-6 FPGA. The developed architecture is capable of processing 390 video frames per second of size 640x480 pixels. Dynamic power consumption is 13 mW at 86 frames per second.
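
Feature descriptors of this kind (area, bounding box, centroid) can be accumulated in a single raster-order pass, which is what makes a pipelined hardware implementation natural. A minimal software sketch of that accumulation, assuming the pixels are already labeled (the labeling stage itself is omitted):

    import numpy as np

    def stream_descriptors(labels):
        """Accumulate per-component descriptors in one raster-order pass,
        as a hardware pipeline would while pixels arrive from the sensor."""
        desc = {}
        height, width = labels.shape
        for y in range(height):
            for x in range(width):
                lab = int(labels[y, x])
                if lab == 0:
                    continue    # background pixel
                d = desc.setdefault(lab, {"area": 0, "sx": 0, "sy": 0,
                                          "x0": x, "y0": y, "x1": x, "y1": y})
                d["area"] += 1; d["sx"] += x; d["sy"] += y
                d["x0"] = min(d["x0"], x); d["x1"] = max(d["x1"], x)
                d["y0"] = min(d["y0"], y); d["y1"] = max(d["y1"], y)
        for d in desc.values():
            d["centroid"] = (d["sx"] / d["area"], d["sy"] / d["area"])
        return desc

    labels = np.array([[0, 1, 1, 0],
                       [0, 1, 0, 2],
                       [0, 0, 2, 2]])   # a made-up labeled image
    print(stream_descriptors(labels))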

National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Embedded Systems
Identifiers
urn:nbn:se:miun:diva-20382 (URN) 10.1155/2014/815378 (DOI) 000330042300001 () 2-s2.0-84893832573 (Scopus ID) STC (Local ID) STC (Archive number) STC (OAI)
Funder
Knowledge Foundation
Available from: 2013-11-29 Created: 2013-11-29 Last updated: 2017-12-06. Bibliographically approved.
Meng, X., Thörnberg, B. & Olsson, L. (2014). Strategic Proactive Obsolescence Management Model. IEEE Transactions on Components, Packaging, and Manufacturing Technology, 4(6), 1099-1108
Strategic Proactive Obsolescence Management Model
2014 (English). In: IEEE Transactions on Components, Packaging, and Manufacturing Technology, ISSN 2156-3950, E-ISSN 2156-3985, Vol. 4, no 6, p. 1099-1108. Article in journal (Refereed). Published.
Abstract [en]

Obsolescence problems may occur when demand exceeds the number of components in stock, particularly for systems with a life cycle longer than that of one or more of their components, such as automotive, avionics and military applications. This paper discusses the electronic component obsolescence problem and presents a formal mathematical strategic proactive obsolescence management model for long life cycle systems.

The model presented in this paper uses redesign and last-time-buy (LTB) as two management methods. LTB cost is estimated from unit cost, demand quantities, buffer, discount rate and holding cost. Redesign cost is associated with component type and quantities.

This model can estimate the minimum management costs for a system with different architectures. It consists of two parts. The first part generates a graph in the form of an obsolescence management diagram; a segment table containing the data of this diagram is calculated and prepared for optimization. The second part finds the minimum cost for system obsolescence management. Mixed integer linear programming (MILP) is used to calculate the minimum management cost and schedule. The model is open source, allowing other research groups to freely download and modify it.

A display and control system case study shows how the model can be applied in practice. A reactive approach is presented as a comparison. The result of the strategic proactive management model shows significant cost avoidance compared to the reactive approach.
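
The LTB cost factors listed above (unit cost, demand quantities, buffer, discount rate, holding cost) can be combined as a discounted cash flow. The following sketch is one plausible reading with made-up numbers, not the paper's exact cost model:

    def ltb_cost(unit_cost, yearly_demand, buffer_frac, discount_rate, holding_cost):
        """Buy (total demand + buffer) units up front, then pay a per-unit
        holding cost on the remaining stock each year, discounted to
        present value."""
        total_units = sum(yearly_demand) * (1.0 + buffer_frac)
        cost = unit_cost * total_units
        stock = total_units
        for year, demand in enumerate(yearly_demand, start=1):
            stock -= demand
            cost += holding_cost * max(stock, 0.0) / (1.0 + discount_rate) ** year
        return cost

    # Hypothetical five-year demand tail, 10% buffer, 5% discount rate.
    print(ltb_cost(unit_cost=4.0, yearly_demand=[500, 400, 300, 200, 100],
                   buffer_frac=0.10, discount_rate=0.05, holding_cost=0.25))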

Place, publisher, year, edition, pages
IEEE Components, Packaging, and Manufacturing Technology Society, 2014
Keywords
Strategic, DMSMS, Obsolescence, Graph, MILP
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-21016 (URN) 10.1109/TCPMT.2014.2316212 (DOI) 000337136200017 () 2-s2.0-84902144517 (Scopus ID) STC (Local ID) STC (Archive number) STC (OAI)
Available from: 2014-01-12 Created: 2014-01-12 Last updated: 2017-12-06
Svelander, L. & Thörnberg, B. (2013). Belysningsanordning för fordon (Lighting device for vehicles). SE SE1200160 A1.
Belysningsanordning för fordon (Lighting device for vehicles)
2013 (Swedish). Patent (Other (popular science, discussion, etc.)).
Abstract [sv]

Lighting device for emitting electromagnetic radiation for illumination purposes or the like from a vehicle. The lighting device comprises at least one unit in which electromagnetic radiation is generated by at least one radiation-emitting unit, whose generated radiation is emitted from the lighting device while forming at least one light cone that is adjustable by means of at least one control system. The lighting device comprises at least one image sensor whose collected information is processed in at least one computerized image analysis unit. The adjustability of the light cone is achieved by dividing it into a number of sectors, where the emitted light intensity in each sector can be controlled individually based on information collected by the image sensor and processed by the image analysis unit. The patent application also comprises a method for using the lighting device.

National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-24887 (URN)
Patent
SE SE1200160 A1
Note

The patent was approved in a final office action dated 2014-01-24. There is an ongoing dispute about the patent, and it is therefore not yet in force. Read more about this at www.prv.se and in the Swedish Patent Database (Patentdatabasen).

Available from: 2015-04-24 Created: 2015-04-24 Last updated: 2018-04-19. Bibliographically approved.