Mid Sweden University

Publications (10 of 12)
Xie, Y., Nie, Y., Lundgren, J., Yang, M., Zhang, Y. & Chen, Z. (2024). Cervical Spondylosis Diagnosis Based on Convolutional Neural Network with X-ray Images. Sensors, 24(11), Article ID 3428.
2024 (English). In: Sensors, E-ISSN 1424-8220, Vol. 24, no 11, article id 3428. Article in journal (Refereed). Published.
Abstract [en]

The increase in Cervical Spondylosis cases and the expansion of the affected demographic to younger patients have escalated the demand for X-ray screening. Challenges include variability in imaging technology, differences in equipment specifications, and the diverse experience levels of clinicians, which collectively hinder diagnostic accuracy. In response, a deep learning approach utilizing a ResNet-34 convolutional neural network has been developed. This model, trained on a comprehensive dataset of 1235 cervical spine X-ray images representing a wide range of projection angles, aims to mitigate these issues by providing a robust tool for diagnosis. Validation of the model was performed on an independent set of 136 X-ray images, also varied in projection angles, to ensure its efficacy across diverse clinical scenarios. The model achieved a classification accuracy of 89.7%, significantly outperforming the traditional manual diagnostic approach, which has an accuracy of 68.3%. This advancement demonstrates the potential of deep learning models not only to complement but also to enhance the diagnostic capabilities of clinicians in identifying Cervical Spondylosis, offering a promising avenue for improving diagnostic accuracy and efficiency in clinical settings.

Place, publisher, year, edition, pages
MDPI AG, 2024
Keywords
cervical spondylosis, X-ray classification, multi-label, deep learning
National Category
Radiology, Nuclear Medicine and Medical Imaging Computer graphics and computer vision
Identifiers
urn:nbn:se:miun:diva-51455 (URN), 10.3390/s24113428 (DOI), 001245644300001 (), 2-s2.0-85195868888 (Scopus ID)
Available from: 2024-06-06. Created: 2024-06-06. Last updated: 2025-09-25. Bibliographically approved.
Gatner, O., Shallari, I., Nie, Y., O'Nils, M. & Imran, M. (2024). Method for Capturing Measured LiDAR Data with Ground Truth for Generation of Big Real LiDAR Data Sets. In: Conference Record - IEEE Instrumentation and Measurement Technology Conference: . Paper presented at Conference Record - IEEE Instrumentation and Measurement Technology Conference. IEEE conference proceedings
2024 (English). In: Conference Record - IEEE Instrumentation and Measurement Technology Conference, IEEE conference proceedings, 2024. Conference paper, Published paper (Refereed).
Abstract [en]

The development of machine learning has made data pivotal to technological advancement, especially data for which the ground truth of the targeted parameters can be efficiently captured. This requires methods that facilitate accurate data collection with ground truth. In this respect, Time-of-Flight sensors pose high complexity due to the multifaceted nature of noise in the captured data. To enable the use of such sensors in a wide range of applications, including artificial intelligence, we must also provide accurate ground-truth data. In this article, we present a method for automated data capture from a LiDAR sensor together with ground-truth data generation. This method will facilitate generating big datasets from LiDAR sensors with high-accuracy ground-truth data. In addition, we provide a dataset that, aside from depth data, also contains RGB, confidence, and infrared data captured from the LiDAR sensor. As a result, the proposed method not only facilitates data capture but also enables the generation of accurate ground-truth data, with an RMSE of only 0.04 m at 1.3 m distance.
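The RMSE figure quoted in the abstract can be reproduced with a few lines. A minimal sketch, using hypothetical depth samples around a 1.3 m target rather than the paper's data:

```python
import math

def rmse(measured, reference):
    """Root-mean-square error between measured depths and ground truth, in metres."""
    errors = [(m - r) ** 2 for m, r in zip(measured, reference)]
    return math.sqrt(sum(errors) / len(errors))

# Hypothetical LiDAR depth samples around a 1.3 m target (not the paper's data):
measured  = [1.28, 1.33, 1.31, 1.26]
reference = [1.30, 1.30, 1.30, 1.30]
error = rmse(measured, reference)   # ≈ 0.027 m for these sample values
```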

Place, publisher, year, edition, pages
IEEE conference proceedings, 2024
Keywords
3D, confidence data, denoising, ground truth, LiDAR, point cloud, Time of Flight
National Category
Computer Sciences
Identifiers
urn:nbn:se:miun:diva-52053 (URN), 10.1109/I2MTC60896.2024.10561218 (DOI), 001261521400360 (), 2-s2.0-85197770162 (Scopus ID), 9798350380903 (ISBN)
Conference
Conference Record - IEEE Instrumentation and Measurement Technology Conference
Available from: 2024-08-07. Created: 2024-08-07. Last updated: 2025-09-25. Bibliographically approved.
Nie, Y., O'Nils, M., Gatner, O., Imran, M. & Shallari, I. (2024). Multi-Path Interference Denoising of LiDAR Data Using a Deep Learning Based on U-Net Model. In: 2024 IEEE International Instrumentation and Measurement Technology Conference (I2MTC): . Paper presented at Conference Record - IEEE Instrumentation and Measurement Technology Conference. IEEE conference proceedings
2024 (English). In: 2024 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), IEEE conference proceedings, 2024. Conference paper, Published paper (Refereed).
Abstract [en]

Eliminating Multi-Path Interference (MPI) remains a significant unresolved challenge in depth estimation with Time-of-Flight (ToF) cameras. ToF data is typically affected by significant noise and artifacts stemming from MPI. Although a variety of conventional methods have been suggested to enhance ToF data quality, machine learning techniques have been applied infrequently, primarily due to the scarcity of authentic training data with accurate depth information. This paper introduces an approach that eliminates the dependency on labeled real-world data within the learning framework. We employ a U-Net trained in a supervised manner on data with ground truth, enabling it to leverage multi-frequency ToF data for MPI correction. We also compare one-, two-, and three-channel inputs. Our experimental results convincingly showcase the effectiveness of this approach in reducing noise in real-world data.

Place, publisher, year, edition, pages
IEEE conference proceedings, 2024
Keywords
depth, fusion, LiDAR, MPI, U-Net
National Category
Computer Systems
Identifiers
urn:nbn:se:miun:diva-52052 (URN), 10.1109/I2MTC60896.2024.10560867 (DOI), 001261521400171 (), 2-s2.0-85197742945 (Scopus ID), 9798350380903 (ISBN)
Conference
Conference Record - IEEE Instrumentation and Measurement Technology Conference
Available from: 2024-08-07. Created: 2024-08-07. Last updated: 2025-09-25. Bibliographically approved.
Nie, Y., Sommella, P., Carratù, M., O'Nils, M. & Lundgren, J. (2023). A Deep CNN Transformer Hybrid Model for Skin Lesion Classification of Dermoscopic Images Using Focal Loss. Diagnostics, 13(1), Article ID 72.
2023 (English). In: Diagnostics, ISSN 2075-4418, Vol. 13, no 1, article id 72. Article in journal (Refereed). Published.
Abstract [en]

Skin cancers are the most commonly diagnosed cancers worldwide, with an estimated >1.5 million new cases in 2020. The use of computer-aided diagnosis (CAD) systems for early detection and classification of skin lesions helps reduce skin cancer mortality rates. Inspired by the success of the transformer network in natural language processing (NLP) and the deep convolutional neural network (DCNN) in computer vision, we propose an end-to-end CNN transformer hybrid model with a focal loss (FL) function to classify skin lesion images. First, the CNN extracts low-level, local feature maps from the dermoscopic images. In the second stage, the vision transformer (ViT) globally models these features, extracts abstract and high-level semantic information, and finally sends this to the multi-layer perceptron (MLP) head for classification. Based on an evaluation of three different loss functions, the FL-based algorithm aims to mitigate the extreme class imbalance in the International Skin Imaging Collaboration (ISIC) 2018 dataset. The experimental analysis demonstrates that the hybrid model and FL strategy achieve impressive skin lesion classification results, with significantly higher performance than existing work.
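The focal loss mentioned in the abstract down-weights easy, well-classified examples so that training concentrates on hard and rare classes. A minimal sketch of the standard binary formulation (gamma = 2 and alpha = 0.25 are common defaults, not values taken from the paper):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction.

    p: predicted probability of the positive class; y: true label (0 or 1).
    gamma down-weights easy examples; alpha balances the classes.
    (gamma=2.0, alpha=0.25 are common defaults, not the paper's settings.)
    """
    pt = p if y == 1 else 1.0 - p          # probability assigned to the true class
    a = alpha if y == 1 else 1.0 - alpha   # class-balance weight
    return -a * (1.0 - pt) ** gamma * math.log(pt)

# An easy, well-classified example contributes far less than a hard one:
easy = focal_loss(0.95, 1)   # confident and correct -> tiny loss
hard = focal_loss(0.10, 1)   # confident and wrong  -> large loss
```

With gamma = 0 and alpha = 1 the expression reduces to ordinary cross-entropy, which is what makes it a drop-in replacement during training.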

Keywords
deep learning, focal loss, hybrid model, skin lesion
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:miun:diva-46860 (URN), 10.3390/diagnostics13010072 (DOI), 000908965800001 (), 2-s2.0-85145859869 (Scopus ID)
Available from: 2023-01-17. Created: 2023-01-17. Last updated: 2025-09-25. Bibliographically approved.
Nie, Y. (2023). Deep Learning Approaches towards Skin Lesion Classification with Dermoscopic Images. (Doctoral dissertation). Sundsvall: Mid Sweden University
2023 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Melanoma is a skin cancer that tends to be deadly. The incidence of melanoma is currently at the highest level ever recorded in Europe, North America and Oceania. The survival rate can be significantly increased if skin lesions are identified in dermoscopic images at an early stage. On the other hand, the classification of skin lesions is incredibly challenging. Skin lesion classification using deep learning approaches has provided better results in classifying skin diseases than those of dermatologists, which is lifesaving in terms of diagnosis.

This thesis presents a review of our research articles on classifying skin lesions using deep learning. Regarding the research, I have four goals concerning research frontier work, small datasets, data imbalance, and improving accuracy. In this thesis, I discuss how deep learning can classify skin diseases, summarizing the problems that remain at this stage and the outlook for the future.

For the above goals, I first studied and summarized more than 200 high-quality articles published over five years. I then used three versions of You Only Look Once (Yolo) to detect skin lesions. Although there were only 200 pictures, the test was very effective for detection. I applied the five-fold algorithm to Vgg_16, trained five models, and fused them to solve the small-data problem. To improve the accuracy, I also tried to combine the traditional machine learning method, i.e., the seven-point checklist, with three different backbones. Since the learning rate strongly affects model training, I used the cosine learning rate. I also tried the hybrid model, combining convolutional neural networks (CNN) and a transformer, to train on the dataset, and applied focal loss to balance the extremely unbalanced weight of the data.

In addition to high-quality datasets and high-performance computers being extremely important in the research and application of deep learning, the optimization of machine learning algorithms for skin lesions can be endless.

Abstract [sv]

Melanom är en form av hudcancer som tenderar att vara dödlig. Förekomsten av melanom är för närvarande på den högsta nivån som någonsin registrerats i Europa, Nordamerika och Oceanien. Chansen för överlevnad ökar avsevärt om hudskadorna identifieras i dermatoskopiska bilder i ett tidigare skede, men klassificering av hudskador är otroligt utmanande. Med metoder för djupinlärning har klassificering av hudsjukdomar i vissa fall gett bättre resultat än hudläkares diagnoser, vilket ger större möjligheter att rädda liv.

Denna avhandling presenterar en genomgång av våra forskningsartiklar om klassificering av hudskador med hjälp av djupinlärning. När det gäller vår forskning har jag fyra mål som handlar om forskningens frontlinjearbete, små datamängder, obalans i data och om att förbättra noggrannheten. I detta avhandlingsarbete diskuterar jag hur djupinlärning kan klassificera hudsjukdomar, sammanfattar de problem som kvarstår i detta skede och diskuterar utsikterna för framtiden.

För ovanstående mål studerade och sammanfattade jag först mer än 200 högkvalitativa artiklar publicerade under fem år. Jag använde sedan tre versioner av You only look once (Yolo) för att upptäcka hudskador. Även om det bara fanns 200 bilder var testet mycket effektivt för upptäckt. Jag tillämpade en femdelad algoritm på Vgg-16, tränade fem modeller och sammanfogade dem för att lösa problemet med små datamängder. För att förbättra noggrannheten försökte jag också kombinera en sjupunkts checklista, förstärkt med maskininlärning, med tre olika grundstommar. Eftersom inlärningshastigheten starkt påverkar modellträningen använde jag cosinus-inlärningshastigheten. Sedan försökte jag också använda hybridmodellen, som kombinerade konvolutionella neurala nätverk (CNN) och transformator för att träna dataset, och tillämpade fokalförlust för att balansera den extremt obalanserade vikten av datan.

Förutom att högkvalitativa datamängder och högpresterande datorer är extremt viktiga i forskningen och tillämpningen av djupinlärning, kan optimeringen av maskininlärningsalgoritmer för hudskador vara oändliga.

Place, publisher, year, edition, pages
Sundsvall: Mid Sweden University, 2023. p. 51
Series
Mid Sweden University doctoral thesis, ISSN 1652-893X ; 383
National Category
Medical Imaging
Identifiers
urn:nbn:se:miun:diva-46957 (URN), 978-91-89341-86-9 (ISBN)
Public defence
2023-02-16, C312, Holmgatan 10, Sundsvall, 09:00 (English)
Available from: 2023-01-20. Created: 2023-01-19. Last updated: 2025-09-25. Bibliographically approved.
Xie, Y., Ma, Y., Yang, J., Nie, Y., Chen, Z., Zhang, C. & Zuo, L. (2022). Development and validation of multilayer perceptual neural network in glomerular filtration rate evaluation. Chinese Journal of Nephrology, 38(5), 369-378
2022 (English). In: Chinese Journal of Nephrology, ISSN 1001-7097, Vol. 38, no 5, p. 369-378. Article in journal (Refereed). Published.
Abstract [en]

Objective: To develop a neural network model for evaluating glomerular filtration rate (GFR) based on a multilayer perceptual neural network, and to compare its clinical applicability with the improved Chinese creatinine-based GFR evaluation formula (C-GFRcr) and the evaluation formula (EPI-GFRcr) of the American Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI). Methods: A total of 684 chronic kidney disease (CKD) patients used to develop the modified Chinese creatinine-based GFR evaluation formula were taken as the research object. The data of 454 patients were randomly selected as the development group, and the data of the other 230 patients formed the verification group. The multilayer perceptual neural network GFR evaluation model (M-GFRcr) was established. With the double-plasma GFR as the reference value (rGFR), the correlation, mean difference, mean absolute difference, precision and accuracy of C-GFRcr, EPI-GFRcr and M-GFRcr were compared. Results: Among the 684 CKD patients, there were 352 males and 332 females, with an age of (49.9 ± 15.8) years. The correlation between M-GFRcr and rGFR was the highest (Pearson correlation = 0.93, P < 0.001). The mean difference of M-GFRcr was lower than that of C-GFRcr (Z = 9.929, P < 0.001) and EPI-GFRcr (Z = 10.573, P < 0.001). The mean absolute difference of M-GFRcr was also lower than that of C-GFRcr (Z = 3.953, P < 0.001) and EPI-GFRcr (Z = 4.210, P < 0.001). The ±15% accuracy of M-GFRcr was higher than that of C-GFRcr (χ2 = 26.068, P < 0.001) and EPI-GFRcr (χ2 = 23.154, P < 0.001). The ±30% accuracy of M-GFRcr was also higher than that of C-GFRcr (χ2 = 8.264, P = 0.001) and EPI-GFRcr (χ2 = 11.963, P = 0.001).
In the early stages of CKD (CKD 1-2), the mean difference of M-GFRcr was lower than that of C-GFRcr (Z = 7.401, P < 0.001) and EPI-GFRcr (Z = 8.096, P < 0.001); the mean absolute difference of M-GFRcr was also lower than that of C-GFRcr (Z = 4.723, P < 0.001) and EPI-GFRcr (Z = 4.946, P < 0.001); the ±15% accuracy of M-GFRcr was higher than that of C-GFRcr (χ2 = 23.547, P < 0.001) and EPI-GFRcr (χ2 = 26.421, P < 0.001); and the ±30% accuracy of M-GFRcr was also higher than that of C-GFRcr (χ2 = 12.089, P = 0.001) and EPI-GFRcr (χ2 = 16.168, P < 0.001). However, there was no significant difference in applicability among C-GFRcr, EPI-GFRcr and M-GFRcr in the advanced stages of CKD (CKD 3-5). Conclusion: Compared with the improved Chinese creatinine-based GFR evaluation formula C-GFRcr and the CKD-EPI evaluation formula EPI-GFRcr, the multilayer perceptual neural network model evaluates GFR in CKD patients with significantly improved accuracy, especially in CKD stages 1-2.
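The "±15% accuracy" reported above is the fraction of estimates that fall within 15% of the reference GFR (the P15/P30-style metric commonly used when validating GFR equations). A minimal sketch, assuming hypothetical values rather than the study's data:

```python
def accuracy_within(estimates, reference, tol=0.15):
    """Fraction of GFR estimates within +/- tol (e.g. 15%) of the reference
    value rGFR. All example values below are hypothetical."""
    hits = sum(1 for e, r in zip(estimates, reference) if abs(e - r) <= tol * r)
    return hits / len(estimates)

# Hypothetical estimates vs. reference rGFR, in mL/min/1.73 m^2:
estimated = [92.0, 60.0, 31.0, 118.0]
measured  = [90.0, 75.0, 30.0, 100.0]
p15 = accuracy_within(estimated, measured, tol=0.15)   # 2 of 4 within 15% -> 0.5
```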

Place, publisher, year, edition, pages
Chinese Medical Journals Publishing House Co.Ltd, 2022
Keywords
Glomerular filtration rate, Neural networks (Computer), Renal insufficiency, chronic
National Category
Clinical Medicine
Identifiers
urn:nbn:se:miun:diva-49709 (URN), 10.3760/cma.j.cn441217-20210525-00054 (DOI), 2-s2.0-85174506824 (Scopus ID)
Available from: 2023-10-31. Created: 2023-10-31. Last updated: 2025-09-25. Bibliographically approved.
Nie, Y., Sommella, P., Carratu, M., Ferro, M., O'Nils, M. & Lundgren, J. (2022). Recent Advances in Diagnosis of Skin Lesions using Dermoscopic Images based on Deep Learning. IEEE Access, 10, 95716-95747
2022 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 95716-95747. Article in journal (Refereed). Published.
Abstract [en]

Skin cancer is one of the most threatening cancers, spreading to other parts of the body if not caught and treated early. During the last few years, the integration of deep learning into skin cancer diagnosis has been a milestone in health care, and dermoscopic images are right at the center of this revolution. This review focuses on the state-of-the-art automatic diagnosis of skin cancer from dermoscopic images based on deep learning. It thoroughly explores existing deep learning methods and their application in diagnosing dermoscopic images. This study aims to present and summarize the latest methodology in melanoma classification and the techniques to improve it. We discuss advancements in deep learning-based solutions to diagnose skin cancer, along with some challenges and future opportunities to strengthen these automatic systems to support dermatologists and enhance their ability to diagnose skin cancer.

Keywords
Biomedical imaging, Cancer, Classification, Convolutional neural networks, Deep learning, Dermatology, Dermoscopy images, Image color analysis, Image recognition, Lesions, Literature review, Melanoma, Skin, Skin cancer
National Category
Radiology, Nuclear Medicine and Medical Imaging Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:miun:diva-45974 (URN), 10.1109/ACCESS.2022.3199613 (DOI), 000860813300001 (), 2-s2.0-85136647170 (Scopus ID)
Available from: 2022-09-06. Created: 2022-09-06. Last updated: 2025-09-25. Bibliographically approved.
Nie, Y., Carratu, M., O'Nils, M., Sommella, P., Moise, A. U. & Lundgren, J. (2022). Skin Cancer Classification based on Cosine Cyclical Learning Rate with Deep Learning. In: Conference Record - IEEE Instrumentation and Measurement Technology Conference: . Paper presented at 2022 IEEE International Instrumentation and Measurement Technology Conference, I2MTC 2022, 16 May 2022 through 19 May 2022. IEEE
2022 (English). In: Conference Record - IEEE Instrumentation and Measurement Technology Conference, IEEE, 2022. Conference paper, Published paper (Refereed).
Abstract [en]

Since early-stage skin cancer identification can improve melanoma prognosis and significantly reduce treatment costs, AI-based diagnosis systems might greatly benefit patients suffering from suspicious skin lesions. This study proposes a cosine cyclical learning rate for a skin cancer classification model to improve melanoma prediction. The model builds on three established CNNs, used as standard deep feature extraction modules for skin cancer classification in this study (Vgg19, ResNet101 and InceptionV3). Each CNN model applies three different learning rates: a fixed learning rate (LR), Cosine Annealing LR, and Cosine Annealing with Warm Restarts. HAM10000, a large publicly available collection of dermoscopic images, is used for our experiments. The performance of the proposed approach was appraised through comparative experiments. The outcome indicates that the proposed method, with its cosine cyclical learning rate, diagnoses skin lesions efficiently.
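The cosine annealing with warm restarts schedule compared above can be sketched in a few lines: the learning rate decays from a maximum to a minimum along a half-cosine, then jumps back up at the start of each cycle. The values of lr_max, lr_min and the cycle period below are illustrative choices, not the paper's settings:

```python
import math

def cosine_warm_restarts(step, period, lr_max=1e-3, lr_min=1e-5):
    """SGDR-style schedule: lr decays from lr_max to lr_min along a
    half-cosine over `period` steps, then restarts at lr_max.
    lr_max, lr_min and period are illustrative, not the paper's values."""
    t = step % period                                    # position within the current cycle
    cos_factor = 0.5 * (1.0 + math.cos(math.pi * t / period))
    return lr_min + (lr_max - lr_min) * cos_factor

# The rate decays smoothly, then jumps back to lr_max at steps 100 and 200:
lrs = [cosine_warm_restarts(s, period=100) for s in range(300)]
```

Setting `period` larger than the total step count recovers plain cosine annealing without restarts, which is how the three schedules in the abstract relate to each other.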

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
cosine cyclical learning rate, deep learning, dermoscopic images, HAM10000, skin cancer
National Category
Computer Engineering Cancer and Oncology
Identifiers
urn:nbn:se:miun:diva-45757 (URN), 10.1109/I2MTC48687.2022.9806568 (DOI), 000844585400099 (), 2-s2.0-85134427579 (Scopus ID), 9781665483605 (ISBN)
Conference
2022 IEEE International Instrumentation and Measurement Technology Conference, I2MTC 2022, 16 May 2022 through 19 May 2022
Available from: 2022-08-03. Created: 2022-08-03. Last updated: 2025-09-25. Bibliographically approved.
Nie, Y. (2021). Automatic Melanoma Diagnosis in Dermoscopic Imaging Base on Deep Learning System. (Licentiate dissertation). Mid Sweden University
2021 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

Melanoma is one of the deadliest forms of cancer. Unfortunately, its incidence rates have been increasing all over the world. One of the techniques used by dermatologists to diagnose melanomas is an imaging modality called dermoscopy. The skin lesion is inspected using a magnification device and a light source. This technique makes it possible for the dermatologist to observe subcutaneous structures that would be invisible otherwise. However, the use of dermoscopy is not straightforward, requiring years of practice. Moreover, the diagnosis is many times subjective and challenging to reproduce. Therefore, it is necessary to develop automatic methods that will help dermatologists provide more reliable diagnoses. 

Since this cancer is visible on the skin, it is potentially detectable at a very early stage when it is curable. Recent developments have converged to make fully automatic early melanoma detection a real possibility. First, the advent of dermoscopy has enabled a dramatic boost in the clinical diagnostic ability to the point that it can detect melanoma in the clinic at the earliest stages. This technology’s global adoption has allowed the accumulation of extensive collections of dermoscopy images. The development of advanced technologies in image processing and machine learning has given us the ability to distinguish malignant melanoma from the many benign mimics that require no biopsy. These new technologies should allow earlier detection of melanoma and reduce a large number of unnecessary and costly biopsy procedures. Although some of the new systems reported for these technologies have shown promise in preliminary trials, a widespread implementation must await further technical progress in accuracy and reproducibility. 

This thesis provides an overview of our deep learning (DL) based methods used in the diagnosis of melanoma in dermoscopy images. First, we introduce the background. Then, the thesis gives a brief overview of the state-of-the-art articles on melanoma interpretation. After that, a review is provided of the deep learning models for melanoma image analysis and the main popular techniques to improve diagnostic performance. We also summarize our research results. Finally, we discuss the challenges and opportunities for automating the diagnostic procedures for melanocytic skin lesions. We end with conclusions and directions for the following research plan.

Place, publisher, year, edition, pages
Mid Sweden University, 2021. p. 32
Series
Mid Sweden University licentiate thesis, ISSN 1652-8948 ; 180
Keywords
Melanoma classification, computer vision, Deep learning, CNN
National Category
Dermatology and Venereal Diseases Medical Imaging Computer Engineering
Identifiers
urn:nbn:se:miun:diva-41751 (URN), 978-91-89341-00-5 (ISBN)
Presentation
2021-04-23, C312, Holmgatan 10, Sundsvall, 13:00 (English)
Available from: 2021-03-29. Created: 2021-03-26. Last updated: 2025-09-25. Bibliographically approved.
Nie, Y., Ferro, M., Sommella, P., Carratù, M., Cacciapuoti, S., Di Leo, G., . . . Fabbrocini, G. (2021). Ensembling CNNs for dermoscopic analysis of suspicious skin lesions. In: 2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA): . Paper presented at MeMeA 2021, The 16th edition of IEEE International Symposium on Medical Measurements and Applications, [DIGITAL] Neuchâtel, Switzerland, June 25-28, 2021.. IEEE
2021 (English). In: 2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA), IEEE, 2021. Conference paper, Published paper (Refereed).
Abstract [en]

Deep Convolutional Neural Networks (CNN) enable advanced methods to predict skin cancer classes through the automatic analysis of digital dermoscopic images. However, the limited availability of large datasets often leaves such models with low prediction accuracy and poor generalization ability, which significantly influences clinical decisions. This paper proposes an original ensembling of multiple CNNs as feature extractors able to detect and measure the atypical criteria of skin lesions according to the well-known 7-Point Checklist diagnostic method. The experimental results show that the Artificial Intelligence-based model can suitably manage the classification uncertainty of the single CNNs and finally distinguish melanomas from benign nevi. Diagnostic performance is promising in terms of sensitivity and specificity for a decision-support system used by a dermatologist with little experience during clinical practice.
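Ensembling the outputs of several CNNs is commonly done by soft voting, i.e. averaging their class-probability vectors; the abstract does not state the exact fusion scheme, so the sketch below is a generic illustration with hypothetical model outputs:

```python
def soft_vote(prob_vectors):
    """Soft voting: average the class-probability vectors of several CNNs and
    pick the class with the highest mean probability. The model outputs used
    below are hypothetical, not taken from the paper."""
    n_models = len(prob_vectors)
    n_classes = len(prob_vectors[0])
    mean = [sum(p[c] for p in prob_vectors) / n_models for c in range(n_classes)]
    winner = max(range(n_classes), key=lambda c: mean[c])
    return mean, winner

# Two of three models lean towards class 0 (nevus), one towards class 1 (melanoma):
probs = [[0.7, 0.3], [0.6, 0.4], [0.3, 0.7]]
mean, winner = soft_vote(probs)   # mean ≈ [0.533, 0.467], so winner == 0
```

Averaging probabilities rather than hard labels lets a confident minority model temper an uncertain majority, which is one way an ensemble can manage the classification uncertainty of its individual members.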

Place, publisher, year, edition, pages
IEEE, 2021
National Category
Medical Engineering
Identifiers
urn:nbn:se:miun:diva-41757 (URN), 10.1109/MeMeA52024.2021.9478760 (DOI), 000847048100106 (), 2-s2.0-85114127281 (Scopus ID), 978-1-6654-1914-7 (ISBN)
Conference
MeMeA 2021, The 16th edition of IEEE International Symposium on Medical Measurements and Applications, [DIGITAL] Neuchâtel, Switzerland, June 25-28, 2021.
Available from: 2021-03-29. Created: 2021-03-29. Last updated: 2025-09-25. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-1840-791X
