Article (Open Access)

A comprehensive review of deep learning and machine learning techniques for early-stage skin cancer detection: Challenges and research gaps

  • Ali H. Alzamili and Nur Intan Raihana Ruhaiyem
Published/Copyright: 20 February 2025

Abstract

Skin cancer can be treated effectively when detected early, but its diagnosis is complicated by the subtle appearance of early lesions and the need for precise diagnostic techniques. The goal of this literature review is to evaluate the progressive enhancements of deep learning (DL) and machine learning (ML) methods for early-stage skin cancer identification, both in terms of accuracy and of usability for real-world clinical applications. Considering support vector machines, convolutional neural networks, and ensemble methods, we assess the performance of such algorithms in the classification and segmentation of skin lesions across various datasets. The challenges outlined in the review include, first, sparsity of data; second, variation in the appearance of lesions; and third, class imbalance within datasets. Furthermore, issues that are still open to investigation are also presented, including the limited interpretability of the developed DL/ML models and the variability of the evaluation criteria used across investigations. We then propose possible approaches to these issues, such as data augmentation, multimodal learning, and the inclusion of explainable artificial intelligence techniques. The strengths of the present study consist of a comprehensive review of the limitations of contemporary methodologies and recommendations for future research on DL/ML-based systems for the early diagnosis of skin cancer. This research aims to highlight the best techniques and identify areas for future improvement. The study also highlights the key challenges of evaluating skin lesion segmentation and classification techniques, for instance, small sample sizes, selective or inconsistent image acquisition, and racial bias in datasets.

1 Introduction

In the recent past, machine learning (ML) algorithms in computer vision have advanced the development of computer-aided diagnosis and early detection systems for dangerous cancerous skin diseases [1]. Skin cancer has traditionally been diagnosed and detected through human examination, typically a visual inspection of the affected skin. These approaches of visual inspection and screening of lesion images by dermatologists are tedious and time-consuming, involve complex methodologies, and, most importantly, their results depend on the observational ability of the dermatologist; the conversion of the images into binary form can also introduce errors [2]. This is mainly attributed to the fact that skin lesion images can be highly detailed. Before the analysis, interpretation, and understanding of skin lesion images, the lesion pixels must be uniquely defined. This may be difficult for the following reasons. First, skin lesion images contain body hair, blood vessels, oils, bubbles, and other noise that affects the segmentation process. Second, low contrast between the surrounding skin and the lesion area complicates segmentation of the lesion. Third, skin lesions can be median, macular, or papular, and may be symmetrical, oval, or multiple, and the surface may have distinct colours, so that variation in shape, size, and colour limits the accuracy the methods can achieve [3].

Manual screening is therefore a problem for the dermatologist, which has made automatic computerized diagnostic systems highly valuable in the analysis of skin lesions; such systems assist the dermatologist in quick decision making and help clinicians detect skin cancer cases faster and more accurately. Melanoma is a very lethal and deadly cancer, yet it is comparatively easy to eliminate if diagnosed early enough. In the recent past, there have been tremendous advances in computerized methods for the diagnosis of skin cancer. The traditional method for analysing architectural features of medical images is a sequence of low-level, pixel-wise operations [4].

The pre-processing stage includes operations such as contrast and intensity adjustment, binarization, morphological operations, colour grey-scaling, and data augmentation. In this stage, noise and other interferences are removed from the images and the sizes are standardized to avoid complex computation [5]. After pre-processing, image segmentation is carried out with the intent of separating regions of interest by excluding the healthy tissue surrounding diseased regions [5]. This step is useful for distinguishing normal tissue when extracting features from the lesion for diagnosis. Skin lesion analysis has used several traditional segmentation methodologies, including thresholding techniques, clustering, edge- and region-based techniques, and the more recent approach known as ABCDE. However, such techniques tend to fail at correctly segmenting the melanoma region because of the laborious work of isolating the skin lesion and its complicated texture. More recently, sophisticated segmentation techniques such as fuzzy logic, genetic algorithms, artificial neural networks (ANNs), and advanced deep learning (DL) have been used to segment skin lesions with higher accuracy and reliability [6]. The lesion characteristics are obtained by applying feature extraction techniques, which are crucial for identifying the kind of skin cancer present in images. Typical feature extraction techniques include template matching, image transformations, graph-based methods, projection histograms, contours, Fourier descriptors, gradient features, and Gabor features [7]. These extracted features are then fed to classification techniques, which determine the type of skin tumour present in the lesion. Past approaches to skin lesion image classification involve decision trees (DTs), fuzzy classifiers, support vector machines (SVMs), and ANNs [8].
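A minimal numpy-only sketch of the pre-processing operations listed above (grey-scaling, contrast normalization, size standardization, and binarization); the function name, parameters, and threshold choice are illustrative assumptions, and a real pipeline would typically rely on a library such as OpenCV:

```python
import numpy as np

def preprocess(rgb, size=64, threshold=None):
    """Hypothetical pre-processing sketch: grey-scaling, contrast
    normalization, resizing, and binarization of a skin lesion image."""
    # Colour grey-scaling: luminance-weighted channel average.
    grey = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    # Contrast/intensity normalization to [0, 1].
    grey = (grey - grey.min()) / (grey.max() - grey.min() + 1e-8)
    # Standardize size (nearest-neighbour resampling) so downstream
    # computation sees fixed-size inputs.
    rows = np.linspace(0, grey.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, grey.shape[1] - 1, size).astype(int)
    small = grey[np.ix_(rows, cols)]
    # Binarization: global threshold (mean intensity if none given).
    t = small.mean() if threshold is None else threshold
    return (small > t).astype(np.uint8)
```

The output is a fixed-size binary map, the form many of the segmentation methods discussed later expect as input.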
In the recent past, DL approaches have achieved higher accuracy scores in skin lesion analysis, even diagnosing skin diseases from photographs. Their success is attributed to their capability to learn intricate, hierarchical features from image datasets. For example, the deep convolutional neural network, a widely used DL technique, is particularly good at the general and highly variable tasks involved in fine-grained object recognition, which is one of the significant characteristics of skin lesion images. This pre-eminent method can capture the most salient features of whole skin lesion images more accurately than manually crafted features. Convolutional neural networks (CNNs) are basically formed of varied layers that take input data in the form of images and produce outputs such as disease patterns [9].
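As a sketch of the layered structure just described, a toy CNN forward pass (convolution, ReLU, max-pooling, fully connected layer, softmax) can be written in plain numpy; the layer sizes and random weights here are arbitrary placeholders, not a model from the reviewed literature:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image with kernel k."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max-pooling (spatial down-sampling)."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def tiny_cnn(image, kernel, weights, bias):
    """Conv -> ReLU -> max-pool -> dense -> softmax class probabilities."""
    feat = np.maximum(conv2d(image, kernel), 0.0)  # convolution + ReLU
    feat = max_pool(feat).ravel()                  # pooled feature vector
    logits = feat @ weights + bias                 # fully connected layer
    e = np.exp(logits - logits.max())              # numerically stable softmax
    return e / e.sum()
```

An 8 x 8 input with a 3 x 3 kernel yields a 6 x 6 feature map, pooled to 3 x 3 (nine features), which the dense layer maps to, say, two class probabilities (benign vs malignant); real CNNs simply stack many such layers with learned weights.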

In this study, we provide a review of various types of methods applied in melanoma skin lesion image analysis to critically appraise and compile the present status of DL and ML approaches for the screening of skin cancer at its initial stage. This study aims at assessing the different approaches that have been used, the issues that have been encountered in their application, and the existing research gaps. We also present valuable suggestions for further developments in the diagnosis of skin cancer, which may improve specificity, sensitivity, and practical utility when applied early in the development of skin cancer, promoting better outcomes among patients. The main contributions of this review are as follows:

  • Our review focuses specifically on DL and ML for early-stage skin cancer detection. In contrast to prior reviews that consider a broad range of issues or various cancer types at large, with no specific reference to early diagnosis, our emphasis on early detection highlights the role of early diagnosis in achieving better outcomes for patients.

  • Our work presents an objective analysis of several methods, thereby helping compare the merits and demerits of the existing state of studies. This broad approach is crucial for identifying research gaps and for guiding future research.

  • In particular, the benefits of employing heuristic-free, white-box, and hybrid explainable artificial intelligence (XAI) methods in clinical environments are described in relation to their usefulness in making models more understandable to clinicians. This aspect has not been well explored in the literature, and our review will be useful for practicing clinicians who want to incorporate AI tools into practice.

  • The study highlights the key challenges of evaluating skin lesion segmentation and classification techniques, such as small sample sizes, selective or inconsistent image acquisition, and racial bias in datasets.

  • Our study conducted a comprehensive analysis of various approaches used for skin lesion analysis in melanoma detection. We performed a comparative analysis of both traditional and DL-based techniques.

  • A comprehensive review of DL and ML techniques for early-stage skin cancer detection is provided, with explicit attention to challenges and research gaps.

This article is organized as follows: Section 2 presents the human skin details. The pre-processing methods for skin cancer are presented in Section 3. Section 4 provides the details of segmentation methods for skin cancer. The feature extraction techniques used for skin detection are presented in Section 5. Section 6 provides the details of skin cancer classification. The types of computer-aided diagnostic (CAD) systems used for skin cancer detection are presented in Section 7. Section 8 presents the role of AI in skin cancer identification and the types of datasets used. The research challenges for skin cancer detection are presented in Section 9. Section 10 presents the future directions and suggested methods for skin cancer diagnosis. Finally, the conclusion of the review study and its findings are presented in Section 11.

2 The human skin

There are two major layers that make up human skin: the dermis and the epidermis. The dermis is composed of elastic fibres as well as collagen, which is a kind of protein. There are two sub-layers in the dermis: the papillary dermis (thin layer) and the reticular dermis (thick layer). The papillary dermis acts as an “adhesive” holding the epidermis and dermis together, whereas the reticular dermis contains lymph and blood vessels, hair follicles, and sweat glands. In addition, this layer nourishes the epidermis, and it is also crucial to healing, thermoregulation, and the sense of touch.

There are several stratified layers that make up the second layer, the epidermis. As a result of its layered appearance and protective function, it is also called stratified squamous epithelium. The epidermis consists of four distinct layers, or strata, arranged in a specific order according to their morphology: the basal, spinosum, granulosum, and corneum layers [3]. The epidermis contains four kinds of cells: melanocytes, Merkel cells, keratinocytes, and Langerhans’ cells. Of all these cells, keratinocytes make up the largest part (95%) of the epidermis and are chiefly responsible for constant skin renewal [3]. Owing to their ability to divide and differentiate, these cells traverse, over about 30 days, from the basal layer (also known as the stratum basale) to the horny layer (stratum corneum). During this period, the basal layer divides, producing keratinocytes, which then traverse the successive layers, transforming in their biochemistry and morphology. Consequently, corneocytes, the cells that make up the outermost layer of the epidermis, are flattened, nucleus-free, and keratin-filled. The last step in the differentiation process is desquamation, which occurs when the corneocytes lose their cohesiveness and detach from the surface. The skin’s continual regeneration happens through this process. Merkel cells, mechanosensory receptors that likely originate from keratinocytes, respond to touch by forming tight connections with sensory nerve terminals [3]. Finally, Langerhans’ cells are dendritic cells that migrate to local lymph nodes upon detecting foreign entities (antigens) that have penetrated the epidermis.

Among all the different kinds of skin cells, we are interested in the melanocytes. Melanocytes are dendritic cells located in the basal layer of the epidermis [3]. In contrast to Langerhans’ cells, melanocytes produce packages of melanin pigment, which are then distributed to the surrounding keratinocytes through the dendrites. The function of melanin goes beyond the protection of the subcutaneous tissue from damage caused by ultraviolet radiation; it also includes the pigmentation of skin, eyes, and hair. When ultraviolet exposure increases, more melanin is produced by the melanocytes, which is why the skin tans when exposed to sun.

However, it is not the aesthetic outcome of melanocyte activity that is the centre of focus here, but their potential for malignant transformation. Although cancer can occur in any cell within the body, certain cells, including those of the skin, are more susceptible. A large number of skin cancers develop from squamous and non-pigmented basal keratinocytes; the outcomes of such transformations are squamous and basal cell carcinoma, respectively [1,4]. Nevertheless, when melanocytes undergo malignant transformation, the outcome is a rarer but more harmful and aggressive cancer called malignant melanoma. In the subsequent sections, the treatment and epidemiology of this kind of cancer are presented, alongside certain lesions that are regarded as its signs.

2.1 Pigmented skin lesions

Pigmented skin lesions (PSLs), or melanocytic nevi, emerge on the skin’s surface where melanocytes grow in clusters together with normal cells [1]; they are usually regarded as a normal part of the skin. The following are the commonly occurring benign PSLs:

  • Freckle or ephelis: appears as a macular lesion that is pale brown in colour, less than 3 mm in diameter, and characterized by a lateral margin that is not well defined. Freckles appear and darken on parts of the skin that have been exposed to UV light [5].

  • Typically, a common nevus is a flat melanocytic mole.

  • A birthmark, or congenital nevus, is a mole that develops on a person at the time of birth.

  • Atypical or dysplastic nevus: this type of nevus is frequent and is characterized by a scale-like texture, uneven pigmentation, rough edges, and unclear borders [1]. People who have a high number of atypical nevi, or a hereditary history of melanoma, are considered to have atypical mole syndrome, called “familial atypical multiple mole melanoma” or dysplastic nevus syndrome. In comparison to people with fewer nevi, those with more nevi are six to ten times more likely to acquire melanoma [5].

  • A blue nevus is a kind of melanocytic nevus that originates in the dermis rather than the dermoepidermal junction and is composed of a group of benign, aberrant pigment-producing melanocytes [5]. The phenomenon of light reflecting off deep dermal melanin gives the skin its characteristic blue or blue-black hue.

  • The uncommon benign nevus known as pigmented Spitz nevus can be difficult to distinguish from melanoma [5]. It typically manifests in children.

Among the aforementioned lesions, the recognized precursors of malignant melanoma are the acquired and inherited dysplastic nevi [6].

2.2 Types of skin cancers

The detection of skin lesions is made difficult by the existence of a wide range of malignant and benign lesions. There are three prominent types of skin cell abnormality associated with skin cancer: squamous cell carcinoma (SCC), basal cell carcinoma, and melanoma [7]. In addition to these three, three more abnormal cell types have been highlighted by the Skin Cancer Foundation (SCF) [2]: Merkel cell carcinoma, actinic keratosis, and atypical moles, which are rare. The six aforementioned kinds of skin lesions are illustrated. It has also been found that, after melanoma, the most harmful of these are the atypical moles.

The Skin Cancer Organization has outlined the key distinctions between the aberrant tissues:

  • A skin growth that is scaly and crusty is known as actinic keratosis or solar keratosis. This anomaly is considered pre-cancerous since, if left untreated, it has the potential to develop into skin cancer.

  • Although rare, Merkel cell carcinoma is highly aggressive when it manifests on the skin. It carries a very significant risk of systemic dissemination and recurrence. Melanoma, by comparison, is forty times more common.

  • The most prevalent type of skin cancer is basal cell carcinoma. Red areas, open sores, pink growths, and glossy lumps or scars are the hallmarks of this condition. However, the chances of this cancer spreading to other areas of the body are quite low.

  • Warts, scaly red patches, raised growths with a central depression, and open sores are all symptoms of SCC. It is the second most prevalent skin cancer.

  • Atypical moles are typically benign moles that look abnormal; they are also referred to as dysplastic nevi. They resemble melanoma, and individuals with such moles have a higher risk of developing melanoma at a mole or at any other part of their body; here, the risk of developing melanoma can be 10 times higher or more.

  • Melanoma has been found to be the most harmful kind of skin cancer. It emerges in the form of growths that develop when skin cells suffer unrepaired DNA damage, normally as a result of UV radiation from sunshine or tanning; the resulting mutations cause the rapid multiplication of skin cells and the formation of malignant tumours. Melanomas come in different colours, such as blue, red, purple, pink, or white, but most appear brown or black. Early detection and treatment of melanoma can prevent its advancement; otherwise, it can progress and spread to other parts of the body, making its treatment more complex.

Damage to skin tissue by UV radiation is the leading cause of skin cancer, according to researchers. Sun damage, specifically UV exposure, is the primary cause of these malignancies [8].

2.3 Medical diagnosis

Dermatologists have highlighted signs for every type of tumour to facilitate the detection of melanoma. These signs have been arrived at through comprehensive analysis and comparison of the various signs of each tumour. Basically, a dermatologist detects melanoma based on two scoring systems that employ visual characteristics. The two scoring systems that have gained recognition among medical specialists are the ABCDE rule [9] and the Glasgow seven-point checklist/seven-point checklist [10]. Moreover, these scoring systems are essential constituents of the principal comparison methods used by experts in image processing. Many more methods have been developed and studied based on these systems; for example, the ABCD rule has been applied by Bareiro Paniagua et al. [11] and She et al. [12] and the seven-point checklist by Argenziano et al. [13]. Melanoma detection also makes use of features that are investigated independently in these systems, such as the pigment network [14] and vascular anatomy [15].

2.3.1 ABCD rule

When an evolving characteristic is present, the ABCD rule can alternatively be referred to as the ABCDE rule, according to the SCF. What follows is an explanation of the acronym:

  • Asymmetry (A): when a single line, either horizontal or vertical, divides a pigmented mole into two halves that do not match, the pigment is asymmetrical.

  • Border (B): If the pigment’s border is not smooth but rather rough, it might be a sign of malignancy.

  • Colour (C): brown is the most common hue for benign pigments. Melanoma may appear in different colours, including blue, red, black, or white.

  • Diameter (D): As compared with malignant pigments, the diameter of benign pigments is often smaller. Melanomas often have a diameter of more than 6 mm, but they can be smaller.

  • Evolving (E): a pigmented lesion that changes over time in colour, shape, size, and/or elevation is a warning sign of melanoma; benign pigments tend to remain stable.
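Two of the ABCD cues above lend themselves to simple quantification from a binary lesion mask. The sketch below is a hypothetical illustration: the measures, the `mm_per_pixel` calibration, and the image-centred mirror used for asymmetry are all simplifications (clinical ABCD scoring evaluates asymmetry about the lesion's own principal axes):

```python
import numpy as np

def abcd_features(mask, mm_per_pixel=0.1):
    """Toy A and D measures from a binary lesion mask (illustrative only)."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    # Asymmetry (A): non-overlap between the mask and its left-right
    # mirror, normalized by lesion area (0 = perfectly symmetric about
    # the image's vertical midline; assumes a roughly centred lesion).
    asym = np.logical_xor(mask, mask[:, ::-1]).sum() / (2.0 * area)
    # Diameter (D): bounding-box diagonal converted to millimetres;
    # the rule's 6 mm cut-off becomes a boolean flag.
    diag = np.hypot(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
    diameter_mm = diag * mm_per_pixel
    return {"asymmetry": asym, "diameter_mm": diameter_mm,
            "d_flag": diameter_mm > 6.0}
```

Border irregularity (B) and colour variegation (C) need the grey-level or RGB image as well, but follow the same pattern of reducing a visual criterion to a number a classifier can consume.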

2.3.2 Seven-point checklist

Another scoring method used in this context is the Glasgow seven-point checklist system [16] and the seven-point checklist [17], described by Capdehourat et al. [18] and Korotkov et al. [19], respectively. The scoring system was developed based on seven benchmarks categorized into two broad sets. The major benchmarks of the Glasgow seven-point checklist are colour, size, and shape, whereas the minor benchmarks include crusting, inflammation, and sensory changes. The seven-point checklist is more thorough and operates similarly; nevertheless, there are a few key differences, as follows:

  • Atypical pigment network: thick, uneven line segments in the tumour; the lines may be grey, brown, or black in colour.

  • Blue-whitish veil: uneven, confluent, greyish-blue to whitish-blue diffuse pigmentation that may be associated with streaks in the pigment network, globules or spots, or transformation.

  • Atypical vascular pattern: dotted and/or irregular red vessels that blend in with the surrounding regression area.

  • Irregular streaks: centrifugal streams of abnormal extensions or pseudopods near the edge of the lesion.

  • Irregular pigmentation: areas with unevenly distributed and/or shaped pigmentation, such as brown, black, or grey patches. Irregular globules or dots: structures of varying sizes distributed throughout the lesion, manifesting as black, brown, or grey globules or spots.

  • Regression structures: white scar-like areas and/or blue pepper-like areas (grey-blue areas and/or many blue-grey dots).

3 The pre-processing stage: image enhancement

Improving the quality and legibility of images is the primary goal of image enhancement. In the literature, this procedure is sometimes called image pre-processing. This step is applied to images of skin cancer to eliminate artefacts such as hair as well as artificial artefacts. Through image enhancement, contrast is improved so that details can be seen clearly during exploration. Several techniques and algorithms have thus been introduced and explored on different skin lesion databases with the aim of compensating for inadequacies of image acquisition while eliminating a wide range of artefacts. The artefacts of dermoscopic images come in the form of black frames, unequal illumination, dermoscopic gel, air bubbles, rulers, and ink markings. Moreover, natural cutaneous characteristics such as hairs, blood vessels, and skin lines can affect border detection [20]. Korotkov [21] divided this stage into two classes: artefact rejection and image enhancement. Artefact rejection covers ink markings, air bubbles, hair, and specular reflections, whereas image enhancement includes illumination correction, edge and contrast enhancement, and colour correction and calibration. Celebi et al. [22] focused on the relevance of this step for border detection; in their work, they placed emphasis on Gaussian and median filters as the most relevant methods.
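The median filter emphasized by Celebi et al. can be sketched in a few lines of numpy; the edge-padded sliding-window implementation below is illustrative rather than optimized:

```python
import numpy as np

def median_filter(img, size=3):
    """Median smoothing: replace each pixel with the median of its
    size x size neighbourhood. Suppresses impulse noise (e.g. small
    specular glints) while preserving lesion borders better than
    linear averaging."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")          # replicate border pixels
    views = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(size) for j in range(size)]
    return np.median(np.stack(views), axis=0)  # per-pixel neighbourhood median
```

The nonlinearity is the point: a Gaussian filter would smear an isolated bright speck into its neighbours, whereas the median simply discards it.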

The artefact most commonly found in dermoscopic images is hair, which must be eliminated. For this reason, several algorithms and approaches have been introduced to remove hair that was not shaved prior to the image acquisition step. Most of these methods involve two major steps:

  • Hair detection: the various hairs present in the image are detected through detection algorithms. The majority of the approaches rely on segmentation, since hair is one of the most important components among the various kinds of noise.

  • Image restoration: during restoration, also referred to as the inpainting step, the space previously occupied by the eliminated hairs is filled with appropriate colour and intensity values. If the hair density is particularly high around the lesion’s borders, it can degrade image quality. Furthermore, in certain cases, the texture of the pigment can be affected by dense hair. Thus, it is important to remove the hair at the affected area so that diagnostic errors can be minimized.
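The two steps above can be sketched with a DullRazor-style black-hat detection followed by a crude inpainting. This is a simplified illustration under stated assumptions (square structuring element, fixed threshold, fill-with-closing restoration), not the published algorithm, which uses directional structuring elements and bilinear interpolation:

```python
import numpy as np

def grey_close(img, size=5):
    """Grey-level morphological closing (dilation then erosion) with a
    square structuring element, built from shifted-view stacks."""
    pad = size // 2
    def morph(a, op):
        p = np.pad(a, pad, mode="edge")
        views = [p[i:i + a.shape[0], j:j + a.shape[1]]
                 for i in range(size) for j in range(size)]
        return op(np.stack(views), axis=0)
    return morph(morph(img, np.max), np.min)

def remove_hair(grey, hair_thresh=30):
    """Step 1: detect dark, thin structures as the closing residue
    (a black-hat transform). Step 2: 'inpaint' detected pixels with
    the locally smoothed (closed) values."""
    closed = grey_close(grey.astype(float))
    hair_mask = (closed - grey) > hair_thresh      # dark hairs stand out
    restored = np.where(hair_mask, closed, grey)   # crude restoration
    return restored, hair_mask
```

Closing fills thin dark structures with surrounding intensities, so the difference image lights up exactly where hairs lie; PDE-based inpainting, discussed below, replaces the crude fill with a smoother reconstruction.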

A wide range of approaches can be explored and used for hair removal. The first of these, the DullRazor software [23], was introduced in 1997 and gained wide attention and use [11]. Using a morphological closing operation, DullRazor pinpoints the position of dark hair. After checking the shape of the hair pixels, it employs bilinear interpolation and the median filter to make the restored pixels as smooth as possible. The DullRazor software was further improved by Wighton et al. [24] in 2008 by adding an inpainting step. Beyond the interpolation used in DullRazor, more details were explored by the authors; one of these is the border direction, handled by using the Laplacian to control and measure smoothness. In the study by Kiani and Sharafat [25] in 2011, an improved version of DullRazor was proposed through the use of a wide range of methods: a Prewitt filter to facilitate border detection, the Radon transform to find the predominant hair direction, and a variety of masks to separate hair from other noise.

Similarly, Toossi et al. [26] proposed a morphological operator equipped with an adaptive Canny edge detector to facilitate hair detection, together with a multi-resolution coherence transform inpainting method for repair and seamless replacement of the locations of the removed hairs. Their method was tested on a database containing 50 images, achieving a diagnostic accuracy of 88.3% with a segmentation error of 9.9%. Nguyen et al. [27] used a universal matched filtering kernel as well as local entropy thresholding to obtain a binary hair mask; the authors then refined and verified the hair masks by combining Gaussian curve fitting with morphological thinning. Xie et al. [28] mainly focused on the repair of removed hair using PDE-based image inpainting. Their procedure has three main components: in the first stage, enhancement, morphological closing is used; in the second stage, hair segmentation, statistical thresholding is applied and hairs are extracted by elongating connected areas; in the third and final stage, PDE-based image inpainting restores the image. Eighty photos were used to test the method, 40 containing hairs and 40 without; their findings revealed a 5% error rate for the hairless group and an 18% error rate for the hairy photos. In a similar vein, Fiorese et al. [29] suggested PDE-based image inpainting to facilitate restoration, providing a top-hat operator for morphological improvement alongside the PDE-based inpainting and an Otsu threshold for hair detection; they reported a 15.6% error rate on 20 photos. Huang et al. [30] utilized multi-scale curvilinear matched filtering to identify hair and linear discriminant analysis to help with image restoration. To accomplish hair identification, Abbas et al. [31] suggested a matched filtering method that incorporates the first derivative-of-Gaussian algorithm. Although the approach achieved improved accuracy, its implementation is complicated owing to the multiple factors it employs, according to their results. A diagnostic accuracy rate of 93.3% was recorded when the method was applied to 100 dermoscopic images. In their subsequent studies [32,33], which required hair removal, the same method was used. For hair detection, a bank of directional filters was employed by Barata et al. [34], while inpainting was done using PDE-based interpolation.

Gómez et al. [35] presented an unsupervised approach based on independent histogram pursuit. To enhance particular image structures, their technique estimates a set of linear combinations of image bands. Their method produced higher-quality boundary identification, as shown by their experimental findings. Celebi et al. [36] attempted to improve contrast by suggesting a method for a given RGB image input; their goal in using histogram bi-modality was to make the lesion stand out more against the background. Madooei et al. [37] conducted research with an emphasis on improving images and removing artefacts. Their studies, based on the influence of light intensity on the edges, were conducted on a set of 120 images; when applied to improved border identification, the experiments yielded a sensitivity rate of 92% and a specificity rate of 88%. Recent work by Koehoorn et al. [38] suggests a new method that combines threshold-set decomposition with morphological analysis using multiscale skeletons for gap identification. For the detection of hair in dermoscopic images, a new approach was proposed by Mirzaalian et al. [39]. The proposed method makes use of quaternion tubularness measures [40] alongside a dual matched filter to suppress hair; for restoration, it applies the interpolation employed in the DullRazor software introduced by Lee et al. [23]. They tested the performance of their method using a database containing 94 synthetic images and 40 dermoscopic images. Their experimental results for hair segmentation showed an accuracy rate of 86% for the dermoscopic images and 85% for the synthetic images.

A wide range of techniques can be used for the removal of other artefacts such as blood vessels and capillaries. Using two colour parameter characteristics and one curvilinear feature, Huang et al. [41] extracted capillaries from skin lesions; an SVM classifier was utilized to train the system to identify the various capillaries. The approach was evaluated using a database of 49 photos, 28 depicting invisible capillaries and 21 depicting visible ones, and achieved strong scores in terms of accuracy (98.8%), sensitivity (90.5%), and specificity (89.3%). Argenziano et al. [13] detailed the numerous vascular structures and their association with the different types of cancers, including both melanocytic and non-melanocytic tumours. Fisher’s exact test and χ2 tests were among the statistical methods employed by the authors. Dot vessels exhibited a 90% positive prediction for melanocytic lesions when their approach was evaluated on a database of 531 photos. Abbas et al. [32] employed the Fourier transform to reduce specular reflection, while the reduction of dermoscopic gel or air bubbles was achieved by means of a median filter. In the work by Barata et al. [34], reflection was minimized using sub-band thresholding so that image quality could be enhanced for pigment network detection.

A number of techniques that have been employed for hair detection, along with the steps used to achieve inpainting and the results obtained with each technique, are summarized here. The results of the majority of the methods presented in Table 1 have been compared with those of the DullRazor software. Despite the substantial contributions made by scholars in the area of hair segmentation and removal, only a few researchers have focused on the analysis of tubular vessels, and the majority of the results in these publications are centred on visual analysis and comparison. It was also observed that most of the approaches proposed for skin image enhancement are based on thresholding. Finally, even though several methods have been developed, the use of small, private databases to test them makes it difficult to draw conclusions about their actual performance.

Table 1

Summary of several techniques used in the segmentation stage

Authors/Year Segmentation methods Dataset used Outcomes Remarks
Zortea et al. (2017) [43] Simple weighted thresholding 152 images The results suggest that the strategy can be used effectively to segment melanomas and atypical nevi in lesions with a highly heterogeneous background skin A good and robust segmentation technique for skin cancer; a new intensity map is used to distinguish the lesions from the background skin, a multiple-thresholding framework helps to balance the image classes, and the segmentation method is based on Otsu's functional
Dalila et al. (2017) [44] Ant colony optimization 172 dermoscopic images The results of the proposed segmentation method are promising; the best 12 features appear to be adequate to identify melanoma, and the ANN gives better results than the KNN algorithm Proposed an approach to detect malignant melanoma using ant colony optimization for automatic segmentation, computing texture features, relative colour, and first-order histogram features; 112 features are extracted and 12 are selected. The two ML classifiers used are a neural network (NN) and K-nearest neighbour (KNN)
Rehman et al. (2018) [45] Generalized Gaussian distribution 900 images The results obtained in this study achieved an accuracy of 98.32%, sensitivity of 98.15%, and specificity of 98.41% A CNN architecture for melanoma detection was used in this study
Yang et al. (2018) [46] Random forest and full CNNs 379 dermoscopic images The results show that the hybrid of random forest (RF) and FCN performs considerably better than RF alone; in particular, it can increase sensitivity by almost 20% To enhance the feature extraction process, this study proposed a strategy for segmenting pigmented lesions using a random forest and a fully convolutional network (FCN)
Moradi and Mahdavi-Amiri (2019) [47] Kernel sparse representation 1,479 images The proposed approach was tested for both segmentation and classification. The assessment results on both datasets show the approach to be competitive with accessible state-of-the-art strategies, with the advantage of not requiring any pre-processing An automatic approach for skin lesion segmentation and classification using dictionary learning and kernel sparse representation
Vasconcelos et al. (2019) [48] Morphological method and geodesic active contour 200 dermoscopic images The MGAC approach showed good results on all similarity measurements compared in the study, such as the Dice coefficient (92.09%), Jaccard index (86.16%), and Matthews correlation coefficient (87.52%), and also achieves good results in accuracy (94.59%), sensitivity (91.72%), F-measure (93.82%), and specificity (97.99%) The suggested solution is faster than strategies that use DL methods since it does not require any training step. Since the blue channel of the dermoscopic image contains many important subtle aspects of the lesion, the technique can effectively adapt to its contours by using it as if it were a colour channel
Tang et al. (2019) [49] Separable-Unet with stochastic weight averaging Three datasets The proposed approach achieved good accuracy on all three datasets This study, for the first time, frames the over-fitting problem of simple representations prevailing as one of falling into bad local optima, and presents the stochastic weight averaging strategy to solve it, which can further boost the model to state-of-the-art performance
Ibrahim et al. (2020) [50] Gaussian filter based on 2D wavelet and 2D inverse wavelet transformation methods 133 images The segmentation results achieved 97.75% accuracy for these two types of skin cancer lesions The aim is to test the accuracy of the proposed segmentation method
Liu et al. (2020) [51] Novel CNN method 593 images The presented CNN model achieved good performance compared with previous works, with an accuracy of 94.32%, Jaccard index of 79.46%, and sensitivity of 88.76% A multi-task learning system is presented for the skin lesion segmentation process to enhance the accuracy of skin lesion classification and identification
Abd et al. (2020) [52] New swarm-intelligence method 600 images The proposed method achieved average Jac values of (96.98, 94.12, 88.00, and 90.00) and JSI values of (94.55, 89.11, 76.27, and 79.88) Features optimized by an artificial bee colony segmentation method
Murugan et al. (2021) [53] Median filtering followed by mean shift segmentation: a median filter is first applied to the skin images, and the filtered images are then subjected to mean shift segmentation 1,000 images Following the segmentation process, the images undergo feature extraction using GLCM, moment invariants, and GLRLM features; random forests, probabilistic neural networks, and SVMs are utilized for classification. When identifying the extracted features, the study found that the hybrid SVM + RF classifier performed better than the others The system focuses on detecting skin cancer using skin colour alone, employing median filtering in conjunction with mean shift segmentation to prepare the skin images
Araújo et al. (2021) [54] The segmentation effectiveness is enhanced by combining a modified U-net network with post-processing techniques Two public datasets: PH2 and DermIS Melanoma lesion segmentation is much improved by combining the suggested modified U-net network with post-processing approaches The suggested approach shows great promise compared to previously published high-performance techniques
Reis et al. (2022) [55] The U-Net-based InSiNet employs a fast and accurate architecture, free of fully connected layers, to segment small medical datasets InSiNet is evaluated on three datasets from the International Skin Imaging Collaboration: ISIC 2018 (HAM10000 images), ISIC 2019 (images), and ISIC 2020 (images) With accuracy rates of 94.59% on the 2018 dataset, 91.89% on the 2019 dataset, and 90.54% on the 2020 dataset, InSiNet is the most effective approach; many other ML methods, including DenseNet-201, GoogleNet, EfficientNetB0, RBF-SVM, logistic regression, and random forest, are compared in terms of computation time and accuracy DL algorithms such as InSiNet are proposed as a reliable method for diagnosing skin lesions, complementing traditional methods
Houssein et al. (2022) [56] Multilevel thresholding image segmentation Over 12,500 images The Opposition-Based Golden Jackal Optimizer (IGJO) is a more effective variant of the GJO algorithm; with Otsu's approach as the objective function, IGJO addresses the multilevel thresholding problem Compared to other algorithms, the suggested IGJO method achieves better experimental results on segmentation metrics such as PSNR, SSIM, FSIM, and MSE, and performs particularly well on the segmentation problem
Ahammed et al. (2022) [57] The GrabCut method of automated segmentation is employed to isolate skin lesions Training, testing, and evaluation used two well-known datasets: “HAM10000 (Human-Against-Machine with 10,000 training images)” and the “International Skin Imaging Collaboration (ISIC) 2019” challenge dataset When compared to DT and KNN, SVM’s performance is marginally higher; the research compares its findings to those of cutting-edge methodologies to gauge effectiveness They review recent developments in artificial intelligence (AI) systems that use digital images to identify skin cancer, as well as some of the obstacles and potential solutions, all aiming to better assist dermatologists in their work
Kaur et al. (2022) [58] Building on the success of atrous convolutions in semantic segmentation, a new CNN architecture for automated lesion segmentation is provided; the ever-changing nature of lesions makes prediction difficult and otherwise necessitates laborious manual segmentation of lesion locations Three ISIC benchmark datasets: ISIC 2016, ISIC 2017, and ISIC 2018 The outcomes outperform other cutting-edge approaches and the top three finishers in the ISIC competition Essential components of the design include convolutional layers, batch normalization, a leaky ReLU layer, and fine-tuned hyperparameters; the model can extract lesions from the entire image in a single pass without pre-processing
Olayah et al. (2023) [59] Dermoscopic images are segmented to identify lesions and separate them from healthy skin using the geometric active contour method The ISIC 2019 collection contains 25,331 dermatoscopic images covering eight classes of melanocytic and non-melanocytic skin lesions A hybrid model of AlexNet, GoogLeNet, VGG16, and an ANN achieved an accuracy of 96.10%, AUC of 94.41%, sensitivity of 88.90%, specificity of 99.44%, and precision of 88.69% When identifying dermatoscopic images from the ISIC 2019 dataset, hybrid models that incorporate fused CNN features provide encouraging results
Ghosh et al. (2024) [60] Otsu’s thresholding 3,000 images Preliminary data indicate that the hybrid model is superior to the individual models in extracting and classifying features The research focused on using DL and AI to accurately detect and classify skin cancer in its early stages
Himel et al. (2024) [61] Segment Anything Model A total of 10,015 skin lesion images with detailed annotations form the HAM10000 dataset used in the study For skin cancer categorization, the ViT, especially the ViT patch-32 variant developed by Google, reaches an impressive accuracy of 96.15%; for accurate segmentation with high IOU and Dice coefficient, the Segment Anything Model proves highly effective According to the research, doctors may find the ViT patch-32 variant developed by Google to be a valuable tool for diagnosing skin cancer
Hu et al. (2024) [62] The proposed segmentation method is LeaNet, a novel U-shaped network for high-performance yet lightweight skin cancer image segmentation The model is evaluated on the ISIC2017 and ISIC2018 datasets In terms of ACC, SEN, and SPEC, LeaNet outperforms large models such as ResUNet by 1.09, 2.58, and 1.6%, respectively; these improvements are achieved while the model’s computational complexity and number of parameters are reduced by 1,182× and 570×, respectively LeaNet aims to deliver lightweight yet high-performance skin cancer image segmentation, and compares favourably to current large and lightweight models on segmentation measures

4 Segmentation

In terms of classification and diagnosis, it is important for the border of a skin lesion to be accurately detected. A wide range of algorithms and techniques have been developed and applied to a variety of databases in the field of image processing. Border detection is not as easy as one may think, as it is accompanied by many complexities and limitations [21]. There are two key problems associated with border detection. The first is the problem of the ground truth provided by dermatologists, which computers find challenging to distinguish and humans find difficult to replicate; in such situations, the human eye is unable to detect or adequately investigate blur or low contrast [42]. The second issue, as detailed in the work by Celebi et al. [22], pertains to the manual or automated segmentation of the morphological structure of the lesion. This is especially the case when the lesion’s border is indistinct or when there is little contrast between the tumour and normal skin. Selecting the most appropriate technique for detecting a lesion’s border is made difficult by the numerous ways in which lesions develop and appear in dermoscopic images [22].

Thus, the discussion in the current subsection is organized around three main approaches generally used in image segmentation, especially for skin lesion images. The first category comprises techniques that make use of total variation segmentation; the second consists of techniques that employ multi-resolution analysis; and the last consists of methods based on thresholding. The discussion also includes approaches that do not fall into any of the aforementioned categories. More recently, border lesion detection was reviewed by Celebi et al. [63], who categorized the methods into 12 classes including active contours, clustering, and histogram thresholding.

4.1 Total variation segmentation

Segmentation of skin cancer has been approached with a great deal of creativity and innovation, as seen in the image processing literature. Abbas et al. [31] carried out segmentation using total variation regularization, employing a variant of the region-based active contours first proposed by Lankton and Tannenbaum [64]. The underlying idea of their approach is comparable to the Chan and Vese model [65] and its extension [66] suggested for solving the Mumford and Shah functional [67]. Experimental findings demonstrated a true detection rate (TDR) of over 90% and a false positive rate (FPR) of 10% when the method’s performance was evaluated on a database of 320 images. Similarly, Li et al. [68] adopted a different extension of the Chan and Vese model in their study, and Safi et al. [69] followed suit. The latter authors employed an SVM classifier for classification and the ABCD rule for feature extraction. The method’s efficacy was assessed using a database of 4,240 benign moles and 232 malignant ones; with 10-fold cross-validation, a TDR of over 98% was attained in all 10 folds tested. Recently, Kang and March [70] introduced another extended version of the model. In 2009, Silveira et al. [71] compared six segmentation techniques: fuzzy-based split-and-merge, the level set of the Chan and Vese model (C-LS), adaptive thresholding (AT), adaptive snake (AS), gradient vector flow, and expectation-maximization level set. In their comparison, the C-LS method yielded the best FPR of 2.55%, but its TDR of 83.39% was lower than that achieved by the AS (95.47%). The Chan and Vese model was further extended in 2015 for application to dermoscopic images [72].
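To illustrate the piecewise-constant idea underlying the Chan and Vese model, the following numpy sketch alternates between updating the two region means and reassigning each pixel to the nearer mean. The curvature (length) penalty of the full model is deliberately omitted, so this is a simplified teaching sketch rather than any of the cited implementations.

```python
import numpy as np

def two_phase_segment(img, n_iter=20):
    """Simplified two-phase piecewise-constant segmentation in the spirit
    of the Chan-Vese model: alternately update the region means c1, c2
    and reassign each pixel to the nearer mean. The length/regularization
    term of the full model is omitted for clarity."""
    mask = img > img.mean()                      # crude initial level set
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0        # mean inside
        c2 = img[~mask].mean() if (~mask).any() else 0.0    # mean outside
        new_mask = (img - c1) ** 2 < (img - c2) ** 2        # data fidelity
        if np.array_equal(new_mask, mask):
            break                                # converged
        mask = new_mask
    return mask
```

On a lesion image, the returned mask separates the darker lesion region from the surrounding skin when the two regions have distinct mean intensities.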

4.2 Multi-resolution analysis

In the same vein, lesions on pigmented skin can be segmented using multi-resolution analysis. Castillejos et al. [73] performed segmentation by combining the fuzzy K-means clustering algorithm with the wavelet transform, the fuzzy C-means algorithm, and a cluster pre-selection algorithm [74,75,76]. In their work, all colour channels were used in segmenting the images. The diagnostic accuracy of the method was evaluated with the AUC measure on a database of 50 images, and the Daubechies wavelet produced the best AUC result of over 0.96 for the three combinations. Similarly, wavelet decomposition banks [77] were employed by Ma et al. [75,76] to distinguish between non-melanoma and melanoma cases, with classification performed by an ANN classifier; the system was evaluated on a database of 134 skin lesion images, of which 62 are benign and 72 are melanoma lesions. Wavelet [78] and curvelet transforms [79] have also been compared on the basis of their efficiency in segmenting and identifying melanoma. That evaluation used a two-layer back-propagation neural network classifier applied to a database of 448 digital images of skin lesions; the curvelet outperformed the wavelet transform, with an accuracy rate of 86.57% for the curvelet transform against 58.44% for the wavelet transform. To facilitate the detection of skin lesion borders in under 20 iterations, gradient vector flow was proposed by Erkol et al. [80]. The database used to evaluate the method contains 100 dermoscopic images, of which 70 are benign and 30 are malignant lesions, and a relatively low error rate of 13.77% was obtained.
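The wavelet decomposition used in these multi-resolution pipelines can be illustrated with one level of the Haar transform. The numpy sketch below uses an averaging/differencing normalization chosen for readability rather than to match any cited implementation; it splits an image into approximation and detail bands whose statistics can then serve as features.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform, returning the
    approximation band (LL) and the three detail bands (LH, HL, HH).
    The image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH
```

Applying the transform recursively to the LL band yields the multi-level decomposition banks referenced above.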

4.3 Thresholding approaches

Thresholding approaches are the simplest segmentation methods. Basically, they convert the grey-scale image into a binary image by separating it at certain intensity limits. When these approaches are used, combinations with other methods, such as morphological operators, are commonplace. One of the most well-known methods combines thresholding with morphological operators, and it has been used for border detection in several imaging databases [81,82,83,84,85,86]. Skin cancer images were segmented using a morphological segmentation approach by Ganster et al. [87]. For segmentation, they employed a grey-scale morphology derived from a hybrid of two algorithms: global thresholding and dynamic thresholding. After applying the blue channel of the RGB colour space and the CIE-Lab colour space to a database containing 4,000 lesion images, 159 of those images were discarded because of segmentation failure. The segmentation accuracy rate for skin lesion images recorded by their method is about 96%. In an early study by Schmid [88], the detection of dermoscopic lesions was done using morphological flooding and anisotropic diffusion. Compared with the other approaches, the findings obtained using AT in the study by Silveira et al. [71] were more promising with regard to FPR and TDR: their approach outperformed the gradient vector flow technique for detecting both benign and malignant melanoma, with a lower false detection rate and a higher true detection rate, although it remained virtually identical to the outputs of the other approaches. Using 90 dermoscopic images – 67 of which were benign and 23 malignant – Emre Celebi et al. [86] demonstrated an automated method that combined thresholding with a Markov random field and assessed its performance in 2013. When the suggested strategy was compared with other methods, it yielded an exclusive-OR error of 9.16 ± 5.21%.
Another widely used method is Otsu thresholding segmentation, applied by Schmid [88], which several researchers have used to detect borders automatically in dermoscopic images [89,90,91,92,93]. Normally, when this method is used for segmentation, it is combined with other methods. For example, a combined method was introduced by Abbas et al. [92], in which a morphological reconstruction-based algorithm and the Otsu thresholding algorithm were merged. The algorithm yielded a TDR of 92.10% and an FPR of 6.41% on a database of 100 dermoscopic images.
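Otsu's method itself is compact enough to sketch directly. The numpy version below is a generic illustration, not the exact variant of any cited study: it picks the threshold that maximizes the between-class variance of the intensity histogram.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the grey-level threshold that maximizes the
    between-class variance of the foreground/background split."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                  # class-0 (background) probability
    m = np.cumsum(p * centers)         # cumulative mean
    mt = m[-1]                         # global mean
    # between-class variance; guard against empty classes
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]
```

Pixels above the returned threshold are labelled as one class and the rest as the other; morphological post-processing then cleans up the binary mask, as in the combined methods above.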

4.4 Other segmentation approaches

The genetic algorithm developed by Deb et al. [94] was applied by Xie and Bovik [95], who combined two methods to produce an algorithm that can assist in segmenting dermoscopic images. The watershed technique [96] was employed by Wang et al. [97] for lesion segmentation; a database containing 100 skin lesions was used to evaluate the algorithm’s performance, and a relatively low error rate of 15.98% was achieved. Zhou et al. [98] used a combined method for skin lesion segmentation consisting of an anisotropic mean shift algorithm together with fuzzy C-means, the former being based on a variant of fuzzy C-means. Gap-sensitive segmentation was used by Sobiecki et al. [99] on digital images of skin cancer. In the study by Glaister et al. [100], the TDLS algorithm was used together with the TD metric for the extraction of textural features and the computation of textural dissimilarity, respectively; the performance evaluation involved 126 images captured with a camera, and a detection accuracy rate of 98.3% and a specificity rate of 91.2% were achieved. Zhou et al. [101] proposed an unsupervised segmentation algorithm that uses k-means clustering under spatial constraints. Automatic melanoma segmentation was achieved by Qi et al. [102], who employed a fully convolutional deep neural network; the model was trained on 2,000 images and tested on 600 images. Nevertheless, a visual illustration given by the authors shows that their method performed poorly, because of the small dataset used to train the algorithm. A summary of the numerous techniques used for skin lesion images is presented in Table 1, which highlights the method used in each paper as well as the obtained results.
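The clustering step at the core of approaches such as the spatially constrained k-means of Zhou et al. [101] can be sketched as plain (unconstrained) k-means over pixel feature vectors; the spatial constraints of the published algorithm are omitted here, so this shows only the baseline such methods build on.

```python
import numpy as np

def kmeans_segment(pixels, k=2, n_iter=20, seed=0):
    """Plain k-means clustering of pixel feature vectors (e.g. RGB
    values). Returns one cluster label per pixel. Spatial constraints,
    as used in the published variant, are not modelled here."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(n_iter):
        # assign each pixel to the nearest centre
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels
```

Reshaping an H×W×3 image to (H·W, 3) before the call and the labels back to H×W afterwards yields a segmentation map.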

Most of the approaches designed for the segmentation process are based on traditional methodologies developed in the field of image processing. Even though an accuracy of 98.57% was claimed by Safi et al. [69], who tested their method on a database of 4,472 images, the parameters used by the authors were not reported in their work, even though the model is a parameter-based expansion of the Chan and Vese model. It is worth noting that Clawson et al. [103] found a significant disparity in sensitivity between the ground truths of two experts (50 and 95.2%), even though they only used a database of 30 images. Another finding is that the difference between benign and malignant tumours can impact a model’s accuracy, which highlights the critical need for research into targeted methods of skin lesion segmentation. Based on the literature, it is clear that re-implementing the different strategies is a challenge because most of the reviewed studies used private databases. In addition, it has been noted that the advancement of the segmentation stage for skin tumours has not received much attention for a while now. In the present work, an extended version of the Chan and Vese model is employed in developing a technique that can adequately and accurately segment skin cancer.

The problems with skin lesion segmentation and identification can be explained by the wide diversity of image types and sources. Skin lesion localization is a complex and challenging task because of the enormous variation in the appearance of human skin colour. The following are some of the difficulties that might arise from the complicated visual features of skin lesion images:

  1. Various shapes and sizes: The wide variety of skin diseases makes accurate skin cancer categorization extremely difficult and increases the complexity of these images. Variation in lesion size, location, and form is rather high. Therefore, image pre-processing is an essential first step in most image analysis pipelines for skin cancer images to enable accurate analysis.

  2. The presence of noise and artefacts: Artefacts and noise can impair the ability to differentiate skin cancer images. They are unwanted signals that were not originally part of the image; they can affect human interpretation and do, in fact, impact some computer-assisted skin cancer classification and segmentation approaches. Examples include hair remnants, bubbles, and blood vessels.

  3. Hazy and uneven borders: Some skin cancer images have hazy and uneven borders, which makes it difficult to apply many approaches for localizing lesion boundaries and refining contours. Obtaining the precise boundary of a skin cancer image for basic asymmetry estimation during the pre-processing step is typically problematic.

  4. Inadequate contrast: Additional difficulties might arise when there is insufficient contrast and differentiation from nearby tissues. Accurate cancer segmentation is difficult due to the lack of contrast and differentiation between the lesion area and the surrounding skin.

  5. Colour illumination: The lesion’s texture, reflections, and incident light beams can affect the brightness of colour dermoscopic images, leading to images with multiple resolutions.
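Low contrast and uneven illumination (points 4 and 5 above) are often mitigated with simple intensity normalization before segmentation. A minimal global histogram-equalization sketch, assuming intensities already scaled to [0, 1], looks like this:

```python
import numpy as np

def hist_equalize(img, nbins=256):
    """Global histogram equalization: map each intensity through the
    normalized cumulative histogram (CDF) so the output spreads over the
    full [0, 1] range, raising contrast in flat images."""
    hist, _ = np.histogram(img.ravel(), bins=nbins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                                   # normalize to [0, 1]
    idx = np.clip((img * nbins).astype(int), 0, nbins - 1)
    return cdf[idx]                                  # remap intensities
```

More elaborate remedies (adaptive equalization, colour-constancy correction) address the colour-illumination problem specifically, but follow the same normalize-before-segmenting pattern.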

5 Feature extraction

For physicians to achieve a correct diagnosis of a PSL, or to label it as “suspicious,” they rely mostly on so-called lesion characteristics. These characteristics are method dependent: a particular feature of the ABCD rule is the asymmetry of a lesion, whereas one characteristic of pattern analysis is the pigmented network. The classification of a lesion using computerized PSL analysis is aimed at extracting critical features from the images and representing them in a manner that is understandable to the computer, i.e., using image-processing features. The word “features” is used in this literature for the sake of clarity, while the term “feature descriptors” is used to denote image-processing features. As can be seen from the literature, PSL feature extraction has been the subject of several investigations; however, very few of them offer any kind of overview or critique of the feature descriptors utilized by CAD programs. In particular, Umbaugh et al. [104] described, in 1997, a computer application that automatically extracts and analyses PSL properties. Their suggested feature descriptors were classified as colour, spectral, binary object, and histogram characteristics. Binary object attributes include things like area, perimeter, and aspect ratios. Statistical measurements of the grey-level distribution and co-occurrence matrix features are included in the histogram features. Colour features were represented using metrics derived from colour variations, colour transforms, and normalized colours. Finally, spectral features were represented using metrics obtained from the images’ Fourier transform. Apart from providing a review of CAD system development, Zagrouba and Barhoumi [105] provided a brief insight into algorithms for feature selection. One of the most crucial steps before lesion classification is reducing the number of extracted feature descriptors, as this helps improve efficiency and accuracy.
The result is a decrease in the computational expense of categorization. However, this should not be taken for granted, as eliminating redundancy among feature descriptors can also reduce their discriminating value. Various sets of retrieved feature descriptors can be employed according to feature selection techniques that some researchers have established [106,107,108,109]. Maglogiannis and Doukas [20] provided a solid review of CAD systems and feature descriptors in 2009. Along with a list of traditional feature descriptors used in the literature, the authors offered insights into methodologies of PSL diagnosis. By applying a variety of feature selection algorithms to a single set of dermoscopic images, the authors were able to compare the performance of many classifiers. The results showed that the classifiers’ performance was significantly affected by the feature descriptors used, demonstrating the importance of feature descriptors in computerized PSL analysis.
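A filter-style selection step of the kind referenced above can be sketched by scoring each descriptor against the class label; here a simple absolute-correlation criterion is used purely for illustration, not as the technique of any particular cited study.

```python
import numpy as np

def rank_features(X, y):
    """Score each feature descriptor (column of X) by its absolute
    Pearson correlation with the class label y, and return column
    indices ordered from most to least discriminative."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    # constant features get a score of 0 (avoid division by zero)
    scores = np.abs(Xc.T @ yc) / np.where(denom == 0.0, 1.0, denom)
    return np.argsort(scores)[::-1]
```

Keeping only the top-ranked columns before training reduces the categorization cost, while the caveat above still applies: correlated-but-complementary descriptors can lose joint discriminating value when pruned independently.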

  • A more comprehensive system of classifying feature descriptors is suggested in this study. The extension links the feature descriptors with particular diagnostic methods, differentiates between dermoscopic and clinical images, and differentiates references based on our literature classification. With this categorization, a reader can gain insight into the extant methods of PSL feature description;

  • the differences between representing dermoscopic features and clinical features; and ultimately,

  • an exhaustive list of references on the relevant descriptors.

The individual descriptors have been grouped together with other descriptors so as to make the presentation concise. In this way, it can be determined how each set of descriptors corresponds to the feature it is intended to characterize. While most authors noted in their papers that a given feature was mimicked by a descriptor, others described a different feature using the same descriptor or did not link it to any specific feature at all. The group tagged “Lesion’s area & perimeter” makes this very evident. The “Border irregularity/sharpness” feature, which is present in the majority of published works, is ascribed to this set in this study, rather than to the “Asymmetry” feature, as done by some authors [110,111]. It is noteworthy that the attribution of such a group is a relatively complex task, given that, as a shape or geometry parameter, it could adequately describe both features. For the additional sets of feature descriptors, the same majority logic was used, but a separate list was created containing the descriptors for which a particular clinical attribution could not be defined.

While the ABCDE criteria were utilized for clinical photographs, dermoscopic images were considered using pattern analysis and the ABCD rule. References are provided for the works carried out on calculating descriptors for pattern analysis features [110]. Several of these focused on extracting features in accordance with the seven-point checklist approach to dermoscopy-based melanoma detection [112–118]. Capdehourat et al. [18] carried out a preliminary study aimed at detecting certain dermoscopic structures, including irregular pigmentation and atypical pigmented networks. Summaries of the feature descriptors employed for dermoscopy’s ABCD rule and the clinical ABCDE criteria make the commonalities explicit, while the contrasts are highlighted through this split depiction of the automated description of the features. The two types of images share the largest groupings of feature descriptors, whereas the differences are observed in smaller groups and, in some cases, individual descriptors. For instance, scale-invariant feature transform (SIFT) descriptors, size functions [119–121], dermoscopic interest points [122], and the bag-of-features structure are employed only for dermoscopic images, while clinical images make use of a variety of methods for describing border abnormalities, and a plethora of papers address skin pattern analysis. Moreover, the set of textural feature descriptors known as Haralick parameters is rather large when applied to dermoscopic images but relatively small for clinical images. This is because dermoscopic images provide more detail about the texture than macroscopic clinical images, which allows for more intricate examination [1,2,3].
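Haralick-style texture descriptors are statistics of a grey-level co-occurrence matrix (GLCM). A minimal numpy sketch for a single horizontal offset, assuming intensities in [0, 1] and shown only to make the construction concrete, is:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Grey-level co-occurrence matrix for the 'one pixel to the right'
    offset, plus two classic Haralick statistics (contrast and energy).
    Intensities are first quantized to `levels` grey levels."""
    q = np.minimum((img * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                      # count co-occurring pairs
    glcm /= glcm.sum()                       # normalize to probabilities
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)   # local intensity variation
    energy = np.sum(glcm ** 2)               # textural uniformity
    return contrast, energy
```

Full Haralick feature sets add further statistics (homogeneity, entropy, correlation) and average them over several offsets and directions.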

6 Lesion classification

The last step in the pipeline for automated examination of images of a single PSL is lesion classification. The output of lesion classification varies with the kind of system: it can be ternary (e.g., common nevus/dysplastic nevus/melanoma), binary (malignant versus benign), or n-ary, covering a wide variety of skin diseases. These outputs reflect the classes of PSLs that the system has been trained to recognize. The systems perform classification by applying numerous classification methods to the feature descriptors extracted in the previous stage. The chosen classifier and the extracted descriptors strongly influence the efficiency of these methods; thus, to compare classification methods fairly, the same set of descriptors and the same dataset must be used for all approaches. A summary of classification results obtained by different authors of numerous CAD systems was presented by Maglogiannis and Doukas [20]. The authors also used a fixed set of feature descriptors and a dataset of 3,639 dermoscopic images to uniformly compare 11 classifiers, applying various feature-selection procedures along the way. The 11 classifiers comprised the group most commonly used in computerized PSL analysis, including DTs, regression analysis, and ANNs. Three sub-experiments were performed: the first two addressed the dysplastic/common nevus and melanoma/common nevus categories, respectively, while the third combined the two. In this comparison, the best result was achieved by the SVM. The authors nevertheless concluded that the factors playing critical roles in classifier performance were the learning procedure and the chosen feature descriptors.
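To make the fair-comparison requirement concrete, the following minimal sketch evaluates two simple classifiers on an identical set of descriptors and an identical test set. All feature values and labels are invented toy data, not real lesion descriptors, and the two classifiers (kNN and nearest-centroid) merely stand in for the richer families compared in the literature.

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """k-nearest-neighbour vote over the shared descriptor set."""
    idx = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))[:k]
    return Counter(labels[i] for i in idx).most_common(1)[0][0]

def centroid_predict(train, labels, x):
    """Nearest-centroid: assign to the class whose mean descriptor is closest."""
    cents = {}
    for c in set(labels):
        pts = [train[i] for i in range(len(train)) if labels[i] == c]
        cents[c] = tuple(sum(v) / len(pts) for v in zip(*pts))
    return min(cents, key=lambda c: math.dist(cents[c], x))

# Identical toy descriptors (e.g. asymmetry score, border score) for both classifiers
train = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.8, 0.9), (0.9, 0.8), (0.85, 0.75)]
labels = ["benign"] * 3 + ["melanoma"] * 3
test = [((0.12, 0.18), "benign"), ((0.88, 0.82), "melanoma")]

for name, clf in [("kNN", lambda x: knn_predict(train, labels, x)),
                  ("centroid", lambda x: centroid_predict(train, labels, x))]:
    acc = sum(clf(x) == y for x, y in test) / len(test)
    print(f"{name}: accuracy {acc:.2f}")
```

Because both classifiers see exactly the same descriptors and test cases, any accuracy difference is attributable to the classifier itself, which is the point the comparison studies above rely on.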

Several authors in the area of CAD systems have compared two or more classifiers. In particular, many have compared the performance of SVM and ANN [123], with results showing that the SVM outperformed the ANN; others have compared ANN and DA [124], while still others have compared SVM and ANN [125] and reported equivalent or slightly worse performance. Aljanabi et al. [126] evaluated the performance of a Bayesian classifier, SVM, k-nearest neighbours (kNN), and ANN [2]. The results showed that the ANN outperformed the Bayesian classifier, which in turn performed better than the kNN algorithm. Although several such comparisons exist, it is difficult to rank the performance of the numerous classifiers used for PSLs. This is because of the slight variation in the statistical evaluation results, as well as the differing bases on which the comparisons are performed: different features, classifier parameters, image datasets, and learning procedures. However, a relative method of evaluation was used by Manne et al. [123], who rated classifiers on a performance scale they created (performing well, very well, or not well suited). Their results rated the kNN as “performing well,” the SVM, ANN, and logistic regression as “very well,” and the DT as “not well suited” because of its handling of continuous input variables.

The research is categorized into three groups, covering studies that have used, developed, or tested methods for diagnosing PSLs from clinical or dermoscopic images. Notably, most studies in the “Classification” category emphasize the specific details of the proposed lesion classification methods. The other two categories consist of studies in which a classification method has been employed while analyzing, proposing, or improving complete CAD systems. Thus, in general, implementation details for these systems are scant in the literature, though comparative performance results are presented. In PSL CAD systems, classification does not rely solely on the raw image data but rather on how the extracted feature descriptors are interpreted. It can be argued that these descriptors inherently encode information specific to the type of images used, making them unique to each imaging modality. Despite this, it is not possible to cleanly split the feature descriptors into two categories according to image modality, because the descriptor sets are rather similar.

The techniques were classified according to their matching category, without considering any particular implementation characteristics. For example, both ANN and DA are broad categories that include a number of different methods that might be considered “a type” of these more specific sets of approaches. Thus, Table 1 includes a few of the many publications that compare algorithms. By far the most used classification approach is the ANN, followed by SVMs, DA, kNNs, and DTs. Kernel logistic partial least squares regression and hidden Markov models are two other approaches that have been investigated and applied depending on the task. As seen in the table, supervised ML techniques clearly outperform their unsupervised counterparts. Reasons for this include the complexity of the classification problems and the wide variety of dermoscopic and clinical features that might reveal whether a lesion is benign or malignant; for a number of sample lesions, the clinical and dermoscopic findings may be at odds with the biopsy-established diagnosis [126]. Training a classifier to identify these uncommon manifestations of malignant tumours typically calls for the training/testing paradigm of classifier development. Nevertheless, unsupervised learning approaches have shown encouraging results in uncovering the association between PSL malignancy and observable features [127].

7 CAD systems

A summary of recent studies on the development and evaluation of CAD diagnostic systems for PSLs is provided here. Razmjooy et al. [127] were among the first researchers to publish summaries of advancements in the use of CAD for PSL diagnosis; the two papers were published in 1995. Subsequently, other authors [128,129,130] began to publish in the area of computer vision. Nevertheless, the majority of the articles were clinical research papers that focused on comparing the performance of different CAD systems. Usually, these authors present their comparative studies in tabular form, highlighting various characteristics of the systems such as performance parameters (e.g. specificity, sensitivity), dataset size and distribution (malignant melanomas versus dysplastic nevi), classification methods, and image type. Even though such tables do not permit complete comparison between CAD systems, they enable the analysis and quantification of various aspects of extant methods. Examples of this kind of comparative analysis have been presented in previous studies [127,131,132]. Recently, some systems have been developed for commercial computer-aided diagnosis of PSLs. There is ample literature on exploring and designing such commercial systems, and the majority of them are based on dermoscopy: most of these CAD packages are all-inclusive, bundling acquisition instruments (dermoscopy equipment) with analytical software.

The DB-Mips (Dell’Eva-Burroni Melanoma Image Processing Software) is one of the most cited CAD systems employed in the detection of melanoma. This system has gone by a variety of names depending on the period in which it was developed, such as DEM-Mips, DB-DM-Mips, DBDermoMips, DM-Mips, and DDA-Mips. Based on the survey conducted by Vestergaard and Menzies on automated instruments for the diagnosis of cutaneous melanoma [75], drawing conclusions about the efficiency of the instruments is difficult because the classifiers used across studies vary in structure. This is particularly evident in two studies conducted with expert dermatologists [127,132]. The results of the study in Mabrouk et al. [133] showed that the classifier demonstrated higher accuracy than expert clinicians using only the epiluminescence approach. Nevertheless, in the other study [134], a significantly low specificity was recorded for the system. It is important to note that the same classifier (ANN) was utilized in both studies; according to the authors, the results point to the large variation in the ratio of dysplastic nevi in the benign sets. Systems available for commercial use include MelaFind, SpectroShade, and MoleMate. In MelaFind, images are acquired using multispectral dermoscopy, which enables acquisition in 10 different spectral bands, ranging from blue (430 nm) to near infrared (950 nm) [135]. Siascopy (the MoleMate system) analyzes information about the levels of melanin, haemoglobin, and collagen contained in the skin; to achieve this, the system interprets the wavelength combinations of the received light [136].

The technical characteristics of digital dermoscopy analysis instruments have been reviewed and compared in the extant literature. Previous studies [137,138] have summarized microDerm, DB-Mips, Videocap, SolarScan, MoleMax II, and Dermogenius, while Zhou et al. [122] have presented an overview and comparison of microDerm, FotoFinder, and Dermogenius Ultra. Zhou et al. [122] noted that expert dermoscopists/dermatologists have little or nothing to gain from the CAD systems reviewed in their study. In the study by Sevli [1], other systems have been described alongside their performance evaluation. Some CAD systems are designed to diagnose a PSL based on its visual similarity to images of lesions with known histopathology. These are referred to as content-based image retrieval (CBIR) systems, because they search a given database to identify images that closely resemble a query image. This is carried out using a wide range of similarity measures between extracted lesion feature descriptors, including the Euclidean, Bhattacharyya, and Mahalanobis distances. The most suitable measure depends on the nature of the feature descriptors: Rahman and Bhattacharya [139] employed Euclidean and Bhattacharyya distances for texture and colour features, whereas Celebi and Aslandogan [107] employed the Manhattan distance for descriptors based on the lesion’s shape information. Although content-based retrieval systems can be evaluated with many measures, the most commonly used one is the precision-recall graph [139]. Currently, the results recorded for the retrieval of both dermoscopic [140] and clinical [141] images show that further improvement is needed.
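A minimal CBIR sketch along these lines ranks database lesions by Euclidean distance to a query descriptor and scores the retrieval with precision at k. The database entries, feature values, and labels here are hypothetical toy data.

```python
import math

def retrieve(query, database, k=3):
    """Rank database entries by Euclidean distance to the query descriptor."""
    ranked = sorted(database, key=lambda item: math.dist(item["feat"], query))
    return ranked[:k]

def precision_at_k(results, relevant_label):
    """Fraction of the top-k retrieved lesions sharing the query's diagnosis."""
    return sum(r["label"] == relevant_label for r in results) / len(results)

# Hypothetical lesion database: 2-D feature vectors with known diagnoses
database = [
    {"id": "a", "feat": (0.2, 0.1), "label": "nevus"},
    {"id": "b", "feat": (0.25, 0.15), "label": "nevus"},
    {"id": "c", "feat": (0.9, 0.8), "label": "melanoma"},
    {"id": "d", "feat": (0.85, 0.9), "label": "melanoma"},
    {"id": "e", "feat": (0.3, 0.2), "label": "nevus"},
]
query = (0.22, 0.12)                       # descriptor of the query image
top = retrieve(query, database, k=3)
print([r["id"] for r in top])              # → ['a', 'b', 'e']
print(precision_at_k(top, "nevus"))        # → 1.0
```

The Bhattacharyya distance would instead apply to histogram descriptors, and the Mahalanobis distance additionally requires a covariance estimate of the descriptor set; the Euclidean case above only illustrates the rank-then-evaluate pattern behind precision-recall curves.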

In the past two decades, even before digital imaging replaced film photography in medical practice, researchers were aware of the potential benefits it could bring to dermatology [1]. Gradually, many of these benefits became manifest, including tele-diagnosis, objective and non-intrusive documentation of skin lesions, creation of digital dermatological image archives, 3D reconstruction of the clinical features of cutaneous lesions, and their quantitative description. Even though automatic PSL diagnostic systems still have some flaws, these do not pose serious challenges, because their most relevant functionality has been achieved. In this study, the computerized analysis of dermatological images has been reviewed from the perspective of integrating two different disciplines, computer vision and dermatology. In particular, the overview focuses on the following aspects:

  • The difference between acquiring clinical and dermoscopic images of individual PSLs, which depends on the visualization of the structural details of a lesion. This matters when applying pre-processing, border detection, or feature detection algorithms to images of skin lesions. Moreover, the constant variation in the terminology used in the literature is associated with this basic difference between the two acquisition modes; consequently, the computer vision literature sometimes attributes clinical diagnostic methods to the wrong image type.

  • A clear distinction between studies analyzing images of multiple pigmented skin lesions and those analyzing individual ones. The disparity in the number of studies on each subject is large, which can be attributed to the minimal adoption of total-body skin imaging: the relevance of total-body skin imaging for melanoma detection is traded off against its logistic limitations and cost [3]. Hence, automated approaches to individual lesion analysis are strongly preferred over total-body screening.

  • In the analysis of images showing individual PSLs, the focus has been on developing computer-aided diagnosis systems that automatically detect melanoma from both dermoscopic and clinical images. The workflow of most of these systems is the same: image pre-processing, lesion border detection, and extraction and classification of a lesion’s clinical feature descriptors. Different methods have been proposed for every step of this workflow; of all the steps, feature extraction and border detection have attracted the most publications.

  • There are few publications on automated change detection for images of multiple or individual PSLs. Even though the major sign of early-stage melanoma is rapid change in lesion size and morphology, the literature contains few studies in which automated change assessment is applied in full.

  • In addition, several categories of computerized analysis of dermatological images have been defined in this work. Within this classification, the categories that make up the workflow of a typical CAD system have been reviewed and summarized in tabular form, covering the pre-processing methods, the feature extraction techniques, and the approaches used to classify PSL images. Another significant contribution of this review is the extensive classification of extant dermoscopic and clinical feature descriptors. These descriptors have been categorized by their relation to particular diagnostic techniques, distinguishing dermoscopic from clinical images and separating references according to the classifications in the literature. Given the importance of feature descriptors in PSL classification, this categorization is relevant for several reasons: it provides an overview of extant PSL feature-extraction techniques, highlights the differences between dermoscopic and clinical feature descriptors, and provides an aggregated list of the relevant key references.

So far, within experimental settings, CAD systems have been found to perform well and to be accepted by patients. Nevertheless, their ability to provide the best diagnostic results, or to replace histopathology or the skill of clinicians, is yet to be proven. Regardless, such systems have been employed for the education of general practitioners, providing professional clinicians with advanced knowledge as well as second opinions during screening [127,128]. In other words, “clinical diagnosis support system” may be a more appropriate term for CAD systems for skin cancer at their present state of development. Finally, a publicly accessible benchmark dataset is crucial for evaluating newly developed algorithms and methods, and can help improve the quality of the output these systems produce. In such a dataset, each PSL image should be complemented with a ground-truth definition of the lesion’s border alongside its diagnosis, with additional dermoscopy reports from as many dermatologists as possible [130]. Such a dataset has long been awaited, and only recently have a few databases with these characteristics been made publicly available [134].

8 The role of AI for skin cancer identification

Esteva et al. [142] were the first to record a significant advancement in this field; they applied a DL technique to a combined skin dataset of 129,450 dermoscopic and clinical images representing 2,032 distinct skin lesion diseases. The algorithm was tested against 21 board-certified dermatologists on its ability to differentiate and classify carcinomas, benign seborrheic keratoses, melanomas, and benign nevi, and it was observed that the AI performed on par with the dermatologists in skin cancer classification. In this section, AI-based diagnosis and classification of skin lesions are considered across three imaging modalities: histopathology images, dermoscopic images, and clinical images. First, publicly accessible skin lesion datasets are analyzed; then AI solutions are discussed in separate sub-sections for each imaging modality, and the ISIC challenges are briefly discussed.

8.1 Publicly available datasets for skin cancer

Cohort data and other datasets are invaluable for advancing skin cancer detection and classification research, since they provide standard, usable benchmarks on which models can be tested and compared. These datasets typically contain images of various skin disorders, including malignant and benign lesions, which are essential for training ML and DL algorithms for classification, detection, and diagnosis of skin cancer. Given the challenges of obtaining large volumes of high-quality medical images, these datasets not only support the replication of scientific results but also help address the issue of dataset scarcity. In this context, the resources discussed here promote new approaches to diagnosing early-stage skin cancer by giving authors access to high-resolution skin lesion images, helping them develop tools that could one day be used in the clinic.

  1. ISIC Archive: This is a gallery that is made up of several dermoscopic and clinical skin lesions from different parts of the world. Examples of such datasets include ISIC Challenges datasets [143], HAM10000 [144], and BCN20000 [145].

  2. Interactive Atlas of Dermoscopy [146]: This dataset comprises 1,000 clinical cases, including 490 seborrheic keratoses and 270 melanomas. Each case contains a minimum of two images (a close-up clinical image and a dermoscopic image). The dataset is commercially available for research purposes at a price of €250.

  3. Dermofit Image Library [147]: This is a commercially available dataset containing 1,300 high-resolution images covering 10 types of skin lesions. It carries a one-off licence fee of €75 and also offers an academic-use licence.

  4. PH2 Dataset [148]: This dataset consists of 200 dermoscopic images, of which 160 are nevi cases and 40 are melanoma cases. Unlike the aforementioned datasets, it can be obtained freely upon completion of a short online registration form.

  5. MED-NODE Dataset [149]: The dataset comprises 170 clinical images, consisting of 100 nevi and 70 melanoma cases. Like the PH2 dataset, it can be downloaded freely for research purposes.

  6. Asan Dataset [150,151]: This dataset contains a total of 17,125 clinical images covering 12 classes of skin diseases common in Asian populations. The Asan Test Dataset, which contains only 1,276 images, can be freely downloaded for research purposes.

  7. Hallym Dataset [150]: The Hallym Dataset is made up of 125 clinical images of BCC cases.

  8. SD-198 Dataset [152]: A clinical skin lesion dataset containing 6,584 clinical images of 198 types of skin diseases, captured with mobile phone cameras and digital cameras.

  9. SD-260 Dataset [153]: Compared with the earlier SD-198 dataset, this version is more balanced, as the class size distribution is controlled by retaining 10–60 images per category. The dataset comprises 20,600 images of 260 kinds of skin diseases.

  10. Dermnet NZ [154]: One of the most diverse datasets, containing a robust collection of dermoscopic, clinical, and histological images of a wide range of skin diseases. The dataset is available for academic use, but some of the higher-resolution images can only be accessed for a fee; in other words, a portion of the dataset is not freely accessible.

  11. Derm7pt [155]: This dataset contains 1,011 dermoscopic images, of which 759 are nevi cases and the remaining 252 are melanoma cases. The dataset is accompanied by seven-point checklist criteria.

  12. The Cancer Genome Atlas [156]: One of the largest public collections of pathological skin lesion slides, containing 2,871 cases. It can be obtained for research purposes.

8.2 AI in dermoscopic images

Dermoscopy is the inspection of skin lesions using a dermoscope, a device equipped with a high-grade magnifying lens and a lighting system that can be polarized. Dermoscopic images are captured with a high-resolution digital single-lens reflex (DSLR) camera or even smartphone camera attachments. Since several publicly available dermoscopic datasets were introduced, the use of dermoscopic images in AI algorithms has been rapidly gaining popularity in research.

The diagnosis of lesions using dermoscopic skin lesion datasets has featured in many AI studies, which are highlighted subsequently. For example, Codella et al. [157] developed a combination of DL algorithms and compared the network’s efficiency with that of 8 dermatologists in classifying 100 skin lesions as malignant or benign. The proposed ensemble approach demonstrated better performance, recording an accuracy of 76% and a specificity of 62%, whereas the dermatologists achieved an accuracy of 70.5% and a specificity of 59%. In the study by Haenssle et al. [158], a DL algorithm, InceptionV4, was trained on a large dermoscopic dataset containing over 100,000 images of melanoma and benign lesions, and its performance was compared with that of 58 dermatologists. The evaluation used a small dataset of 100 cases, of which 25 were melanoma and 75 were benign lesions; the dermatologists recorded an average sensitivity of 86.6% and a specificity of 71.3%, while the DL method recorded a sensitivity of 95% and a specificity of 63.8%. In Brinker et al. [159], the efficiency of a DL method, ResNet50, was compared with the performance of 157 board-certified dermatologists from 12 hospitals in Germany, using MClass-D, a dataset of 100 dermoscopic images comprising 20 melanoma and 80 nevi cases. The dermatologists achieved an overall sensitivity of 74.1% and a specificity of 60.0%, while the DL method achieved a sensitivity of 84.2% and a specificity of 69.2%.
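The sensitivity and specificity figures quoted throughout these comparisons follow directly from confusion counts. The sketch below uses hypothetical reader counts on an MClass-D-style split (20 melanomas, 80 nevi) purely for illustration; the counts are invented, not taken from any of the cited studies.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical reader on a 20-melanoma / 80-nevi test set:
# 17 melanomas flagged (3 missed), 56 nevi cleared (24 false alarms).
sensitivity, specificity = sens_spec(tp=17, fn=3, tn=56, fp=24)
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
# → sensitivity 85.0%, specificity 70.0%
```

The trade-off visible in the studies above (DL methods trading specificity for sensitivity, or vice versa) is simply a movement of these two ratios as the decision threshold shifts.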

In the work of Tschandl et al. [160], ResNet50 and InceptionV3 DL systems were applied to a mixed dataset of 5,829 close-up and 7,895 dermoscopic lesion images with the aim of diagnosing non-pigmented skin cancers. To evaluate the method, a comparison was made with 95 dermatologists grouped into three categories according to experience. The accuracy achieved by the DL algorithms matched that of human experts and exceeded that of human raters in the beginner and intermediate categories. In another work, Maron et al. [161] compared a DL technique (ResNet50) with the performance of 112 German dermatologists in terms of specificity and sensitivity, testing the system’s ability at multiclass classification of skin lesions including SCC, nevi, benign keratosis, BCC, and melanoma. The performance of the DL method was reported to be better than that of the dermatologists at a significance level of p ≤ 0.001. Similarly, Haenssle et al. [162] compared dermatologists with a DL system based on InceptionV4 (approved in the European Union as a medical device). The experiment used a dermoscopic dataset of 100 cases, of which 40 were malignant and 60 were benign lesions, and was conducted in two phases: phase I used dermoscopic images only, whereas phase II added clinical close-up images and clinical information. In phase I, the DL architecture achieved a sensitivity of 95% and a specificity of 76.7%, while the dermatologists achieved a sensitivity of 89% and a specificity of 80.7%. In phase II, given more information, the dermatologists’ sensitivity increased to 94.1%, but no increase was recorded for specificity. In another comparative study, Tschandl et al. [163] compared the performance of two AI algorithms using a test set of 1,511 images; the AI algorithms diagnosed skin cancer more accurately than the human readers.

Remarkable progress has been recorded in AI research, especially for skin cancer diagnosis. Even though researchers and developers of DL algorithms have claimed that these algorithms outperform clinicians, several challenges keep them from being complete diagnostic systems. The experiments assessing the algorithms’ performance are carried out in controlled environments rather than in real-time diagnosis of skin cancer. When diagnoses are performed in real-life settings, many factors are taken into account, including the patient’s eye and hair colour, existing sun damage, skin colour, ethnicity, occupation, pre-existing illnesses, the number of nevi, medications, habits such as smoking and alcohol consumption, sun exposure, past responses to treatment, and other health details. Existing DL models, however, depend solely on the patients’ imaging data. In addition, these models risk misdiagnosis whenever they are applied to skin conditions or lesions absent from the training dataset.

In this study, further possibilities and opportunities for developing robust algorithms that can help clinicians diagnose skin cancer more accurately are explored. The dermatology and computer vision communities need to work collaboratively so that currently available AI solutions can be improved and the accuracy of skin cancer diagnostic techniques enhanced. The use of AI in skin cancer diagnosis can potentially change how skin cancer is currently diagnosed, while providing clinical solutions that are remotely accessible, accurate, and cost-effective.

8.3 The role of XAI for skin cancer diagnosis

XAI solutions are meant to improve the explainability of black-box AI systems by giving people insight into exactly how these models reach their conclusions. This is especially important in high-stakes diagnoses such as skin cancer detection, where the model’s choice may have profound implications and practitioners need to know why the model made that choice. We expand on why this interpretability is needed by discussing the ethical, legal, and practical concerns tied to the use of opaque models in clinical care.

CNNs are probably the best-known DL technique for recognizing patterns in images; the patterns they learn are more intricate than those in other data types, but their responses are correspondingly difficult to explain. Clinicians need to understand how the input features of skin lesion images (e.g., texture, pigmentation, border) relate to the classifications produced by these models. We evaluated the applicability of black-box models in clinical environments and the usefulness and importance of interpretability in clinical decision making. In this work, we present an overview of XAI techniques that could be applied to, or are currently being used in, skin cancer detection:

  • Grad-CAM (Gradient-Weighted Class Activation Mapping): This method produces visual explanations by outlining the areas of an image that matter most to the model’s decision. In skin cancer diagnosis, Grad-CAM can help clinicians see which parts of the skin lesion the model attended to when determining whether the lesion is malignant or benign. We review how and where Grad-CAM has been used in analyzing dermoscopic images and consider both the benefits and drawbacks of using it to generate localized explanations.

  • LIME (Local Interpretable Model-agnostic Explanations): LIME approximates the black-box model locally by fitting a simpler, interpretable model around a single prediction. We discuss in detail how LIME can be used to interpret skin cancer detection results and compare its feasibility and clinical relevance with Grad-CAM.

  • SHAP (SHapley Additive exPlanations): SHAP values offer a unified measure of feature importance, attributing each model prediction to the contributions of individual input features. We explain how SHAP is used to quantify the weight that each feature, such as colour, shape, or texture, contributes to the model’s decision, providing a more thorough explanation of the model’s outputs. Because it can be adapted to both DL and ML models, it is well suited to skin cancer diagnosis.
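The Shapley attribution idea behind SHAP can be sketched exactly for a tiny model by enumerating feature coalitions (feasible only for a handful of features; real SHAP libraries use approximations). The three-descriptor linear "lesion score" model below is hypothetical and chosen so the values are easy to verify by hand.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f by enumerating feature coalitions.
    Features absent from a coalition are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Standard Shapley weight |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without))
    return phi

# Hypothetical linear "lesion score" over three descriptors (colour, border, texture)
model = lambda v: 2.0 * v[0] + 1.0 * v[1] - 0.5 * v[2]
x, baseline = [0.8, 0.6, 0.4], [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
print([round(p, 6) for p in phi])  # → [1.6, 0.6, -0.2]
```

A key SHAP property visible here is additivity: the attributions sum to f(x) − f(baseline), which is what lets clinicians read each value as that feature's share of the prediction.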

For future works, we suggest the following:

  • Improving the reliability of explanations across different DL architectures by combining XAI with ensemble approaches.

  • Creating new domain-adapted XAI tools for medical image analysis that are better suited to dermatology.

  • Assessing the application of XAI methods in clinical environments, focusing on their effects on clinicians’ decision-making processes, the time required for diagnosis, and real-world diagnostic accuracy.

9 The research challenges

A comprehensive literature review identified several limitations. Training deep CNNs, especially complex architectures, requires substantial computational resources, leading to long training times. Additionally, representing data through quantum encoding in quantum CNNs presents significant challenges, making it difficult to integrate quantum principles into the encoding process. Moreover, CNNs struggle with tasks requiring long-range dependencies. Addressing these limitations can enhance the performance of quantum-based neural networks, ultimately improving results. The research challenges are summarized as follows:

  • Comprehensive Training: Neural network-based skin cancer detection methods require extensive training, which is a major hurdle. To evaluate and interpret the characteristics of dermoscopic images effectively, the system must undergo comprehensive training, which is time consuming and demands powerful hardware.

  • Lesion Size Variation: Lesions vary in size, which adds another layer of difficulty. In the 1990s, a team of researchers from Italy and Austria gathered a large number of images of melanocytic lesions, both benign and malignant [54]. Such lesions could be identified with a diagnostic accuracy of 95–96% [55]. Nevertheless, the diagnostic procedure was considerably more challenging and error-prone for early-stage lesions of 1 or 2 mm in size.

  • Images of Light-Skinned People in Public Databases: Most individuals represented in current standard dermoscopy databases, which were largely collected in the United States, Australia, and Europe, are white. For a neural network to accurately identify skin cancer in individuals with dark skin tones, it has to learn to take skin colour into account [56]. This can be accomplished only if the network is trained on a sufficiently large dataset that includes photographs of individuals with dark skin. To improve the precision of skin cancer detection algorithms, it is essential to have datasets with enough images of lesions on both light- and dark-skinned individuals.

  • Very Minimal Interclass Dissimilarity in Skin Cancer Images: Medical images differ markedly from other image types in their interclass variance; compared with photographs of cats and dogs, for example, there is substantially less visual difference between images of melanoma and non-melanoma skin cancer lesions. In addition, melanoma and birthmarks are extremely hard to tell apart. Because the lesions of certain diseases look so similar, diagnosing them can be challenging, and this limited variety makes image processing and classification extremely complicated.

  • Imbalanced Skin Cancer Datasets: Real-world skin cancer datasets are highly imbalanced, with a significant disparity in the number of images available for each skin cancer type. Because far fewer images exist for the less frequent skin cancer types than for the more common ones, it is difficult to draw broad conclusions from the visual characteristics of dermoscopic images.

  • Utilization of Diverse Optimization Methods: Automated skin cancer detection relies heavily on pre-processing and lesion edge detection. We can enhance the performance of these systems by investigating different optimization algorithms, such as particle swarm optimization, social spider optimization, ant colony optimization, and artificial bee colony algorithm.

  • Acquiring Images for Dermatology: Dermatological image acquisition often involves taking close-up photographs of dermatoses or lesions. Because the surrounding structures are typically excluded from these images, the anatomical context is often lost when focusing on the lesion. In addition, dermatologists increasingly use photographs taken with smartphones, in line with the exponential growth of digital skin imaging apps. The techniques proposed for identifying melanomas from inconsistent dermoscopic images are often trained under narrow conditions (isolated datasets, particular lighting settings, etc.), so these systems yield localized findings that cannot be applied generically. The collected images also suffer considerably in quality because of fluctuating lighting conditions (e.g., specular reflection) during acquisition. One solution to the persistent issue of colour inconsistency, suggested in [132], is a generative adversarial neural network. To overcome the problems with image capture, such technologies need to be adopted more widely, and more creative solutions for unpredictable dermoscopic images need to be developed.

  • Several studies in recent years have worked on improving CAD methods for the classification of skin cancer. Before the advent of DL, ML methods were largely used to create CAD systems. These ML-based techniques, however, are limited in their ability to identify skin diseases because of the difficulties of feature engineering and the constraints of handcrafted features. Conversely, DL algorithms are more accurate and efficient at automatically extracting meaningful features from large amounts of data. Consequently, many skin cancer classification challenges have recently been solved with excellent results using DL-based techniques such as CNNs.

  • Limited Capacity to Generalize Across Domains: In the challenging task of classifying skin cancer, a model’s capacity to generalize is frequently inferior to that of a skilled dermatologist. First, the overfitting problem persists even when a substantial quantity of synthetically created data is added, because skin image datasets remain small. Second, most studies are limited to dermatological images (dermoscopic and histological images) obtained with standardized medical devices; image data from other devices relevant to dermatology have not been extensively studied. A trained model performs much worse when applied to a new set of data from a different domain.

  • Image Noise and Heterogeneous Devices: The reliability of skin cancer identification algorithms is challenged by the variety of noise arising from heterogeneous sources and images of skin diseases. A DL model may equal or even outperform dermatologists in diagnosis if trained on high-quality skin tumour datasets. However, when tested on diverse images, a skin cancer classification model often fails to produce adequate results because it depends on samples taken with varied machines, brightness settings, and backdrops. Classification is further complicated by the wide variations in magnification, perspective, and illumination found in digital images such as those from smartphones.

  • Moving Towards Quicker and More Effective Classification Frameworks: The computational cost of a model still has to be taken into account, given that a growing number of DL methods have been applied to skin cancer classification with outstanding results. First, many high-quality images of skin diseases now have very large pixel counts because of advances in imaging technology; histological scans, for instance, can exceed 50,000 × 50,000 pixels, i.e., billions of pixels, and therefore demand more time and computing power to train on. Second, as the accuracy of DL models increases, so does their computational complexity, which makes deploying the models on different medical or mobile devices more expensive. Here, we outline the most recent approaches to building an efficient skin cancer network.
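
One common remedy for the class-imbalance problem raised above is inverse-frequency class weighting, so that a loss function penalizes mistakes on rare lesion classes more heavily. The sketch below illustrates the weighting formula on invented class names and counts (not drawn from any dataset in this review):

```python
from collections import Counter

# toy imbalanced label list: one common class, two rarer ones (invented counts)
labels = ["nevus"] * 800 + ["melanoma"] * 150 + ["merkel_cell"] * 50

counts = Counter(labels)
n, k = len(labels), len(counts)
# weight_c = n / (k * count_c): a class at exactly average frequency gets
# weight 1; rarer classes get proportionally larger weights
weights = {c: n / (k * counts[c]) for c in counts}
```

With these toy counts, the common class is down-weighted (nevus ≈ 0.42) while the rare class is up-weighted (merkel_cell ≈ 6.67); in a DL pipeline, such a dictionary would typically feed a class-weighted cross-entropy loss or drive an oversampling scheme.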

Nevertheless, on further review, the problems of skin cancer classification are not as simple as those found in non-medical domains such as ImageNet. First, the variation in prevalence across skin cancer classes leaves many skin image datasets imbalanced, which raises the risk of false positives by the diagnostic method. Furthermore, many datasets offer only a small number of images because correct annotation is labour-intensive and highly specialized (for example, the ISIC dataset, which comprises approximately 13,000 skin images, is currently the biggest openly accessible skin disease dataset).

10 The future directions of research

AI researchers will invariably assert that their algorithms can detect skin cancer more accurately than dermatologists. However, this differs from how things work in the real world, because these trials follow strict protocols in controlled environments. Given the obstacles discussed earlier, it is clear that such performance assessments do not reflect the actual diagnostic work done by skin cancer specialists. Because DL algorithms learn solely from the pixel values of imaging datasets, lack domain expertise, and cannot perform logical deduction to relate the various skin lesion types, they are often seen as opaque. Nevertheless, the following opportunities suggest that DL may one day be genuinely useful for skin cancer diagnosis.

  • Using AI and Next-Generation Sequencing to Improve Skin Cancer Diagnostics: Advances in next-generation sequencing (NGS) technology have made it possible to increase data output while improving efficiency. NGS platforms are classified by read length. Determining the nucleotide order of a whole genome, or of specific RNA or DNA regions, is a crucial application of NGS, and the high throughput of modern sequencing technologies and methodologies has allowed new technologies to be commercialized. The objective of DNA sequencing technologies is to use less DNA and RNA input while maintaining speed and accuracy. Among all malignancies, SCC has one of the greatest tumour mutation loads. Comparing gene alterations in localized and metastatic high-risk SCCs using targeted NGS aims to find essential distinctions and improve targeted therapy options. The development of molecular techniques has significantly expedited the discovery of new viruses, such as the papillomavirus, and NGS combined with enhanced procedures can aid in detecting known and undiscovered human papillomaviruses.

  • It would help to have a balanced dataset and carefully choose the samples to get the most out of DL methods for classification tasks. Therefore, it is essential to have balanced datasets that include instances that fully represent the classification of that specific skin lesion; in this regard, the advice of seasoned dermatologists might be invaluable.

  • Automated Skin Cancer Decision Support Systems Powered by Explainable AI: Skin cancer detection can be aided by AI-driven decision support systems, computer programs that assist with decision-making and choosing the correct course of action. Their pointers to typical methods and recurring patterns give designers of DL classification methods flexible choices. Pre-trained deep neural networks with transfer learning (TL) can be used to initialize support systems for skin lesion categorization and localization, and automated DL techniques are now part of such decision support systems. These algorithms are trained and fine-tuned via TL on skewed data. The model retrieves features using an average pooling layer; because these features alone are insufficient, an improved heuristics-based genetic algorithm is used to extract the essential and relevant traits, which are then passed to a classifier to aid decision-making. AI-powered decision support systems are one possible alternative to invasive diagnostic procedures and can aid physicians in diagnosis.

  • Digital Pathology Computer-Aided Diagnosis: Recent advances in whole-slide imaging for digital image analysis and in GPU clusters for robust computation have drawn the interest of the computer vision and pathology communities in creating such a system. One of the most common AI approaches in this area is to use DL algorithms for sliding-window classification and then aggregate those classifications to identify prevalent histopathological patterns.

  • Skin Cancer Diagnosis Based on Wearable Computing: Computing on human-wearable accessories is a new paradigm in wearable computing. Wearable computers are small devices that can process data and perform computations while worn on the body, and they have already been used in cancer detection. High cost, limited awareness, inconvenient form factors, clinical inertia, poor connectivity, and high fragility and bendability are some of the obstacles that have prevented their widespread clinical acceptance. In the future, companies and researchers should work together to make wearable computing a viable alternative to traditional cancer detection methods while lowering the associated costs. Event-driven wearable solutions excel in computing effectiveness, efficiency, power consumption, adaptability, and real-time performance, and incorporating these capabilities into wearable devices could improve performance significantly.

  • Consistency of Colour Under Different Lighting Conditions and with Varying Data Types: The skin lesion images in publicly accessible dermoscopy and clinical datasets were taken under varied lighting settings and with different acquisition equipment, which can compromise the performance of AI systems. Shades-of-gray and max-RGB are colour constancy algorithms that several studies have shown to improve the performance of ML techniques in multisource image classification. The shades-of-gray method is a pre-processing technique that standardizes the lighting and illumination effects on dermoscopic images of skin lesions.

  • The generative adversarial network (GAN) is a DL architecture that is gaining interest in medical imaging. Its primary use is to circumvent dataset limitations by producing high-quality synthetic image data. GANs can generate lifelike synthetic images of skin lesions to address the shortage of annotated skin cancer data. Because patient prevalence biases the distribution of skin lesion classes in publicly accessible datasets, GANs can also be used to provide imaging data for under-represented lesion classes or uncommon skin cancer types such as Kaposi sarcoma, sebaceous carcinoma, or Merkel cell carcinoma.
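
The shades-of-gray pre-processing step referred to above can be sketched in a few lines. This is a toy illustration on a list of RGB tuples in [0, 1] (real pipelines operate on full image arrays): the illuminant of each channel is estimated with a Minkowski p-norm mean (p = 1 recovers Gray-World; large p approaches max-RGB), and the channels are then rescaled so that the estimate becomes neutral.

```python
def shades_of_gray(pixels, p=6):
    """pixels: list of (r, g, b) tuples in [0, 1]. Returns corrected pixels."""
    n = len(pixels)
    # per-channel Minkowski-mean illuminant estimate: e_c = (mean(I_c^p))^(1/p)
    illum = [(sum(px[c] ** p for px in pixels) / n) ** (1.0 / p) for c in range(3)]
    mean_illum = sum(illum) / 3.0
    gains = [mean_illum / e for e in illum]  # neutralize the colour cast
    return [tuple(min(1.0, px[c] * gains[c]) for c in range(3)) for px in pixels]

# a uniform reddish cast is mapped to a neutral gray
corrected = shades_of_gray([(0.8, 0.4, 0.4)] * 4)
```

Applying this before training helps images acquired under different dermoscopes and lighting conditions share a common colour reference, which is precisely the multisource-consistency benefit the bullet describes.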

This comparison identifies the strengths and weaknesses of each method across aspects such as sensitivity, specificity, computational time, and the degree of interpretability required. It is intended to help readers better understand how each method works and the cost-benefit trade-offs associated with its use. Table 2 enables researchers and clinicians to distinguish the various methods by their merits and demerits at a glance. For instance, sophisticated newer approaches such as CNNs and hybrid models exhibit higher sensitivity and specificity, but they require greater computational power and are relatively harder to explain. SVM and RF, while less accurate and dependent on feature engineering, are easier to interpret on large datasets.

Table 2

Comparison overview of ML and DL techniques for skin cancer detection

Method | Strength | Limitations | Sensitivity (%) | Specificity (%) | Computational cost | Interpretability
SVM | High accuracy on small datasets, works well for binary classification problems | Struggles with large datasets, sensitive to noise | 80–90 | 85–90 | Low | Moderate (kernel-based interpretations)
Random Forest (RF) | Handles high-dimensional data well, less prone to overfitting due to averaging | Can be computationally expensive for large datasets, less interpretable due to ensemble nature | 85–92 | 88–94 | Moderate to High | Low
KNN | Simple, effective for small datasets, no training phase required | High computational cost during prediction, struggles with high-dimensional data | 75–85 | 70–85 | High (in prediction phase) | Low
CNN | Excellent for image-based analysis, strong feature extraction capabilities, high accuracy on large datasets | Requires large labelled datasets, computationally expensive, black-box nature | 92–96 | 90–95 | Very high (GPU/TPU often required) | Low
Deep Neural Networks (DNN) | Capable of learning complex patterns, performs well with large datasets | Prone to overfitting if not properly tuned, black-box nature | 85–93 | 88–93 | High (requires powerful hardware) | Low
TL | Pre-trained models allow for quicker convergence, excellent for small datasets, reduces need for large labelled datasets | Still requires significant computational resources, limited flexibility in certain layers, may not generalize well to very different datasets | 90–95 | 89–94 | High (depends on pre-trained models used) | Low
ANN | Flexible and capable of handling large datasets, suitable for complex non-linear relationships | Computationally expensive, prone to overfitting, low interpretability | 80–90 | 83–90 | Moderate to High | Low
Hybrid Models (e.g., CNN + SVM) | Combines the strengths of different techniques, often achieves higher accuracy | Complexity increases, interpretability decreases due to multi-model nature | 90–96 | 92–96 | Very High | Low

This detailed comparison thus deepens the understanding of the differences in performance and interpretability among the various ML and DL methodologies, helping readers draw appropriate conclusions about which method best suits a given skin cancer detection scenario.

11 Conclusion

This study has conducted a detailed review of the current state of research on fully automatic diagnosis systems for detecting pigmented skin lesions. Several issues remain to be addressed: the scarcity of large, high-quality datasets, the lack of methods for balancing datasets, and the generalization of model performance across populations and skin types. Other common challenges include interpretability, computational cost, and the integration of such models into clinical practice. A key research gap highlighted in this review is that more emphasis needs to be given to building more accurate and generalizable models that can genuinely help dermatologists with early screening. Future research should therefore focus on overcoming the limitations of current analyses to develop more effective explainable AI (XAI) models for early skin cancer diagnosis, ultimately improving patient outcomes. Additionally, interdisciplinary and replication studies should be conducted to validate best practices and drive advancements in this critical field.

Acknowledgements

The authors would like to express their gratitude to the School of Computer Sciences at Universiti Sains Malaysia for their support and facilities.

  1. Funding information: Authors state no funding involved.

  2. Author contributions: Ali. H. Alzamili: writing – review and editing, validation, methodology, conceptualization. Nur Intan Raihana Ruhaiyem: writing – review and editing, supervision, project administration, methodology, and conceptualization.

  3. Conflict of interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

  4. Data availability statement: No data were used for the research described in the article.

References

[1] Sevli O. A deep convolutional neural network-based pigmented skin lesion classification application and experts evaluation. Neural Comput Appl. 2021;33:1–12. 10.1007/s00521-021-05929-4.

[2] Hwang YN, Seo MJ, Kim SM. A segmentation of melanocytic skin lesions in dermoscopic and standard images using a hybrid two-stage approach. BioMed Res Int. 2021;2021:5562801. 10.1155/2021/5562801.

[3] Soenksen LR, Kassis T, Conover ST, Marti-Fuster B, Birkenfeld JS, Tucker-Schwartz J, et al. Using deep learning for dermatologist-level detection of suspicious pigmented skin lesions from wide-field images. Sci Transl Med. 2021;13(581):eabb3652. 10.1126/scitranslmed.abb3652.

[4] D’Alonzo M, Bozkurt A, Alessi-Fox C, Gill M, Brooks DH, Rajadhyaksha M, et al. Semantic segmentation of reflectance confocal microscopy mosaics of pigmented lesions using weak labels. Sci Rep. 2021;11(1):1–13. 10.1038/s41598-021-82969-9.

[5] Zhao M, Kawahara J, Shamanian S, Abhishek K, Chandrashekar P, Hamarneh G. Detection and longitudinal tracking of pigmented skin lesions in 3D total-body skin textured meshes. arXiv preprint arXiv:2105.00374; 2021. 10.48550/arXiv.2105.00374.

[6] Gold PJ, Indra G, Akila K, Pavithra P. Pre clinical diagnosis of melanomatumor in skin using machine learning techniques. Ann Rom Soc Cell Biol. 2021;25:3494–501.

[7] Maron RC, Haggenmüller S, von Kalle C, Utikal JS, Meier F, Gellrich FF, et al. Robustness of convolutional neural networks in recognition of pigmented skin lesions. Eur J Cancer. 2021;145:81–91. 10.1016/j.ejca.2020.11.020.

[8] Jinnai S, Yamazaki N, Hirano Y, Sugawara Y, Ohe Y, Hamamoto R. The development of a skin cancer classification system for pigmented skin lesions using deep learning. Biomolecules. 2020;10(8):1123. 10.3390/biom10081123.

[9] Estrada S, Shackelton J, Cleaver N, Depcik-Smith N, Cockerell C, Lencioni S, et al. Development and validation of a diagnostic 35-gene expression profile test for ambiguous or difficult-to-diagnose suspicious pigmented skin lesions. SKIN J Cutan Med. 2020;4(6):506–22. 10.25251/skin.4.6.3.

[10] Lucius M, De All J, De All JA, Belvisi M, Radizza L, Lanfranconi M, et al. Deep neural frameworks improve the accuracy of general practitioners in the classification of pigmented skin lesions. Diagnostics. 2020;10(11):969. 10.3390/diagnostics10110969.

[11] Bareiro Paniagua LR, Leguizamón Correa DN, Pinto-Roa DP, Vázquez Noguera JL, Salgueiro Toledo LA. Computerized medical diagnosis of melanocytic lesions based on the ABCD approach. CLEI Electron J. 2016;19(2):6. 10.19153/cleiej.19.2.5.

[12] She Z, Liu Y, Damatoa A. Combination of features from skin pattern and abcd analysis for lesion classification. Skin Res Technol. 2007;13(1):25–33. 10.1111/j.1600-0846.2007.00181.x.

[13] Argenziano G, Catricalà C, Ardigo M, Buccini P, De Simone P, Eibenschutz L, et al. Seven-point checklist of dermoscopy revisited. Br J Dermatol. 2011;164(4):785–90. 10.1111/j.1365-2133.2010.10194.x.

[14] Garcia-Arroyo JL, Garcia-Zapirain B. Recognition of pigment network pattern in dermoscopy images based on fuzzy classification of pixels. Comput Methods Prog Biomed. 2018;153:61–9. 10.1016/j.cmpb.2017.10.005.

[15] Argenziano G, Zalaudek I, Corona R, Sera F, Cicale L, Petrillo G, et al. Vascular structures in skin tumors: a dermoscopy study. Arch Dermatol. 2004;140(12):1485–9. 10.1001/archderm.140.12.1485.

[16] Mackie RM, Doherty VR. Seven-point checklist for melanoma. Clin Exp Dermatol. 1991;16(2):151–2. 10.1111/j.1365-2230.1991.tb00329.x.

[17] Argenziano G, Fabbrocini G, Carli P, De Giorgi V, Sammarco E, Delfino M. Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions: comparison of the abcd rule of dermatoscopy and a new 7-point checklist based on pattern analysis. Arch Dermatol. 1998;134(12):1563–70. 10.1001/archderm.134.12.1563.

[18] Capdehourat G, Corez A, Bazzano A, Alonso R, Musé P. Toward a combined tool to assist dermatologists in melanoma detection from dermoscopic images of pigmented skin lesions. Pattern Recognit Lett. 2011;32(16):2187–96. 10.1016/j.patrec.2011.06.015.

[19] Korotkov K, Garcia R. Computerized analysis of pigmented skin lesions: a review. Artif Intell Med. 2012;56(2):69–90. 10.1016/j.artmed.2012.08.002.

[20] Maglogiannis I, Doukas CN. Overview of advanced computer vision systems for skin lesions characterization. IEEE Trans Inf Technol Biomed. 2009;13(5):721–33. 10.1109/TITB.2009.2017529.

[21] Korotkov K. Automatic change detection in multiple pigmented skin lesions. PhD thesis. Spain: University of Girona; 2014. http://hdl.handle.net/10256/9276.

[22] Celebi ME, Iyatomi H, Schaefer G, Stoecker WV. Lesion border detection in dermoscopy images. Comput Med Imaging Graph. 2009;33(2):148–53. 10.1016/j.compmedimag.2008.11.002.

[23] Lee T, Ng V, Gallagher R, Coldman A, McLean D. DullRazor: A software approach to hair removal from images. Comput Biol Med. 1997;27(6):533–43. 10.1016/S0010-4825(97)00020-6.

[24] Wighton P, Lee TK, Atkins MS. Dermascopic hair disocclusion using inpainting. In Medical Imaging. San Diego, California, United States: International Society for Optics and Photonics; 2008. p. 691427. 10.1117/12.770776.

[25] Kiani K, Sharafat AR. E-shaver: An improved DullRazor for digitally removing dark and light-colored hairs in dermoscopic images. Comput Biol Med. 2011;41(3):139–45. 10.1016/j.compbiomed.2011.01.003.

[26] Toossi MT, Pourreza HR, Zare H, Sigari MH, Layegh P, Azimi A. An effective hair removal algorithm for dermoscopy images. Skin Res Technol. 2013;19(3):230–5. 10.1111/srt.12015.

[27] Nguyen NH, Lee TK, Atkins MS. Segmentation of light and dark hair in dermoscopic images: a hybrid approach using a universal kernel. In SPIE Medical Imaging. San Diego, California, United States: International Society for Optics and Photonics; 2010. p. 76234N. 10.1117/12.844572.

[28] Xie FY, Qin SY, Jiang ZG, Meng RS. PDE-based unsupervised repair of hair-occluded information in dermoscopy images of melanoma. Comput Med Imaging Graph. 2009;33(4):275–82. 10.1016/j.compmedimag.2009.01.003.

[29] Fiorese M, Peserico E, Silletti A. Virtualshave: automated hair removal from digital dermatoscopic images. In Conference proceedings Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Vol. 2011. IEEE Engineering in Medicine and Biology Society. Annual Conference; 2010. p. 5145–8. 10.1109/IEMBS.2011.6091274.

[30] Huang A, Kwan SY, Chang WY, Liu MY, Chi MH, Chen GS. A robust hair segmentation and removal approach for clinical images of skin lesions. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2013. p. 3315–8. 10.1109/EMBC.2013.6610250.

[31] Abbas Q, Celebi ME, García IF. Hair removal methods: a comparative study for dermoscopy images. Biomed Signal Process Control. 2011;6(4):395–404. 10.1016/j.bspc.2011.01.003.

[32] Abbas Q, Fondón I, Rashid M. Unsupervised skin lesions border detection via two-dimensional image analysis. Comput Methods Prog Biomed. 2011;104(3):e1–15. 10.1016/j.cmpb.2010.06.016.

[33] Abbas Q, Garcia IF, Emre Celebi M, Ahmad W. A feature-preserving hair removal algorithm for dermoscopy images. Skin Res Technol. 2013;19(1):e27–36. 10.1111/j.1600-0846.2011.00603.x.

[34] Barata C, Marques JS, Rozeira J. A system for the detection of pigment network in dermoscopy images using directional filters. IEEE Trans Biomed Eng. 2012;59(10):2744–54. 10.1109/TBME.2012.2209423.

[35] Gomez DD, Butakoff C, Ersboll BK, Stoecker W. Independent histogram pursuit for segmentation of skin lesions. IEEE Trans Biomed Eng. 2008;55(1):157–61. 10.1109/TBME.2007.910651.

[36] Celebi ME, Iyatomi H, Schaefer G. Contrast enhancement in dermoscopy images by maximizing a histogram bimodality measure. In 2009 16th IEEE International Conference on Image Processing (ICIP). IEEE; 2009. p. 2601–4. 10.1109/ICIP.2009.5413990.

[37] Madooei A, Drew MS, Sadeghi M, Atkins MS. Automated pre-processing method for dermoscopic images and its application to pigmented skin lesion segmentation. In Color and Imaging Conference. Vol. 2012. Society for Imaging Science and Technology; 2012. p. 158–63. 10.2352/CIC.2012.20.1.art00028.

[38] Koehoorn J, Sobiecki AC, Boda D, Diaconeasa A, Doshi S, Paisey S, et al. Automated digital hair removal by threshold decomposition and morphological analysis. In International Symposium on Mathematical Morphology and Its Applications to Signal and Image Processing. Springer; 2015. p. 15–26. 10.1007/978-3-319-18720-4_2.

[39] Mirzaalian H, Lee TK, Hamarneh G. Hair enhancement in dermoscopic images using dual-channel quaternion tubularness filters and mrf-based multilabel optimization. IEEE Trans Image Process. 2014;23(12):5486–96. 10.1109/TIP.2014.2362054.

[40] Frangi AF, Niessen WJ, Vincken KL, Viergever MA. Multiscale vessel enhancement filtering. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 1998. p. 130–7. 10.1007/BFb0056195.

[41] Huang A, Chang WY, Liu HY, Chen GS. Capillary detection for clinical images of basal cell carcinoma. In 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI). IEEE; 2012. p. 306–9. 10.1109/ISBI.2012.6235545.

[42] Lee TK, Claridge E. Predictive power of irregular border shapes for malignant melanomas. Skin Res Technol. 2005;11(1):1–8. 10.1111/j.1600-0846.2005.00076.x.

[43] Zortea M, Flores E, Scharcanski J. A simple weighted thresholding method for the segmentation of pigmented skin lesions in macroscopic images. Pattern Recognit. 2017;64:92–104. 10.1016/j.patcog.2016.10.031.

[44] Dalila F, Zohra A, Reda K, Hocine C. Segmentation and classification of melanoma and benign skin lesions. Optik. 2017;140:749–61. 10.1016/j.ijleo.2017.04.084.

[45] Rehman Mu, Khan SH, Danish Rizvi SM, Abbas Z, Zafar A. Classification of skin lesion by interference of segmentation and convolution neural network. 2018 2nd International Conference on Engineering Innovation (ICEI). 2018. p. 81–5. 10.1109/ICEI18.2018.8448814.

[46] Yang T, Peng S, Hu P, Huang L. Pigmented skin lesion segmentation based on random forest and full convolutional neural networks. In Optics in Health Care and Biomedical Optics VIII. Vol. 10820. Beijing, China: International Society for Optics and Photonics; 2018. p. 108203M. 10.1117/12.2503941.

[47] Moradi N, Mahdavi-Amiri N. Kernel sparse representation based model for skin lesions segmentation and classification. Comput Methods Prog Biomed. 2019;182:105038. 10.1016/j.cmpb.2019.105038.

[48] Vasconcelos FFX, Medeiros AG, Peixoto SA, Reboucas Filho PP. Automatic skin lesions segmentation based on a new morphological approach via geodesic active contour. Cognit Syst Res. 2019;55:44–59. 10.1016/j.cogsys.2018.12.008.

[49] Tang P, Liang Q, Yan X, Xiang S, Sun W, Zhang D, et al. Efficient skin lesion segmentation using separable-Unet with stochastic weight averaging. Comput Methods Prog Biomed. 2019;178:289–301. 10.1016/j.cmpb.2019.07.005.

[50] Ibrahim E, Ewees AA, Eisa M. Proposed method for segmenting skin lesions images. In: Hitendra Sarma T, Sankar V, Shaik R, editors. Emerging Trends in Electrical, Communications, and Information Technologies. Lecture Notes in Electrical Engineering. Vol. 569. Singapore: Springer; 2020. 10.1007/978-981-13-8942-9_2.

[51] Liu L, Tsui YY, Mandal M. Skin lesion segmentation using deep learning with auxiliary task. J Imaging. 2021;7(4):67. 10.3390/jimaging7040067.

[52] Abd HJ, Abdullah AS, Alkafaji MSS. A new swarm intelligence information technique for improving information balancedness on the skin lesions segmentation. Int J Electr Comput Eng (IJECE). 2020;10(6):5703–8. 10.11591/ijece.v10i6.pp5703-5708.

[53] Murugan A, Nair SAH, Preethi AAP, Kumar KS. Diagnosis of skin cancer using machine learning techniques. Microprocess Microsyst. 2021;81:103727. 10.1016/j.micpro.2020.103727.

[54] Araújo RL, Rabêlo RDAL, Rodrigues JJPC, Silva RRVE. Automatic segmentation of melanoma skin cancer using deep learning. 2020 IEEE International Conference on E-health Networking, Application & Services (HEALTHCOM), Shenzhen, China. 2021. p. 1–6. 10.1109/HEALTHCOM49281.2021.9398926.

[55] Reis HC, Turk V, Khoshelham K, Kaya S. InSiNet: a deep convolutional approach to skin cancer detection and segmentation. Med Biol Eng Comput. 2022;60:643–62. 10.1007/s11517-021-02473-0.Suche in Google Scholar PubMed

[56] Houssein EH, Abdelkareem DA, Emam MM, Hameed MA, Younan M. An efficient image segmentation method for skin cancer imaging using improved golden jackal optimization algorithm. Comput Biol Med. 2022;149:106075. 10.1016/j.compbiomed.2022.106075.Suche in Google Scholar PubMed

[57] Ahammed M, Al Mamun M, Uddin MS. A machine learning approach for skin disease detection and classification using image segmentation. Healthc Anal. 2022;2:100122. 10.1016/j.health.2022.100122.Suche in Google Scholar

[58] Kaur R, GholamHosseini H, Sinha R, Lindén M. Automatic lesion segmentation using atrous convolutional deep neural networks in dermoscopic skin cancer images. BMC Med Imaging. 2022;22(1):1–13. 10.1186/s12880-022-00829-y.Suche in Google Scholar PubMed PubMed Central

[59] Olayah F, Senan EM, Ahmed IA, Awaji B. AI techniques of dermoscopy image analysis for the early detection of skin lesions based on combined CNN features. Diagnostics. 2023;13(7):1314. 10.3390/diagnostics13071314.Suche in Google Scholar PubMed PubMed Central

[60] Ghosh H, Rahat IS, Mohanty SN, Ravindra JVR, Sobur A. A study on the application of machine learning and deep learning techniques for skin cancer detection. Int J Comput Syst Eng. 2024;18(1):51–9. 10.5281/zenodo.10525954.Suche in Google Scholar

[61] Himel GMS, Islam MM, Al-Aff KA, Karim SI, Sikder MKU. Skin cancer segmentation and classification using vision transformer for automatic analysis in dermatoscopy-based non-invasive digital system. arXiv preprint arXiv:2401.04746; 2024. 10.13140/RG.2.2.30536.49925.Suche in Google Scholar

[62] Hu B, Zhou P, Yu H, Dai Y, Wang M, Tan S, et al. LeaNet: Lightweight U-shaped architecture for high-performance skin cancer image segmentation. Comput Biol Med. 2024;169:107919. 10.1016/j.compbiomed.2024.107919.Suche in Google Scholar PubMed

[63] Celebi ME, Wen QU, Iyatomi HI, Shimizu KO, Zhou H, Schaefer G. A state-of-the-art survey on lesion border detection in dermoscopy images. Dermoscopy Image Anal. 2015;10:97–129. 10.1201/b19107-5.Suche in Google Scholar

[64] Lankton S, Tannenbaum A. Localizing region-based active contours. IEEE Trans Image Process. 2008;17(11):2029–39. 10.1109/TIP.2008.2004611.Suche in Google Scholar PubMed PubMed Central

[65] Chan TF, Vese LA. Active contours without edges. IEEE Trans Image Process. 2001;10(2):266–77. 10.1109/83.902291.Suche in Google Scholar PubMed

[66] Vese LA, Chan TF. A multiphase level set framework for image segmentation using the mumford and shah model. Int J Comput Vis. 2002;50(3):271–93. 10.1023/A:1020874308076.Suche in Google Scholar

[67] Mumford D, Shah J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun Pure Appl Math. 1989;42(5):577–685. 10.1002/cpa.3160420503.Suche in Google Scholar

[68] Li F, Shen C, Li C. Multiphase soft segmentation with total variation and h1 regularization. J Math Imaging Vis. 2010;37(2):98–111. 10.1007/s10851-010-0195-5.Suche in Google Scholar

[69] Safi A, Baust M, Pauly O, Castaneda V, Lasser T, Mateus D, et al. Computer–aided diagnosis of pigmented skin dermoscopic images. In MICCAI International Workshop on Medical Content-Based Retrieval for Clinical Decision Support. Springer; 2011. p. 105–15. 10.1007/978-3-642-28460-1_10.Suche in Google Scholar

[70] Kang SH, March R. Multiphase image segmentation via equally distanced multiple well potential. J Vis Commun Image Represent. 2014;25(6):1446–59. 10.1016/j.jvcir.2014.04.008.Suche in Google Scholar

[71] Silveira M, Nascimento JC, Marques JS, Marçal AR, Mendonça T, Yamauchi S, et al. Comparison of segmentation methods for melanoma diagnosis in dermoscopy images. IEEE J Sel Top Signal Process. 2009;3(1):35–45. 10.1109/JSTSP.2008.2011119.Suche in Google Scholar

[72] Adjed F, Faye I, Ababsa F. Segmentation of skin cancer images using an extension of Chan and Vese model. In 2015 7th International Conference on Information Technology and Electrical Engineering (ICITEE). IEEE; 2015. p. 442–7. 10.1109/ICITEED.2015.7408987.

[73] Castillejos H, Ponomaryov V, Nino-de-Rivera L, Golikov V. Wavelet transform fuzzy algorithms for dermoscopic image segmentation. Comput Math Methods Med. 2012;2012:578721. 10.1155/2012/578721.

[74] Mallat SG. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans Pattern Anal Mach Intell. 1989;11(7):674–93. 10.1109/34.192463.

[75] Ma L, Qin B, Xu W, Zhu L. Multi-scale descriptors for contour irregularity of skin lesion using wavelet decomposition. In 2010 3rd International Conference on Biomedical Engineering and Informatics. Vol. 1. IEEE; 2010. p. 414–8. 10.1109/BMEI.2010.5639551.

[76] Ma L, Staunton RC. Analysis of the contour structural irregularity of skin lesions using wavelet decomposition. Pattern Recognit. 2013;46(1):98–106. 10.1016/j.patcog.2012.07.001.

[77] Strang G, Nguyen T. Wavelets and filter banks. SIAM; 1996; Mallat S. A wavelet tour of signal processing. United States: Academic Press; 1999.

[78] Starck JL, Candès EJ, Donoho DL. The curvelet transform for image denoising. IEEE Trans Image Process. 2002;11(6):670–84. 10.1109/TIP.2002.1014998.

[79] Abu Mahmoud M, Al-Jumaily A, Takruri MS. Wavelet and curvelet analysis for automatic identification of melanoma based on neural network classification. Int J Comput Inf Syst Ind Manag (IJCISIM). 2013;5:606–14. http://hdl.handle.net/10453/28022.

[80] Erkol B, Moss RH, Joe Stanley R, Stoecker WV, Hvatum E. Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes. Skin Res Technol. 2005;11(1):17–26. 10.1111/j.1600-0846.2005.00092.x.

[81] Mangan AP, Whitaker RT. Partitioning 3D surface meshes using watershed segmentation. IEEE Trans Vis Comput Graph. 1999;5(4):308–21. 10.1109/2945.817348.

[82] Meyer F, Beucher S. Morphological segmentation. J Vis Commun Image Represent. 1990;1(1):21–46. 10.1016/1047-3203(90)90014-M.

[83] Grau V, Mewes AU, Alcaniz M, Kikinis R, Warfield SK. Improved watershed transform for medical image segmentation using prior information. IEEE Trans Med Imaging. 2004;23(4):447–58. 10.1109/TMI.2004.824224.

[84] Pesaresi M, Benediktsson JA. A new approach for the morphological segmentation of high-resolution satellite imagery. IEEE Trans Geosci Remote Sens. 2001;39(2):309–20. 10.1109/36.905239.

[85] Sirts K, Goldwater S. Minimally-supervised morphological segmentation using adaptor grammars. Trans Assoc Comput Linguist. 2013;1:255–66. 10.1162/tacl_a_00225.

[86] Emre Celebi M, Wen Q, Hwang S, Iyatomi H, Schaefer G. Lesion border detection in dermoscopy images using ensembles of thresholding methods. Skin Res Technol. 2013;19(1):e252–8. 10.1111/j.1600-0846.2012.00636.x.

[87] Ganster H, Pinz P, Rohrer R, Wildling E, Binder M, Kittler H. Automated melanoma recognition. IEEE Trans Med Imaging. 2001;20(3):233–9. 10.1109/42.918473.

[88] Schmid P. Lesion detection in dermatoscopic images using anisotropic diffusion and morphological flooding. In 1999 International Conference on Image Processing (ICIP 99). Vol. 3. IEEE; 1999. p. 449–53. 10.1109/ICIP.1999.817154.

[89] Kapur JN, Sahoo PK, Wong AK. A new method for gray-level picture thresholding using the entropy of the histogram. Comput Vis Graph Image Process. 1985;29(3):273–85. 10.1016/0734-189X(85)90125-2.

[90] Emre Celebi M, Alp Aslandogan Y, Stoecker WV, Iyatomi H, Oka H, Chen X. Unsupervised border detection in dermoscopy images. Skin Res Technol. 2007;13(4):454–62. 10.1111/j.1600-0846.2007.00251.x.

[91] Garnavi R, Aldeen M, Celebi ME, Varigos G, Finch S. Border detection in dermoscopy images using hybrid thresholding on optimized color channels. Comput Med Imaging Graph. 2011;35(2):105–15. 10.1016/j.compmedimag.2010.08.001.

[92] Abbas Q, Garcia IF, Emre Celebi M, Ahmad W, Mushtaq Q. Unified approach for lesion border detection based on mixture modeling and local entropy thresholding. Skin Res Technol. 2013;19(3):314–9. 10.1111/srt.12047.

[93] Bhuiyan MA, Azad I, Uddin MK. Image processing for skin cancer features extraction. Int J Sci Eng Res. 2013;4(2):1–6. http://www.ijser.org.

[94] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput. 2002;6(2):182–97. 10.1109/4235.996017.

[95] Xie F, Bovik AC. Automatic segmentation of dermoscopy images using self-generating neural networks seeded by genetic algorithm. Pattern Recognit. 2013;46(3):1012–9. 10.1016/j.patcog.2012.08.012.

[96] Sonka M, Hlavac V, Boyle R. Image processing, analysis, and machine vision. Cengage Learning; 2014. 10.1007/978-1-4899-3216-7.

[97] Wang H, Chen X, Moss RH, Stanley RJ, Stoecker WV, Celebi ME, et al. Watershed segmentation of dermoscopy images using a watershed technique. Skin Res Technol. 2010;16(3):378–84. 10.1111/j.1600-0846.2010.00445.x.

[98] Zhou H, Schaefer G, Sadka AH, Celebi ME. Anisotropic mean shift based fuzzy c-means segmentation of dermoscopy images. IEEE J Sel Top Signal Process. 2009;3(1):26–34. 10.1109/JSTSP.2008.2010631.

[99] Sobiecki A, Jalba A, Boda D, Diaconeasa A, Telea AC. Gap-sensitive segmentation and restoration of digital images. In TPCG. UK: University of Groningen; 2014. p. 1–8. 10.2312/cgvc.20141200.

[100] Glaister J, Wong A, Clausi DA. Segmentation of skin lesions from digital images using joint statistical texture distinctiveness. IEEE Trans Biomed Eng. 2014;61(4):1220–30. 10.1109/TBME.2013.2297622.

[101] Zhou H, Chen M, Zou L, Gass R, Ferris L, Drogowski L, et al. Spatially constrained segmentation of dermoscopy images. In 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI 2008). IEEE; 2008. p. 800–3. 10.1109/ISBI.2008.4541117.

[102] Qi J, Le M, Li C, Zhou P. Global and local information based deep network for skin lesion segmentation. arXiv preprint arXiv:1703.05467; 2017. 10.48550/arXiv.1703.05467.

[103] Clawson KM, Morrow P, Scotney B, McKenna J, Dolan O. Analysis of pigmented skin lesion border irregularity using the harmonic wavelet transform. In 13th International Machine Vision and Image Processing Conference (IMVIP '09). IEEE; 2009. p. 18–23. 10.1109/IMVIP.2009.11.

[104] Umbaugh S, Wei Y-S, Zuke M. Feature extraction in image analysis. A program for facilitating data reduction in medical image classification. IEEE Eng Med Biol. 1997;16(4):62–73. 10.1109/51.603650.

[105] Zagrouba E, Barhoumi W. An accelerated system for melanoma diagnosis based on subset feature selection. J Comput Inf Tech. 2005;13(1):69–82. 10.2498/cit.2005.01.06.

[106] Rohrer R, Ganster H, Pinz A, Binder M. Feature selection in melanoma recognition. In Proceedings International Conference on Pattern Recognition (ICPR). Vol. 2. Los Alamitos, CA: IEEE Computer Society Press; 1998. p. 1668–70. 10.1109/ICPR.1998.712040.

[107] Celebi M, Aslandogan Y. Content-based image retrieval incorporating models of human perception. In Proceedings International Conference on Information Technology: Coding and Computing (ITCC). Vol. 2. Los Alamitos, CA: IEEE Computer Society Press; 2004. p. 241–5. 10.1109/ITCC.2004.1286639.

[108] Chang Y, Stanley RJ, Moss RH, Van Stoecker W. A systematic heuristic approach for feature selection for melanoma discrimination using clinical images. Skin Res Technol. 2005;11(3):165–78. 10.1111/j.1600-0846.2005.00116.x.

[109] Situ N, Yuan X, Wadhawan T, Zouridakis G. Computer-aided skin cancer screening: feature selection or feature combination. In Proc. IEEE Int. Conf. Image Process. (ICIP). Piscataway, NJ: IEEE Press; 2010. p. 273–6. 10.1155/2013/323268.

[110] Cavalcanti P, Scharcanski J. Automated prescreening of pigmented skin lesions using standard cameras. Comput Med Imaging Graph. 2011;35:481–91. 10.1016/j.compmedimag.2011.02.007.

[111] Iyatomi H, Norton K, Celebi M, Schaefer G, Tanaka M, Ogawa K. Classification of melanocytic skin lesions from non-melanocytic lesions. In Proc. IEEE Annual International Conference Engineering in Medicine and Biology Society (EMBC). Piscataway, NJ: IEEE Press; 2010. 10.1109/IEMBS.2010.5626500.

[112] Betta G, Di Leo G, Fabbrocini G, Paolillo A, Scalvenzi M. Automated application of the “7-point checklist” diagnosis method for skin lesions: Estimation of chromatic and shape parameters. In Proceedings IEEE Conference on Instrumentation and Measurement Technology (IMTC). Vol. 22(2). Piscataway, NJ: IEEE Press; 2005. p. 1818. 10.1109/IMTC.2005.1604486.

[113] Betta G, Di Leo G, Fabbrocini G, Paolillo A, Sommella P. Dermoscopic image analysis system: estimation of atypical pigment network and atypical vascular pattern. In Proceedings IEEE International Workshop on Medical Measurement and Applications. Los Alamitos, CA: IEEE Computer Society Press; 2006. p. 63–7. 10.1109/MEMEA.2006.1644462.

[114] Di Leo G, Liguori C, Paolillo A, Sommella P. An improved procedure for the automatic detection of dermoscopic structures in digital ELM images of skin lesions. In Proceedings IEEE Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems. Piscataway, NJ: IEEE Press; 2008. p. 190–4. 10.1109/VECIMS.2008.4592778.

[115] Yoshino S, Tanaka T, Tanaka M, Oka H. Application of morphology for detection of dots in tumor. In Proceedings SICE Annual Conference. Vol. 1. Piscataway, NJ: IEEE Press; 2004. p. 591–4.

[116] Mirzaalian H, Lee TK, Hamarneh G. Learning features for streak detection in dermoscopic color images using localized radial flux of principal intensity curvature. In Proceedings IEEE Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA). IEEE; 2012. p. 97–101. 10.1109/MMBIA.2012.6164758.

[117] Sadeghi M, Lee T, McLean D, Lui H, Atkins M. Detection and analysis of irregular streaks in dermoscopic images of skin lesions. IEEE Trans Med Imaging. 2013;32(5):849–61. 10.1109/TMI.2013.2239307.

[118] Fabbrocini G, Betta G, Di Leo G, Liguori C, Paolillo A, Pietrosanto A, et al. Epiluminescence image processing for melanocytic skin lesion diagnosis based on 7-point check-list: A preliminary discussion on three parameters. Open Dermatol J. 2010;4:110–5. 10.2174/1874372201004010110.

[119] Stanganelli I, Brucale A, Calori L, Gori R, Lovato A, Magi S, et al. Computer-aided diagnosis of melanocytic lesions. Anticancer Res. 2005;25:4577–82. https://ar.iiarjournals.org/content/25/6C/4577.full.pdf.

[120] d'Amico M, Ferri M, Stanganelli I. Qualitative asymmetry measure for melanoma detection. In Proc. IEEE Int. Symp. Biomed. Imag.: From Nano to Macro. Piscataway, NJ: IEEE Press; 2004. p. 1155–8. 10.1109/ISBI.2004.1398748.

[121] Ferri M, Stanganelli I. Size functions for the morphological analysis of melanocytic lesions. Int J Biomed Imaging. 2010;2010(1):621357. 10.1155/2010/621357.

[122] Zhou H, Chen M, Rehg J. Dermoscopic interest point detector and descriptor. In Proceedings IEEE International Symposium on Biomedical Imaging: From Nano to Macro. Piscataway, NJ: IEEE Press; 2009. p. 1318–21. 10.1109/ISBI.2009.5193307.

[123] Manne R, Kantheti S, Kantheti S. Classification of skin cancer using deep learning, convolutional neural networks-opportunities and vulnerabilities-a systematic review. Int J Mod Trends Sci Technol (ISSN 2455-3778). 2020;6. 10.46501/IJMTST061118.

[124] Baig R, Bibi M, Hamid A, Kausar S, Khalid S. Deep learning approaches towards skin lesion segmentation and classification from dermoscopic images-a review. Curr Med Imaging. 2020;16(5):513–33. 10.2174/1573405615666190129120449.

[125] Hameed N, Shabut AM, Ghosh MK, Hossain MA. Multi-class multi-level classification algorithm for skin lesions classification using machine learning techniques. Expert Syst Appl. 2020;141:112961. 10.1016/j.eswa.2019.112961.

[126] Aljanabi M, Enad MH, Chyad RM, Jumaa FA, Mosheer AD, Ali Altohafi AS. A review ABCDE evaluated the model for decision by dermatologists for skin lesions using bee colony. In IOP Conference Series: Materials Science and Engineering. Vol. 745, No. 1. IOP Publishing; 2020. p. 012098. 10.1088/1757-899X/745/1/012098.

[127] Razmjooy N, Ashourian M, Karimifard M, Estrela VV, Loschi HJ, do Nascimento D, et al. Computer-aided diagnosis of skin cancer: a review. Curr Med Imaging. 2020;16(7):781–93. 10.2174/1573405616666200129095242.

[128] Singh L, Janghel RR, Sahu SP. Automated CAD system for skin lesion diagnosis: a review. Adv Biomed Eng Technol. 2021;4:295–320. 10.1007/978-981-15-6329-4_26.

[129] Mohapatra S, Abhishek NVS, Bardhan D, Ghosh AA, Mohanty S. Skin cancer classification using convolution neural networks. In Advances in Distributed Computing and Machine Learning. Singapore: Springer; 2021. p. 433–42. 10.1007/978-981-15-4218-3_42.

[130] Santos F, Silva F, Georgieva P. Automated diagnosis of skin lesions. In 2020 IEEE 10th International Conference on Intelligent Systems (IS). IEEE; 2020. p. 545–50. 10.1109/IS48319.2020.9200090.

[131] Pacheco AG, Lima GR, Salomão AS, Krohling BA, Biral IP, de Angelo GG, et al. PAD-UFES-20: a skin lesion benchmark composed of patient data and clinical images collected from smartphones. arXiv preprint arXiv:2007.00478; 2020. 10.1016/j.dib.2020.106221.

[132] Saeed J, Zeebaree S. Skin lesion classification based on deep convolutional neural networks architectures. J Appl Sci Technol Trends. 2021;2(1):41–51. 10.38094/jastt20189.

[133] Mabrouk MS, Sayed AY, Afifi HM, Sheha MA, Sharwy A. Fully automated approach for early detection of pigmented skin lesion diagnosis using ABCD. J Healthc Inform Res. 2020;4(2):151–73. 10.1007/s41666-020-00067-3.

[134] Birkenfeld JS, Tucker-Schwartz JM, Soenksen LR, Avilés-Izquierdo JA, Marti-Fuster B. Computer-aided classification of suspicious pigmented lesions using wide-field images. Comput Methods Prog Biomed. 2020;195:105631. 10.1016/j.cmpb.2020.105631.

[135] Ghalejoogh GS, Kordy HM, Ebrahimi F. A hierarchical structure based on stacking approach for skin lesion classification. Expert Syst Appl. 2020;145:113127. 10.1016/j.eswa.2019.113127.

[136] Mohapatra S, Abhishek NVS, Bardhan D, Ghosh AA, Mohanty S. Comparison of MobileNet and ResNet CNN architectures in the CNN-based skin cancer classifier model. Mach Learn Healthc Appl. 2021;33:169–86. 10.1002/9781119792611.ch11.

[137] Surowka G. Supervised learning of melanocytic skin lesion images. In: Piatek L, editor. Proc. Conference on Human System Interactions (HSI). Piscataway, NJ: IEEE Press; 2008. p. 121–5. 10.1109/HSI.2008.4581420.

[138] Maglogiannis I, Zafiropoulos E. Characterization of digital medical images utilizing support vector machines. BMC Med Inform Decis Mak. 2004;4:1–9. 10.1186/1472-6947-4-4.

[139] Rahman M, Bhattacharya P. An integrated and interactive decision support system for automated melanoma recognition of dermoscopic images. Comput Med Imaging Graph. 2010;34(6):479–86. 10.1016/j.compmedimag.2009.10.003.

[140] Baldi A, Murace R, Dragonetti E, Manganaro M, Guerra O, Bizzi S, et al. Definition of an automated content-based image retrieval (CBIR) system for the comparison of dermoscopic images of pigmented skin lesions. Biomed Eng Online. 2009;8(1):18–28. 10.1186/1475-925X-8-18.

[141] Larabi M, Richard N, Fernandez-Maloigne C. Using combination of color, texture and shape features for image retrieval in melanomas databases. In: Beretta GB, Schettini R, editors. Proc. SPIE, ser. Internet Imaging III. Vol. 4672. San Jose, CA: SPIE; 2002. p. 147–56. 10.1117/12.452668.

[142] Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–8. 10.1038/nature21056.

[143] Codella NC, Gutman D, Celebi ME, Helba B, Marchetti MA, Dusza SW, et al. Skin lesion analysis toward melanoma detection: a challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In 2018 IEEE 15th International Symposium on Biomedical Imaging. IEEE; 2018. p. 168–72. 10.1109/ISBI.2018.8363547.

[144] Tschandl P, Rosendahl C, Kittler H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci Data. 2018;5:180161. 10.1038/sdata.2018.161.

[145] Combalia M, Codella NC, Rotemberg V, Helba B, Vilaplana V, Reiter O, et al. BCN20000: Dermoscopic lesions in the wild. arXiv preprint arXiv:1908.02288; 2019. 10.48550/arXiv.1908.02288.

[146] Argenziano G, Soyer HP, De Giorgio V, Piccolo D, Carli P, Delfino M, et al. Interactive atlas of dermoscopy. Milan, Italy: Edra Medical Publishing & New Media; 2000. https://espace.library.uq.edu.au/view/UQ:229410.

[147] A cognitive prosthesis to aid focal skin lesion diagnosis. Accessed: July 12, 2024. [Online]. Available: https://homepages.inf.ed.ac.uk/rbf/DERMOFIT/.

[148] Mendonça T, Ferreira PM, Marques JS, Marcal AR, Rozeira J. PH2-A dermoscopic image database for research and benchmarking. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2013. p. 5437–40. 10.1109/EMBC.2013.6610779.

[149] Giotis I, Molders N, Land S, Biehl M, Jonkman MF, Petkov N. MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Syst Appl. 2015;42(19):6578–85. 10.1016/j.eswa.2015.04.034.

[150] Han SS, Kim MS, Lim W, Park GH, Park I, Chang SE. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J Investig Dermatol. 2018;138(7):1529–38. 10.1016/j.jid.2018.01.028.

[151] Han SS, Park GH, Lim W, Kim MS, Na JI, Park I, et al. Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network. PLoS One. 2018;13(1):e0191493. 10.1371/journal.pone.0191493.

[152] Yang J, Sun X, Liang J, Rosin PL. Clinical skin lesion diagnosis using representations inspired by dermatologist criteria. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018. p. 1258–66. 10.1109/CVPR.2018.00137.

[153] Yang J, Wu X, Liang J, Sun X, Cheng MM, Rosin PL, et al. Self-paced balance learning for clinical skin disease recognition. IEEE Trans Neural Netw Learn Syst. 2019;31(8):2832–46. 10.1109/TNNLS.2019.2917524.

[154] DermNet NZ. Accessed: Jun 15, 2024. [Online]. Available: https://www.dermnetnz.org/.

[155] Kawahara J, Daneshvar S, Argenziano G, Hamarneh G. Seven-point checklist and skin lesion classification using multitask multimodal neural nets. IEEE J Biomed Health Inform. 2018;23(2):538–46. 10.1109/JBHI.2018.2824327.

[156] The Cancer Genome Atlas Program. Accessed: May 10, 2024. [Online]. Available: https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga/history/policies/tcga-human-subjects-data-policies.pdf.

[157] Codella NC, Nguyen QB, Pankanti S, Gutman DA, Helba B, Halpern AC, et al. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J Res Dev. 2017;61(4/5):5. 10.1147/JRD.2017.2708299.

[158] Haenssle HA, Fink C, Schneiderbauer R, Toberer F, Buhl T, Blum A, et al. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol. 2018;29(8):1836–42. 10.1093/annonc/mdy166.

[159] Brinker TJ, Hekler A, Enk AH, Klode J, Hauschild A, Berking C, et al. Deep learning outperformed 136 of 157 dermatologists in a head-to-head dermoscopic melanoma image classification task. Eur J Cancer. 2019;113:47–54. 10.1016/j.ejca.2019.04.001.

[160] Tschandl P, Rosendahl C, Akay BN, Argenziano G, Blum A, Braun RP, et al. Expert-level diagnosis of nonpigmented skin cancer by combined convolutional neural networks. JAMA Dermatol. 2019;155(1):58–65. 10.1001/jamadermatol.2018.4378.

[161] Maron RC, Weichenthal M, Utikal JS, Hekler A, Berking C, Hauschild A, et al. Systematic outperformance of 112 dermatologists in multiclass skin cancer image classification by convolutional neural networks. Eur J Cancer. 2019;119:57–65. 10.1016/j.ejca.2019.06.013.

[162] Haenssle HA, Fink C, Toberer F, Winkler J, Stolz W, Deinlein T, et al. Man against machine reloaded: performance of a market-approved convolutional neural network in classifying a broad spectrum of skin lesions in comparison with 96 dermatologists working under less artificial conditions. Ann Oncol. 2020;31(1):137–43. 10.1016/j.annonc.2019.10.013.

[163] Tschandl P, Codella N, Akay BN, Argenziano G, Braun RP, Cabo H, et al. Comparison of the accuracy of human readers versus machine-learning algorithms for pigmented skin lesion classification: an open, web-based, international, diagnostic study. Lancet Oncol. 2019;20(7):938–47. 10.1016/S1470-2045(19)30333-X.

Received: 2024-08-23
Accepted: 2024-12-04
Published Online: 2025-02-20

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.

  41. An anomaly analysis method for measurement data based on similarity metric and improved deep reinforcement learning under the power Internet of Things architecture
  42. Optimizing papaya disease classification: A hybrid approach using deep features and PCA-enhanced machine learning
  43. Handwritten digit recognition: Comparative analysis of ML, CNN, vision transformer, and hybrid models on the MNIST dataset
  44. Multimodal data analysis for post-decortication therapy optimization using IoMT and reinforcement learning
  45. Predicting early mortality for patients in intensive care units using machine learning and FDOSM
  46. Uncertainty measurement for a three heterogeneous information system based on k-nearest neighborhood: Application to unsupervised attribute reduction
  47. Genetic algorithm-based dimensionality reduction method for classification of hyperspectral images
  48. Power line fault detection based on waveform comparison offline location technology
  49. Assessing model performance in Alzheimer's disease classification: The impact of data imbalance on fine-tuned vision transformers and CNN architectures
  50. Hybrid white shark optimizer with differential evolution for training multi-layer perceptron neural network
  51. Review Articles
  52. A comprehensive review of deep learning and machine learning techniques for early-stage skin cancer detection: Challenges and research gaps
  53. An experimental study of U-net variants on liver segmentation from CT scans
  54. Strategies for protection against adversarial attacks in AI models: An in-depth review
  55. Resource allocation strategies and task scheduling algorithms for cloud computing: A systematic literature review
  56. Latency optimization approaches for healthcare Internet of Things and fog computing: A comprehensive review
  57. Explainable clustering: Methods, challenges, and future opportunities
Downloaded on 26.12.2025 from https://www.degruyterbrill.com/document/doi/10.1515/jisys-2024-0381/html