
A novel defect generation model based on two-stage GAN

  • Yuming Zhang, Zhongyuan Gao, Chao Zhi, Mengqi Chen, Youyong Zhou, Shuai Wang, Sida Fu and Lingjie Yu
Published/Copyright: September 15, 2022

Abstract

Fabric defect detection models based on deep learning often demand numerous training samples to achieve high accuracy. However, obtaining a complete dataset containing all possible fabric textures and defects is a major challenge owing to the wide variety of fabric textures and defect forms. This study presents a two-stage deep pix2pixGAN network called the Dual Deep pix2pixGAN Network (DPGAN) to address this problem. A defect generation model was trained on the DPGAN network to automatically “transfer” defects from defective fabric images to clean, defect-free fabric images, thereby augmenting the training data. To evaluate the effectiveness of the defect generation model, extensive comparative experiments were conducted to assess the performance of fabric defect detection before and after data enhancement. The results indicate that the detection accuracy was improved for the belt_yarn, hole, and stain defects.

1 Introduction

Fabric defect detection plays a crucial role in quality control in textile production. Although human visual inspection can detect various forms of defects, it has obvious disadvantages, such as low efficiency, poor real-time performance, and low accuracy (1). Therefore, intelligent fabric defect detection has become a research trend in recent years.

Many models based on computer vision and image processing have been applied in fabric defect detection. Song et al. (2) described an improved fabric defect detection method based on the membership degree of each fabric region (TPA), in which an iterative threshold method was used to ensure precise and accurate detection. Jing et al. (3) constructed optimal Gabor filters through genetic algorithms to achieve accurate defect detection on the pattern fabric. Wang et al. (4) proposed a novel method for sequential detection of image defects in patterned fabrics. The defect image blocks could be identified according to the minimum principle of structural similarity, and the defect position could be located by distance measurement and threshold segmentation.

In traditional methods, feature extraction mainly relies on manually designed extractors, which require professional knowledge and a complex tuning process. Furthermore, each of the above methods was aimed at a specific application, so their generalization ability and robustness are unsatisfactory. Thus, many scholars have applied neural networks to fabric defect detection. Li et al. (5) proposed a compact convolutional neural network (CNN) architecture to detect a few common fabric defects. The proposed architecture used several microarchitectures with multi-layer perceptrons to optimize the network and improve detection accuracy. Zhao et al. (6) presented an integrated CNN model based on long-short-term visual memory, which extracted three types of features. Ouyang et al. (7) introduced a pair of potential activation layers into a CNN; candidate defect maps were generated by combining image pre-processing with fabric matrix sequence determination, raising detection accuracy above 98%. Huang et al. (8) proposed an efficient CNN for defect segmentation and detection that needs only a few defect samples, combined with standard samples, to learn the latent features of defects and locate them with high accuracy. Chen et al. (9) proposed an improved Faster R-CNN for fabric defect detection based on a Gabor filter with genetic algorithm optimization, in which general optimization methods such as the attention mechanism, DIoU-NMS, and a cosine annealing scheduler were applied and verified on the fabric defect localization task.

For deep learning networks, detection ability relies primarily on the size and variety of the training dataset (10). Theoretically, a training dataset covering all types of defects and textures would help build a highly accurate detection model. However, owing to the variety of fabric textures and defects, it is very difficult to obtain such a complete dataset, which restricts the further development and application of deep learning in fabric defect detection.

Therefore, to enhance the accuracy of detection models, many scholars have conducted extensive research on data augmentation. Typical data augmentation methods involve flipping, rotating, zooming, random cropping, color jittering, and adding noise (11,12). Moshkov et al. (13) improved single-cell image segmentation by rotation and flipping. Wan and Goudos (14) enhanced fruit datasets by changing brightness and rotation, enabling more accurate and faster detection with an improved Faster R-CNN. Ogawa et al. (15) enhanced datasets using horizontal and vertical flipping, Gaussian blur, and brightness changes to improve DCNN performance on chest radiograph classification. Teramoto et al. (16) used cropping and filtering to enhance data and improve the accuracy of lung cancer classification. However, these methods do not enrich the image information: only minor transformations are performed on the original datasets, subject to many constraints and with negligible generalization effect (17,18,19).

The generative adversarial network (GAN) was pioneered by Goodfellow et al. (20) in 2014. GAN is a generative model based on zero-sum game theory, mainly used to generate realistic images and expand datasets. In recent years, Ma et al. (21) proposed FusionGAN, a network that fuses the thermal radiation of infrared images with the textures of visible images; extensive results demonstrated that their strategy could generate clean images free of the noise caused by upsampling infrared information. Wang et al. (22) proposed a novel method based on 3D conditional GANs (3D c-GANs) to estimate high-quality full-dose positron emission tomography images from low-dose ones; experimental results showed that the 3D c-GANs outperformed the benchmark method and achieved much better performance than state-of-the-art methods in both qualitative and quantitative measures. Rivenson et al. (23) adopted a GAN model to transform wide-field autofluorescence images of unlabeled tissue sections into images equivalent to bright-field images; the results verified that the generated images did not differ from the standard images.

Furthermore, many algorithms for automatically synthesizing defective fabric samples based on GANs have been proposed in recent years. Hu et al. (24) presented an unsupervised method for detecting fabric defects based on a deep convolutional generative adversarial network (DCGAN). Liu et al. (25) used a multi-layer GAN model to synthesize plausible defects in defect-free samples and verified the effectiveness of the method through comparative experiments. Li et al. (26) proposed ManiGAN, which reconstructs the original image from input text information.

This study proposes an enhancement method for fabric defect datasets based on the Dual Deep pix2pixGAN Network (DPGAN). The experimental results showed that our method could effectively improve the accuracy of defect detection. The main contributions of this study are as follows: (1) a fabric defect generation model was designed to automatically generate defects on clean, defect-free fabric images; (2) unlike other models, which are limited to generating defects on fabric images with particular textures, this model can be trained to generate defects on any texture; (3) a two-stage network structure was proposed to make the generated defects blend more naturally into the fabric background.

The rest of the study is organized as follows. In Section 2, the GAN network and the target detection model are briefly introduced. Section 3 details the generative image methods proposed in this study. Section 4 reports and discusses our experimental results. Finally, we conclude our study in Section 5.

2 Related works

2.1 GAN

GAN is a generative model based on zero-sum game theory. It contains a generative network G (generator) and a discriminating network D (discriminator). Specifically, G receives a random noise z and generates an image; D judges whether an image is “real.” The input of D is an image x, and the output D(x) represents the probability that x is a real image: D(x) = 1 means the image is certainly real, while D(x) = 0 means it cannot be real. During training, the goal of G is to generate fake images that fool D, while the goal of D is to identify the fake images generated by G. In this way, G and D constitute a dynamic “game process,” whose final equilibrium point is the Nash equilibrium. The core objective function of GAN can be expressed as the following minimax formulation:

(1) $\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$

The loss function shown in Eq. 1 is composed of two expectation terms, where x represents a real sample, z is the random noise input to the generator, G(z) denotes a generated sample, p represents a data distribution, and E denotes the expected value over the sample distribution. The generator aims to make D(G(z)) as large as possible, i.e., to make V(D, G) as small as possible. In contrast, the discriminator aims to make D(x) as large as possible while making D(G(z)) as small as possible, i.e., to make V(D, G) as large as possible.
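As a concrete reading of Eq. 1, the following minimal TensorFlow sketch (our own illustration, not code from the study) expresses the two sides of the minimax game as binary cross-entropy terms; the generator part uses the common non-saturating variant, which maximizes log D(G(z)) instead of minimizing log(1 − D(G(z))):

```python
import tensorflow as tf

# Binary cross-entropy on raw discriminator logits.
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # D maximizes E[log D(x)] + E[log(1 - D(G(z)))]:
    # label real samples 1 and generated samples 0.
    return bce(tf.ones_like(real_logits), real_logits) + \
           bce(tf.zeros_like(fake_logits), fake_logits)

def generator_loss(fake_logits):
    # Non-saturating form: G tries to make D(G(z)) large,
    # i.e., to have its samples labeled as real.
    return bce(tf.ones_like(fake_logits), fake_logits)
```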

GAN can generate new samples, but the sample type cannot be controlled because the only input is the random noise z. This motivated a variant of GAN, the conditional generative adversarial network (CGAN). Compared with GAN, CGAN adds a condition y to both the generator and the discriminator, which guides the generator to produce the desired images. However, in the case of defective fabric samples, one condition is insufficient because of the wide variety of textures and defects. The pix2pixGAN is a typical image-to-image translation network whose input and output share a large amount of information: when defects are generated on a clean, defect-free fabric image, only a small area of the image (the defect area) is changed, while the remaining texture information is preserved.

Furthermore, since texture and defect forms vary across fabrics, it is difficult for a network to extract features common to all fabric textures. In this regard, the pix2pixGAN model, which is trained on paired images, is suitable for fabric defect generation. Its core objective function can be expressed as the following minimax formulation:

(2) $L_{\mathrm{cGAN}}(G,D) = \mathbb{E}_{x,y}[\log D(x,y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x,z)))]$

(3) $L_{L1}(G) = \mathbb{E}_{x,y,z}[\lVert y - G(x,z) \rVert_1]$

(4) $G^{*} = \arg\min_G \max_D L_{\mathrm{cGAN}}(G,D) + \lambda L_{L1}(G)$

where y represents the real image and x represents the input image; pairs of images (x and y) are needed to train pix2pix. z refers to the random noise, G is the generator, D is the discriminator, and G(x, z) represents the generated image.
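A minimal sketch of Eqs. 2–4 in the same TensorFlow style (an illustration under the paper's notation, not the authors' code); `lam` corresponds to λ in Eq. 4:

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def pix2pix_generator_loss(disc_fake_logits, generated, target, lam=100.0):
    # Adversarial term of Eq. 2 plus the L1 term of Eq. 3,
    # combined as in Eq. 4; lam = 100 is the pix2pix default,
    # while this study reports 200 (raised to 500 in Stage 2).
    adv = bce(tf.ones_like(disc_fake_logits), disc_fake_logits)
    l1 = tf.reduce_mean(tf.abs(target - generated))
    return adv + lam * l1

def pix2pix_discriminator_loss(disc_real_logits, disc_fake_logits):
    # The conditional discriminator sees (x, y) as real
    # and (x, G(x, z)) as fake.
    return bce(tf.ones_like(disc_real_logits), disc_real_logits) + \
           bce(tf.zeros_like(disc_fake_logits), disc_fake_logits)
```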

2.2 Detection model

Generally speaking, current target detection algorithms can be roughly divided into two categories: two-stage and one-stage. A two-stage algorithm splits detection into two steps: candidate-region CNN features are produced first, then the regions are classified and the target locations are refined. Its core components are the CNN backbone and the region proposal network (RPN); VGG, ResNet, and GoogLeNet are commonly used backbone networks for deep feature extraction. Two-stage detectors, including R-CNN, Fast R-CNN, Faster R-CNN, and Mask R-CNN, have low error and miss rates but are slow and cannot meet real-time detection requirements.

This study adopted three popular object detection models, Faster R-CNN, YOLO V3, and M2det, to obtain the fabric defect detection results before and after data enhancement.

Faster R-CNN incorporates several modules, including feature extraction, proposal extraction, bounding box regression, and classification, into one network, greatly improving the all-around performance, particularly the detection speed. Specifically, Faster R-CNN can be divided into four main components: convolutional layers, the RPN, ROI pooling, and classification.

Compared with the Faster R-CNN network, the YOLO V3 network is slightly weaker in target detection accuracy, but its detection speed is dramatically higher. YOLO V3 achieves a satisfactory detection effect on small objects through multi-scale object detection and is mainly built on Darknet-53. Darknet-53 first applies a 3 × 3 convolution to obtain a feature layer; convolutions of 1 × 1 and 3 × 3 are then applied, and the output proceeds through several layers of residual structures. The network is deepened by successive 1 × 1 convolution layers and the superposition of residual edges. An FPN feature pyramid is then constructed to enhance feature extraction, followed by position regression and classification to identify objects from the multi-level features.

M2det is a one-stage target detection model. In the M2det structure, a multi-level feature pyramid network (MLFPN) is proposed mainly to address the shortcomings of other feature pyramids. M2det uses the backbone network and the MLFPN to extract image features; dense bounding boxes and category scores are then predicted in an SSD-like manner, and the final detection results are obtained via non-maximum suppression. The MLFPN consists of three modules: the feature fusion module, the thinned U-shape module, and the scale-wise feature aggregation module.

3 Methods

In this study, a fabric defect generation model called DPGAN was designed. The proposed DPGAN method mainly includes two stages, and the overview of DPGAN is displayed in Figure 1.

Figure 1: The overview of DPGAN.

3.1 Stage 1

The first stage of DPGAN trains a pre-trained model to generate fabric defects on a clean, defect-free fabric image. First, the position where a potential defect should be generated is artificially marked in white; the position can be selected freely. This realizes a pseudo-attention mechanism, directing the network's attention to the defective position so that defective features are extracted there (a minimal sketch of this marking step follows below). The generator network was then constructed on a U-net structure, which captures detailed information at the high levels of the network and low-frequency information at the bottom levels, while retaining information at every level through skip connections. The generator network structure is shown in Figure 2.
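The marking step can be illustrated with a short sketch (a hypothetical helper; the study only states that the chosen position is painted white, here assumed to be an axis-aligned rectangle):

```python
import numpy as np

def mark_defect_region(clean_image, box):
    """Paint a freely chosen region white so the generator focuses
    on it (the pseudo-attention input of Stage 1).

    clean_image: H x W x 3 uint8 array of a defect-free fabric.
    box: (x0, y0, x1, y1) rectangle; this layout is an assumption.
    """
    marked = clean_image.copy()
    x0, y0, x1, y1 = box
    marked[y0:y1, x0:x1] = 255  # white marker
    return marked
```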

Figure 2: The generator network structure.

An image of size 512 × 512 × 3 is first input into the network, and the output is produced after a series of downsampling, upsampling, and skip connections. Throughout the process, the convolution kernel size is 4, the stride is 2, and “same” padding is used. Batch normalization is applied throughout to accelerate convergence, except for the first convolution. The features obtained from each convolution are then combined with the corresponding layers as deconvolution proceeds. Dropout with a rate of 0.5 is used in the first four deconvolution layers to prevent over-fitting. ReLU is the activation function for all layers except the last, which adopts tanh.
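The description above pins down the kernel size (4), stride (2), “same” padding, the missing batch normalization on the first convolution, dropout of 0.5 in the first four deconvolutions, and the tanh output. A minimal Keras sketch consistent with those choices follows; the number of blocks and the filter widths are not stated in the text and are assumptions here:

```python
import tensorflow as tf
from tensorflow.keras import layers

def down(filters, apply_bn=True):
    # 4x4 conv, stride 2, "same" padding; BN everywhere but the first block.
    block = tf.keras.Sequential()
    block.add(layers.Conv2D(filters, 4, strides=2, padding="same",
                            use_bias=not apply_bn))
    if apply_bn:
        block.add(layers.BatchNormalization())
    block.add(layers.ReLU())
    return block

def up(filters, apply_dropout=False):
    # 4x4 transposed conv, stride 2; dropout 0.5 in the first four blocks.
    block = tf.keras.Sequential()
    block.add(layers.Conv2DTranspose(filters, 4, strides=2, padding="same",
                                     use_bias=False))
    block.add(layers.BatchNormalization())
    if apply_dropout:
        block.add(layers.Dropout(0.5))
    block.add(layers.ReLU())
    return block

def build_generator():
    inputs = layers.Input(shape=(512, 512, 3))
    downs = [down(64, apply_bn=False), down(128), down(256)] \
            + [down(512) for _ in range(5)]
    ups = [up(512, apply_dropout=True) for _ in range(4)] \
          + [up(256), up(128), up(64)]
    x, skips = inputs, []
    for d in downs:                      # encoder
        x = d(x)
        skips.append(x)
    for u, skip in zip(ups, reversed(skips[:-1])):   # decoder + skips
        x = u(x)
        x = layers.Concatenate()([x, skip])
    outputs = layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                                     activation="tanh")(x)
    return tf.keras.Model(inputs, outputs)
```

Cascading two such generators, with the Stage 1 output fed into Stage 2, gives the overall DPGAN structure.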

3.2 Stage 2

The second stage of DPGAN trains a fine-tuning model that optimizes the fabric defects generated in the first stage. The generator network in Stage 2 is designed similarly to that in Stage 1, but its input is the pre-generated fabric defect image output by Stage 1, and its output is the fine-tuned fabric defect image with more natural defects. In this stage, to make G(x) more similar to y, λ was raised to 500.

Complete inheritance of the texture is provided by tuning the appropriate L1 parameters. The loss for the whole model is as follows:

(5) $L_{\mathrm{PGAN}}(G,D) = \mathbb{E}_{x,y}[\log D(x,y)] + \mathbb{E}_{x}[\log(1 - D(x, G(x)))]$

(6) $L_{L1}(G) = \mathbb{E}_{x,y}[\lVert y - G(x) \rVert_1]$

(7) $G^{*} = \arg\min_G \max_D L_{\mathrm{PGAN}}(G,D) + \lambda L_{L1}(G)$

where x is the fabric image with the potential defect position marked in white, and y is the real defective fabric image. This loss function trains G and D so that G generates better defects while D better distinguishes real from fake images. As adversarial training proceeds, the images generated from x get closer to the real y. The x and y in Eq. 6 are consistent with those in $L_{\mathrm{PGAN}}$; the L1 loss constrains the difference between the generated image and y. The overall loss G* in Eq. 7 controls the similarity between the generated image and y by adjusting λ. In this study, the λ value was set to 200.
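Putting the pieces together, a hedged sketch of one Stage 2 training step is shown below. It reuses the loss helpers sketched in Section 2.1; `stage2_generator`, `stage2_discriminator`, and the Adam settings are assumptions for illustration, not reported hyperparameters.

```python
import tensorflow as tf

# Assumed optimizers (learning rates are not specified in the study).
optimizer_g = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
optimizer_d = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)

@tf.function
def train_step_stage2(coarse, target, lam=500.0):
    # coarse: Stage 1 output; target: real defective image y.
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        refined = stage2_generator(coarse, training=True)
        d_real = stage2_discriminator([coarse, target], training=True)
        d_fake = stage2_discriminator([coarse, refined], training=True)
        g_loss = pix2pix_generator_loss(d_fake, refined, target, lam)
        d_loss = pix2pix_discriminator_loss(d_real, d_fake)
    optimizer_g.apply_gradients(zip(
        g_tape.gradient(g_loss, stage2_generator.trainable_variables),
        stage2_generator.trainable_variables))
    optimizer_d.apply_gradients(zip(
        d_tape.gradient(d_loss, stage2_discriminator.trainable_variables),
        stage2_discriminator.trainable_variables))
    return g_loss, d_loss
```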

4 Results and discussion

4.1 Original datasets

The datasets were acquired from two different resources: a color-woven fabric dataset from the literature (27) (Texture_1) and the dataset of the “Xuelang Manufacturing AI Challenge” (Texture_2).

The collected fabric images were all scaled to 600 × 600. The original Texture_1 dataset consists of 856 fabric images with belt_yarn defects, 372 with hole defects, and 308 with stain defects. After a first round of data enhancement using flips and color transformations, the training set for DPGAN includes 856 fabric images with belt_yarn defects, 744 with hole defects, and 616 with stain defects. Examples of the collected defective fabric images are shown in Figure 3.

Figure 3: Examples of the collected defective fabric images: (a) defective fabric images with Texture_1; and (b) defective fabric images with Texture_2.

4.2 The evaluation of the generated image

In this study, the original training dataset includes two types of textures, Texture_1 and Texture_2, and the number of fabric images with Texture_1 is much larger than that with Texture_2. To address this imbalance in the fabric defect detection dataset, a fabric defect generation model based on DPGAN was trained on Texture_1. With the trained DPGAN, different forms of defects can be generated on clean fabric images, thereby enhancing the dataset of defective fabric images with Texture_2. We mainly focus on three common defect types: belt yarn, stain, and hole.

The fabric defect generation model was trained on a workstation with a Titan RTX GPU and 24 GB of memory. The model was coded with the TensorFlow (2.2.1) framework in a Python 3.6 environment, using Jupyter Notebook as the platform. A training set of 2,480 paired image samples was used to train the generation model, including 1,100 paired images with belt_yarn defects, 744 with hole defects, and 636 with stain defects. In addition, 936 paired images were selected as the test set to verify the effectiveness of the model, consisting of 312 paired images each for the belt_yarn, hole, and stain defects. The generated fabric defect images are presented in Figure 4, which shows that the generated defects fuse naturally into the fabric background after Stage 2 of DPGAN.

Figure 4: Samples of generated fabric defect images using DPGAN: (a) the generated fabric defect images after Stage 1; and (b) the generated fabric defect images after Stage 2.

The first experiment assessed the quality of the generated fabric defect images, using DCGAN, WGAN, and pix2pixGAN to generate fabric defects for comparison. Two indicators, the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), were used to evaluate the quality of the generated images. The PSNR is an image quality index widely used to quantify the pixel-level difference between two images. However, the PSNR can only assess pixel differences, ignoring human visual perception such as contrast, brightness, and regional perception. Therefore, the SSIM, which assesses the structural similarity between two images, was adopted in this work alongside the PSNR. Since defect forms may vary, the SSIM can more scientifically demonstrate the differences between the generated image and the ground truth. For both indicators, a higher value indicates a better-generated image.

The two indicators, PSNR and SSIM, can be calculated as follows:

(8) $\mathrm{MSE} = \dfrac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\big(X(i,j) - Y(i,j)\big)^{2}, \qquad \mathrm{PSNR} = 10 \log_{10}\dfrac{(2^{n}-1)^{2}}{\mathrm{MSE}}$

(9) $l(x,y) = \dfrac{2\mu_x \mu_y + C_1}{\mu_x^{2} + \mu_y^{2} + C_1}, \quad c(x,y) = \dfrac{2\sigma_x \sigma_y + C_2}{\sigma_x^{2} + \sigma_y^{2} + C_2}, \quad s(x,y) = \dfrac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}, \quad \mathrm{SSIM}(X,Y) = l(x,y)\,c(x,y)\,s(x,y)$

where MSE represents the mean square error; X(i, j) and Y(i, j) are the gray values at coordinate (i, j) of images X and Y, respectively; W and H refer to the width and height of the image; n is 8 in this study; l(x, y) measures image brightness, c(x, y) image contrast, and s(x, y) structural similarity; σ_x and σ_y are the standard deviations, µ_x and µ_y the mean values, and σ_xy the covariance of the images; C_1, C_2, and C_3 are constants that avoid a zero denominator.
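In practice, both indicators can be computed with TensorFlow's built-in image metrics, which follow the same definitions as Eqs. 8 and 9 (a minimal sketch; the per-pair evaluation protocol is an assumption):

```python
import tensorflow as tf

def evaluate_pair(generated, ground_truth):
    # Images as H x W x C arrays in [0, 255]; max_val = 2^8 - 1
    # matches n = 8 in Eq. 8.
    g = tf.convert_to_tensor(generated, tf.float32)[None, ...]
    t = tf.convert_to_tensor(ground_truth, tf.float32)[None, ...]
    psnr = tf.image.psnr(g, t, max_val=255.0)
    ssim = tf.image.ssim(g, t, max_val=255.0)
    return float(psnr[0]), float(ssim[0])
```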

The evaluation results of the generated fabric defect images are listed in Table 1, which shows that the proposed DPGAN achieves the best performance.

Table 1

The evaluation results of the generated fabric defect images

Generation model     PSNR      SSIM
DCGAN                 8.980    0.00003
WGAN                  8.170    0.00290
Pix2pixGAN           12.530    0.01000
DPGAN (proposed)     25.826    0.90960

4.3 The evaluation of the detection results

To verify to what extent the proposed fabric defect generation method can improve the accuracy of a detection model, comparative experiments were also conducted to evaluate detection accuracy before and after data enhancement using the same detection model.

Three detection models were selected in this part: Faster R-CNN, YOLO V3, and M2det. In comparative experiment 1, the original training set containing only Texture_1 was used to train the three detection models; in comparative experiment 2, the enhanced training set containing both Texture_1 and Texture_2 was used. The experiments were conducted on a Titan RTX GPU (24 GB) and the TensorFlow (2.1.0) platform with Python 3.6. The flowchart of the fabric detection experiments is illustrated in Figure 5.

Figure 5: The flowchart of the fabric detection experiments.

The detection models used 1,800 fabric images with belt_yarn, hole, and stain defects as the original training set. The 312 defect-free fabric images with Texture_2 were then input into the trained DPGAN to generate new defective images. After data enhancement with DPGAN, a training set containing 3,632 fabric images was established. The training sets before and after data enhancement are termed training set 1 and training set 2, respectively. The number of image samples in the datasets is given in Table 2.

Table 2

The number of image samples in the datasets

Defect type   Training set 1 (before data enhancement)   Training set 2 (after data enhancement)   Test set
              Texture_1   Texture_2                      Texture_1   Texture_2                     Texture_1   Texture_2
Belt_yarn     600         0                              600         612                           85          54
Hole          600         0                              600         624                           82          98
Stain         600         0                              600         596                           81          27
Total         1,800       0                              1,800       1,832                         248         179

In this section, the average precision (AP) and mean average precision (mAP) were adopted as the evaluation indicators for the three detection models: Faster R-CNN, YOLO V3, and M2det. In general, precision and recall trade off against each other: a higher recall often comes with lower precision. Thus, the AP value, the area under the curve drawn with recall on the x axis and precision on the y axis, was used as the comprehensive evaluation indicator. The mAP value is the average of the AP values over the defect types. Precision and recall are calculated as follows:

(10) $\mathrm{Precision} = \dfrac{TP}{TP + FP}$

(11) $\mathrm{Recall} = \dfrac{TP}{TP + FN}$

where TP counts samples the classifier considers positive that are indeed positive, FP counts samples the classifier considers positive that are actually negative, and FN counts samples the classifier considers negative that are actually positive.
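A minimal numpy sketch of Eqs. 10 and 11 and of AP as the area under the precision-recall curve (detection toolkits usually add interpolation steps; this plain rectangle rule and the anchoring at recall 0 are illustrative assumptions):

```python
import numpy as np

def precision_recall(tp, fp, fn):
    # Eq. 10 and Eq. 11 from raw counts.
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(precisions, recalls):
    # Area enclosed by the precision-recall curve,
    # integrated over recall with a rectangle rule.
    order = np.argsort(recalls)
    r = np.concatenate([[0.0], np.asarray(recalls, float)[order]])
    p = np.concatenate([[1.0], np.asarray(precisions, float)[order]])
    return float(np.sum(np.diff(r) * p[1:]))
```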

Based on the datasets in Table 2, the three detection models, Faster R-CNN, YOLO V3, and M2det, were adopted to detect the fabric defects. Furthermore, we also applied the proposed image generation model to the fabric defect detection model termed Faster GG R-CNN, reported in our previous study (9). The input image size was 512 × 512 for Faster R-CNN and Faster GG R-CNN, 416 × 416 for YOLO V3, and 320 × 320 for M2det. The training epochs were set to 100, the learning rate to 0.0001, and the confidence threshold to 0.5. The fabric defect detection results of the four models are shown in Table 3.

Table 3

The fabric defect detection results of the four detection models

Detection model    Training set 1 (AP%, before data enhancement)    Training set 2 (AP%, after data enhancement)
                   Belt_yarn   Hole   Stain   mAP                   Belt_yarn   Hole   Stain   mAP
Faster R-CNN       73          75     62      70                    84          79     65      76
YOLO V3            49          60     49      53                    72          66     50      63
M2det              66          73     64      68                    79          81     63      74
Faster GG R-CNN    88          91     77      85                    92          94     81      89

From Table 3, we can see that, compared with training set 1, the mAP values of the four detection models on training set 2 improved by 6%, 10%, 6%, and 4%, respectively. YOLO V3 shows the largest improvement (10%), mainly owing to the large gain on belt_yarn defects. In contrast, the improvement for stain defects is not significant, possibly because stain characteristics are more difficult for the neural networks to learn, both for DPGAN and for the detection models. Overall, the results show that our method can remarkably improve the accuracy of the detection models.

5 Conclusion

This study proposed a fabric defect generation method based on an improved pix2pixGAN network termed DPGAN. The U-net structure of the pix2pixGAN network was first deepened to obtain more global information; then, two deepened U-net structures were cascaded to construct the overall structure of DPGAN, which helps create more detailed information. Two datasets from different resources were used to verify the effectiveness of the proposed fabric defect generation method. One dataset, Texture_1, was used to train the fabric defect generation model; the well-trained model was then used to create new defects on clean, defect-free fabric images, thus enhancing the training dataset of Texture_2. Three detection models, Faster R-CNN, YOLO V3, and M2det, were adopted to evaluate detection accuracy before and after data enhancement. The results indicate that the performance of the detection models all noticeably improved after data enhancement using the proposed fabric defect generation method.

  1. Funding information: This work was supported by the National Natural Science Foundation of China (Grant No. 51903199), Innovation Capability Support Program of Shaanxi (Program No. 2022KJXX-40), Natural Science Basic Research Program of Shaanxi (No. 2019JQ-182, 2018JQ5214, and 2021JQ-691), Outstanding Young Talents Support Plan of Shaanxi Universities (2020), Scientific Research Program Funded by Shaanxi Provincial Education Department (Program No. 18JS039), Science and Technology Guiding Project of China National Textile and Apparel Council (No. 2020044 and No. 2020046), the Graduate Scientific Innovation Fund for Xi’an Polytechnic University (chx2021004), the Open Project Program of Key Laboratory of Eco-textiles, Ministry of Education, Jiangnan University (KLET2010), and the Work and Research Project of University Laboratory of Zhejiang Province (YB202110).

  2. Author contributions: Yuming Zhang: investigation and writing – original draft; Zhongyuan Gao: software and writing – original draft; Chao Zhi: writing – review and editing, methodology, and funding acquisition; Mengqi Chen: formal analysis; Youyong Zhou: validation; Shuai Wang: data curation; Sida Fu: project administration; Lingjie Yu: conceptualization, resources, supervision, writing – review and editing, and funding acquisition.

  3. Conflict of interest: The authors state no conflict of interest.

  4. Data availability statement: Some data, models, or code generated during the study are available from the corresponding author by request.

References

(1) Srinivasan K, Dastoor PH, Radhakrishnaiah P, Jayaraman S. FDAS: A knowledge-based framework for analysis of defects in woven textile structures. J Text I. 1992;83:431–48. 10.1080/00405009208631217.

(2) Song LW, Li RZ, Chen SQ. Fabric defect detection based on membership degree of regions. IEEE Access. 2020;8:48752–60. 10.1109/ACCESS.2020.2978900.

(3) Jing JF, Yang PP, Li PF, Kang XJ. Supervised defect detection on textile fabrics via optimal Gabor filter. J Ind Text. 2014;44(1):40–57. 10.1177/1528083713490002.

(4) Wang WZ, Deng N, Xin BJ. Sequential detection of image defects for patterned fabric. IEEE Access. 2020;8:174751–62. 10.1109/ACCESS.2020.3024695.

(5) Li YY, Zhang D, Lee DJ. Automatic fabric defect detection with a wide-and-compact network. Neurocomputing. 2019;329:329–38. 10.1016/j.neucom.2018.10.070.

(6) Zhao YD, Hao KR, He HB, Tang XS, Wei B. A visual long-short-term memory based integrated CNN model for fabric defect image classification. Neurocomputing. 2020;380:259–70. 10.1016/j.neucom.2019.10.067.

(7) Ouyang WB, Xu BG, Hou J, Yuan XH. Fabric defect detection using activation layer embedded convolutional neural network. IEEE Access. 2019;7:70130–40. 10.1109/ACCESS.2019.2913620.

(8) Huang YQ, Jing JF, Wang Z. Fabric defect segmentation method based on deep learning. IEEE T Instrum Meas. 2021;70:5005715. 10.1109/TIM.2020.3047190.

(9) Chen MQ, Yu LJ, Zhi C, Sun RJ, Zhu SW, Gao ZY, et al. Improved faster R-CNN for fabric defect detection based on Gabor filter with genetic algorithm optimization. Comput Ind. 2022;134:103551. 10.1016/j.compind.2021.103551.

(10) Ede JM. Deep learning in electron microscopy. Mach Learn Sci Technol. 2021;2(1):011004. 10.1088/2632-2153/abd614.

(11) Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. J Big Data. 2019;6(1):60. 10.1186/s40537-019-0197-0.

(12) Tong K, Wu Y, Zhou F. Recent advances in small object detection based on deep learning: A review. Image Vis Comput. 2020;97:103910. 10.1016/j.imavis.2020.103910.

(13) Moshkov N, Mathe B, Kertesz-Farkas A, Hollandi R, Horvath P. Test-time augmentation for deep learning-based cell segmentation on microscopy images. Sci Rep. 2021;11(1):3327. 10.1038/s41598-021-81801-8.

(14) Wan SH, Goudos S. Faster R-CNN for multi-class fruit detection using a robotic vision system. Comput Netw. 2019;168:107036. 10.1016/j.comnet.2019.107036.

(15) Ogawa R, Kido T, Mochizuki T. Effect of augmented datasets on deep convolutional neural networks applied to chest radiographs. Clin Radiol. 2019;74(9):697–701. 10.1016/j.crad.2019.04.025.

(16) Teramoto A, Tsukamoto T, Kiriyama Y, Fujita H. Automated classification of lung cancer types from cytological images using deep convolutional neural networks. Biomed Res Int. 2017;4067832. 10.1155/2017/4067832.

(17) Fu Y, Li XT, Ye YM. A multi-task learning model with adversarial data augmentation for classification of fine-grained images. Neurocomputing. 2020;377:122–9. 10.1016/j.neucom.2019.10.002.

(18) Wu QF, Chen YP, Meng J. DCGAN-based data augmentation for tomato leaf disease identification. IEEE Access. 2020;8:98716–28. 10.1109/ACCESS.2020.2997001.

(19) Dai XR, Yuan X, Wei XY. Data augmentation for thermal infrared object detection with cascade pyramid generative adversarial network. Appl Intell. 2022;52(1):967–81. 10.1007/s10489-021-02445-9.

(20) Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. Proceedings of the Conference and Workshop on Neural Information Processing Systems. Montreal, Canada: 2014. p. 2672–80.

(21) Ma JY, Yu W, Liang PW, Li C, Jiang JJ. FusionGAN: A generative adversarial network for infrared and visible image fusion. Inf Fusion. 2019;48:11–26. 10.1016/j.inffus.2018.09.004.

(22) Wang Y, Yu B, Wang L, Zu C, Lalush DS, Lin W, et al. 3D conditional generative adversarial networks for high-quality PET image estimation at low dose. Neuroimage. 2018;174:550–62. 10.1016/j.neuroimage.2018.03.045.

(23) Rivenson Y, Wang HD, Wei ZS, de Haan K, Zhang Y, Wu Y, et al. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nat Biomed Eng. 2019;3(6):466–77. 10.1038/s41551-019-0362-y.

(24) Hu GH, Huang JF, Wang QH, Li JR, Xu ZJ, Huang XB. Unsupervised fabric defect detection based on a deep convolutional generative adversarial network. Text Res J. 2019;90(3–4):247–70. 10.1177/0040517519862880.

(25) Liu JH, Wang CY, Su H, Du B, Tao DC. Multistage GAN for fabric defect detection. IEEE T Image Process. 2019;29:3388–400. 10.1109/TIP.2019.2959741.

(26) Li BW, Qi XJ, Lukasiewicz T, Torr PH. ManiGAN: Text-guided image manipulation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington, USA: 2020 June 16–18. p. 7877–86. 10.1109/CVPR42600.2020.00790.

(27) Jing JF, Dong AM, Li PF, Zhang KB. Yarn-dyed fabric defect classification based on convolutional neural network. Opt Eng. 2017;56(9):093104. 10.1117/1.OE.56.9.093104.

Received: 2022-06-08
Revised: 2022-07-28
Accepted: 2022-08-15
Published Online: 2022-09-15

© 2022 Yuming Zhang et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
