
Multimedia Tools and Applications (2024) 83:65753–65772

https://doi.org/10.1007/s11042-023-18087-7

Automatic fabric defect detection in textile images using a LabVIEW based multiclass classification approach

T. Meeradevi¹ · S. Sasikala¹

Received: 24 January 2023 / Revised: 30 November 2023 / Accepted: 29 December 2023 / Published online: 19 January 2024
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024

Abstract
Nowadays the detection of fabric defects is an active research topic, aimed at resolving the difficulties faced in processing fabric for printing and knitting in the textile industry. The traditional approach of visual screening of fabric by human inspectors is exceedingly time consuming and unreliable, as it is highly susceptible to human error. Defect inspection involves two major tasks: identifying defects in the fabric and classifying them. Automatic identification of defects is therefore quite important in the current scenario. To enhance the quality of the fabric, this paper proposes a Texture Defect Detection (TDD) algorithm. The TDD algorithm uses pre-processing to extract the luminance plane and discrete wavelet frame decomposition to divide the image into several subbands with the same resolution as the input image. Statistical features are extracted using the Gray Level Co-occurrence Matrix and applied to a Support Vector Machine for classifying the defective images. This improves the quality of texture segmentation and the classification of visible defects. The experimental setup consists of a fabric conveyor and three high resolution acA4600-7gc industrial cameras covering the entire width of the running fabric. The TDD algorithm is developed on the LabVIEW platform. The multi-class Textile Texture Database (TILDA) is used for testing the proposed algorithm on 4 different classes of fabric defects, comprising 2800 defective and 400 non-defective fabric images. The detection success rate for fabric defects is 96.56% with the images from the database. Validation with real time fabric images shows 97% accuracy in detecting defects.

Keywords Fabric defects · Texture defect detection · TILDA · Classifiers · LabVIEW

1 Introduction

The textile industry is a major source of income and expenditure for many countries because it is the basis for many consumables such as clothing, bags, furniture, and covers. Quality management in the textile industry is of vital importance because it can

* T. Meeradevi
[email protected]
¹ Electronics and Communication Engineering, Kongu Engineering College, Perundurai, India


avoid significant financial losses. Traditional quality control processes depend mainly on human intervention, but manual defect labeling systems detect only 45–65 percent of fabric defects. Therefore, automated defect detection systems are vital to reduce costs and to speed up the quality control process [1].
Detection of fabric defects is essential in textile manufacturing industries. The traditional approach to quality control uses manual examination. This has to be done at the front of the machine, monitoring the fabric continually and resolving issues as they are detected. Even when the quality control inspector is properly qualified and experienced, the detection is likely to be limited in accuracy, consistency and efficiency in spotting problems. The inspector can become tired or bored, so uncertain and partial outcomes can be expected. Automated inspection resolves the challenge of detecting minor faults that locally break the homogeneity of texture patterns and classifies various kinds of defects. Different strategies for fabric defect inspection have been proposed, based on statistical, spectral, model learning, structural and hybrid approaches. Both frequency and spatial information are needed for fabric defect detection: frequency information is required for fabric image recognition, and spatial information for identifying the position of a defect [1].
Quality assurance (QA) in manufacturing is vital to ensure products meet standards, yet
manual QA processes are expensive and slow. Artificial Intelligence (AI) offers an appeal-
ing solution for automation and expert assistance. Convolutional Neural Networks (CNNs)
are increasingly popular for visual inspection tasks. These networks excel in analyzing vis-
ual data, making them valuable for QA in manufacturing.
Explainable Artificial Intelligence (XAI) systems complement AI methods by providing
transparency and interpretability. They offer insights into how AI makes decisions, crucial
for quality inspections in manufacturing. XAI systems aid in understanding AI’s decision-
making processes, ensuring confidence in automated inspections while maintaining trans-
parency and interpretability, vital for QA in manufacturing.
Santhosh et al. (2020) [2] have described how a CNN can be used to identify fabric defects. The repeated texture in the fabric is estimated from the autocorrelation value of a fabric image [2].
Meenakshi Garg et al. (2020) [3], H. Li et al. (2020) [4] have proposed the CNN based
fabric defect detection. Deep learning approach is proposed for the detection of fabric
defects using CNN. Here minimization of the mean square error was used because of its
best performance.
Subrata Das et al. (2020) have developed an artificial feed forward neural network based
defect detection. The system consists of two parts: image feature vectors and performance
assessment of the attributes using a neural classifier to find faults [5].
Chetan Chaudhari et al. (2020) [6] have proposed the use of wavelet transform for defect
detection. The wavelet transform is applied on the input image to extract the approximate
sub-image of appropriate level. The energy of the sub images is calculated using Parseval’s
theorem. The energy that deviates from the threshold value contains defects in the corre-
sponding input image.
Chang et al. (2018) have presented a template-based correction technique for the identification of faults. A fabric image is split into arrays by regularity, and the impact of mismatch between arrays is minimized. Non-defect image arrays are selected as a consistent reference for an average template [7].
Gharsallah et al. (2020) [8], have developed a fabric defect detection method based on
the filtering technique. A filter combined with image features is proposed and the threshold
value is used to isolate the defects.


A new method based on deep fusion and non-convex called regulated Robust Princi-
pal Component Analysis (RPCA) is proposed by Yan Dong et al. (2020) to detect fabric
defects. The study extracts deep multilevel functions to distinguish between complicated
and diversified textile defects and the RPCA separates the defects from the backdrop. To
improve the outcome of detection, a new RPCA based fusion approach is implemented [9].
In the same way, a fault detection method has been developed by Zhang et al. (2020)
by using the saliency analysis of the Local Steering Kernel (LSK). The study converts
an RGB image of the fabric into the colour space of Commission International Eclairage
(CIE) L*a*b and then calculates LSK for the singular decomposition value in each colour
channel. The cosine matrix correspondence of the desired defective maps is used for meas-
uring similarity among different LSK features. Finally, a multi-scale average fusion system
is applied to integrate the defective maps in the final defective map in different scales [10].
Khowaja et al. (2019) have proposed a fabric defect detection using histogram tech-
niques. In this, the defective fabric image is converted into a grayscale image and a well-
defined threshold function is utilized to find the fault from histogram [11].
The fabric defect detection problem is resolved under complex lighting conditions by
Huang Wang et al. (2020). Recurrent Attention Model (RAM) is used to isolate defects,
which is insensible to light and noise differences [12].
Liu et al. (2021) presented an upgraded YOLOv4 algorithm for fabric defect detection, integrating a novel SoftPool-based SPP structure that enhances accuracy by 6% while incurring only a 2% decrease in FPS. Contrast-limited adaptive histogram equalization is employed for image enhancement, fortifying the model against interference and improving defect localization precision and speed [24]. Jia et al. (2022) introduced an advanced fabric defect detection system employing transfer learning and an enhanced Faster R-CNN. By utilizing pre-trained ImageNet weights, integrating ResNet50 and ROI Align, and combining the RPN with an FPN, the system significantly improves detection accuracy, convergence, and the identification of small target defects. Cascaded modules and varied IoU thresholds further enhance sample distinction, demonstrating superior performance compared with existing models and offering valuable insights for future fabric defect detection methodologies. The study fine-tunes pre-trained models using adaptive learning techniques for improved accuracy in identifying different types of defects [25]. Alireza Saberironaghi et al. (2023) reviewed deep learning techniques for defect detection in industrial products. Deep learning-based detection of surface defects is discussed from three perspectives: supervised, semi-supervised, and unsupervised. Common challenges, and solutions for defect detection in real time images, are also discussed [26].
In this paper, considering the needs of the textile industries in the Tirupur and Erode districts, the TDD algorithm is proposed. The camera setup may be fixed on top of the fabric conveyor and connected to the proposed system, so that the fabric defect detection process is automated, reducing both manpower and the error rate. The TDD algorithm detects defects in a texture using a texture classifier trained with defect-free texture samples. During inspection, the algorithm identifies defective regions that do not match the trained defect-free texture samples. The identified defects appear in the output image as blobs. The particle analysis tools in the National Instruments Vision library are used to analyze the properties of the detected defects.
The rest of the paper is organized as follows. Section 2 details the proposed model with the TDD algorithm. Section 3 describes the experimental setup of the proposed algorithm. In Section 4 the results are analyzed with the TILDA dataset and the experimental dataset, and Section 5 concludes the paper with future scope.


2 Proposed methodology

In general, the width of the running fabric may be around 1.5 m to 1.7 m, and each camera covers 0.65 m of the fabric width. In our experimental setup three cameras are therefore fixed to cover the entire width of the fabric with 0.1 m overlap. The three real time images are fed to the proposed TDD algorithm, which is shown in Fig. 1.

2.1 TDD algorithm

The step-by-step procedure of the TDD algorithm is as follows.

Step 1: Acquire Image
Fabric images are acquired using a real time camera with a resolution of 4608 × 3288 or taken from the TILDA dataset (768 × 512).
Step 2: Pre-Process the Image
The TDD algorithm processes a gray scale image; if the input image is a colour image, a single plane (red, green or blue) is extracted.
Step 3: Locating the Fault Area
A geometric pattern matching algorithm finds the Region of Interest containing the fault area.
Fig. 1  Texture Defect Detection Algorithm


Step 4: Discrete Wavelet Frame Decomposition

• The defective image is divided into several subbands using wavelet frames [13]
• Each subband image has the same resolution as the input image, which improves the texture classification and segmentation capability
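The same-resolution property of the wavelet-frame decomposition can be sketched with a one-level undecimated Haar transform in NumPy; this is an illustrative stand-in for the decomposition of [13], not the paper's LabVIEW implementation:

```python
import numpy as np

def haar_frame_decompose(image: np.ndarray):
    """One-level undecimated (wavelet-frame) Haar decomposition.

    No downsampling is performed, so each subband (LL, LH, HL, HH)
    keeps the same resolution as the input image -- the property the
    TDD algorithm relies on for texture segmentation.
    """
    low = np.array([1.0, 1.0]) / np.sqrt(2.0)    # Haar low-pass filter
    high = np.array([1.0, -1.0]) / np.sqrt(2.0)  # Haar high-pass filter

    def filt(data, kernel, axis):
        # 2-tap circular filtering along one axis, keeping the array size.
        return kernel[0] * data + kernel[1] * np.roll(data, -1, axis=axis)

    ll = filt(filt(image, low, 0), low, 1)   # approximation subband
    lh = filt(filt(image, low, 0), high, 1)  # horizontal detail
    hl = filt(filt(image, high, 0), low, 1)  # vertical detail
    hh = filt(filt(image, high, 0), high, 1) # diagonal detail
    return ll, lh, hl, hh

subbands = haar_frame_decompose(np.random.rand(16, 16))
assert all(s.shape == (16, 16) for s in subbands)  # same resolution as input
```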

Step 5: Statistical Feature Extraction

• The TDD algorithm uses second order statistics for feature extraction
• The gray-level co-occurrence matrix (GLCM) provides the second-order statistical parameters
• The algorithm divides each subband image into non-overlapping windows and evaluates the coefficient distribution of each window using the GLCM of I(x, y)
• The GLCM records the probability that a pixel value occurs at a displacement vector d = (dx, dy) from another pixel value
• For a texture image I(x, y) of size N × M with G grey levels, the GLCM is the G × G matrix given in Eq. (1):

P_d(i, j) = Σ_{x=1}^{N} Σ_{y=1}^{M} δ{ I(x, y) = i and I(x + dx, y + dy) = j }   (1)

where δ{true} = 1 and δ{false} = 0.


• The entry (i, j) of the GLCM P_d(i, j) indicates the number of times pixel level i occurs at the displacement vector d from pixel level j
• The TDD extracts five Haralick features, namely entropy, dissimilarity, contrast, homogeneity and correlation, from the GLCM calculated at each partition of the subband texture, as given in Eqs. (2) to (6) [25].
Entropy = Σ_{i=1}^{G} Σ_{j=1}^{G} P_{i,j} (−ln P_{i,j})   (2)

Dissimilarity = Σ_{i=1}^{G} Σ_{j=1}^{G} P_{i,j} |i − j|   (3)

Contrast = Σ_{i=1}^{G} Σ_{j=1}^{G} P_{i,j} (i − j)²   (4)

Homogeneity = Σ_{i=1}^{G} Σ_{j=1}^{G} P_{i,j} / (1 + (i − j)²)   (5)

Correlation = Σ_{i=1}^{G} Σ_{j=1}^{G} P_{i,j} [ (i − μ_i)(j − μ_j) / √(σ_i² σ_j²) ]   (6)

where
μ_i = Σ_{i,j=1}^{G} i · P_{i,j} and μ_j = Σ_{i,j=1}^{G} j · P_{i,j} are the mean values of the GLCM, and
σ_i² = Σ_{i,j=1}^{G} P_{i,j} (i − μ_i)² and σ_j² = Σ_{i,j=1}^{G} P_{i,j} (j − μ_j)² are the variances of the GLCM [14].
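Equations (1) to (6) can be sketched directly in NumPy as follows (assuming the grey levels are already quantised to G values; the function names are illustrative, not the paper's LabVIEW implementation):

```python
import numpy as np

def glcm(image: np.ndarray, d=(0, 1), levels=8) -> np.ndarray:
    """Grey-level co-occurrence matrix of Eq. (1) for displacement d = (dy, dx)."""
    dy, dx = d
    P = np.zeros((levels, levels), dtype=np.float64)
    H, W = image.shape
    for y in range(H - abs(dy)):
        for x in range(W - abs(dx)):
            P[image[y, x], image[y + dy, x + dx]] += 1
    return P / P.sum()  # normalise to joint probabilities P_{i,j}

def haralick_features(P: np.ndarray) -> dict:
    """The five Haralick features of Eqs. (2)-(6) from a normalised GLCM."""
    G = P.shape[0]
    i, j = np.meshgrid(np.arange(G), np.arange(G), indexing="ij")
    mu_i, mu_j = (i * P).sum(), (j * P).sum()            # GLCM means
    var_i = ((i - mu_i) ** 2 * P).sum()                  # GLCM variances
    var_j = ((j - mu_j) ** 2 * P).sum()
    nz = P > 0                                           # avoid log(0) in entropy
    return {
        "entropy": -(P[nz] * np.log(P[nz])).sum(),
        "dissimilarity": (P * np.abs(i - j)).sum(),
        "contrast": (P * (i - j) ** 2).sum(),
        "homogeneity": (P / (1.0 + (i - j) ** 2)).sum(),
        "correlation": (P * (i - mu_i) * (j - mu_j)).sum() / np.sqrt(var_i * var_j),
    }

# Feature vector for one (quantised) subband window.
feats = haralick_features(glcm(np.random.randint(0, 8, size=(32, 32)), levels=8))
```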

Step 6: Support Vector Machine Classifier

• The SVM classifier finds a separating hyperplane positioned as far as feasible from the closest data point of either of the two classes
• The classifier considers the spatial distribution information of each sample to determine whether the sample belongs to the known class or not
• The objective of training is to minimize the error function:

min_{W, ξ, ρ}  (1/2) WᵀW − ρ + (1/(vl)) Σ_{i=1}^{l} ξ_i   (7)

subject to WᵀK(X_i) ≥ ρ − ξ_i;  ξ_i ≥ 0, i = 1, …, l;  ρ ≥ 0,

where W is the normal vector of the hyperplane, v (nu) is the parameter that bounds the fraction of errors and of support vectors, and ξ_i are the slack variables [15].
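Equation (7) is the one-class ν-SVM formulation, in which the classifier is trained only on defect-free samples. A minimal sketch using scikit-learn's OneClassSVM as an assumed stand-in for the paper's LabVIEW classifier, with synthetic feature vectors:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Feature vectors for defect-free texture windows (synthetic stand-ins for
# the five Haralick features extracted per subband window).
train_ok = rng.normal(loc=0.0, scale=0.1, size=(200, 5))

# nu plays the role of v in Eq. (7): it bounds the fraction of training
# samples allowed to fall outside the learned defect-free region.
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(train_ok)

# A window far from the trained defect-free distribution is labelled -1
# (defective); windows resembling the training texture are labelled +1.
defect_window = np.full((1, 5), 3.0)
print(clf.predict(defect_window))  # prints [-1]
```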

Step 7: If there is no defect in the texture image, go to Step 10
Step 8: If there is a defect in the texture image, segment the defect in the texture image
Step 9: Indicate the defect area in the texture image and go to Step 10
Step 10: Continue the same process with the next texture image

2.2 Dataset

The proposed algorithm is first tested with the TILDA dataset, a standard dataset for fabric defects. After validation, the algorithm is tested with real time captured data. The TILDA dataset contains four primary classes (c1-c4), aligned according to surface structure. For every class, two representative subgroups (r1 & r2, r2 & r3, or r1 & r3) are included. The details of the dataset's main classes are given in Table 1. Each subgroup contains 50 faultless images (e0) and seven error classes (e1-e7). The images are in Tag Image File Format (TIFF) and have a dimension of 768 × 512 pixels. For each image, an accompanying text file with a brief description of the error (location and size) is provided in the dataset itself. The TILDA dataset contains a total of 3,200 images and 2,800 text files with error descriptions, with a data volume of 1.2 Gigabytes [16]. Figure 2 depicts the directory structure of the dataset. Table 2 describes each sub-error class and the type of fault in the main class. The images in the dataset follow the CREN.tif naming convention described in Table 3.
For example, the image c3r1e0n40.tif in the TILDA dataset represents class 3 (c3), representative subgroup 1 (r1), error class 0 (e0) and the 40th image [16].
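The CREN.tif convention can be decoded with a short parser; a sketch (the function name is illustrative):

```python
import re

def parse_tilda_name(filename: str) -> dict:
    """Decode the CREN.tif naming convention of the TILDA dataset."""
    m = re.fullmatch(r"c(\d)r(\d)e(\d)n(\d+)\.tif", filename)
    if m is None:
        raise ValueError(f"not a TILDA file name: {filename}")
    return {
        "main_class": int(m.group(1)),    # C: main class number
        "subgroup": int(m.group(2)),      # R: representative subgroup
        "error_class": int(m.group(3)),   # E: error class (e0 = defect free)
        "image_number": int(m.group(4)),  # N: image number
    }

print(parse_tilda_name("c3r1e0n40.tif"))
# {'main_class': 3, 'subgroup': 1, 'error_class': 0, 'image_number': 40}
```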
In this paper, a total of 3200 images with error classes e0-e7 from the TILDA dataset, as described in Table 4, are used for training and testing. Sample images from the four main classes of the TILDA dataset are provided in Fig. 3.

Table 1  Description of the main class and the type of fabrics [16]

Type of class | Type of fabric | Example
Class 1 (c1) | Internal structure that is either very fine or not visible | Silk or viscose that has not been printed
Class 2 (c2) | Fabrics with a stochastic structure of low variance; the surface does not contain any imprint | Wool or jute
Class 3 (c3) | Fabrics with a periodic pattern | Curtains or materials that have a diamond pattern printed on them
Class 4 (c4) | Fabrics printed with no discernible patterns | Flowers of various sizes imprinted on viscose

Fig. 2  Directory structure of the TILDA dataset

Table 2  Description of the error class and the type of fault, Zhang et al. (2015)

Error Class | Type of fault
e0 | No defects in the fabric
e1 | Mechanical damage causing holes and cuts in the material
e2 | Oil corners and color defects
e3 | Thread error: compaction of threads (without mechanically caused cracks), absence of individual threads in the fabric
e4 | Foreign bodies on the fabric (called flight)
e5 | Wrinkles in the fabric (without mechanical damage)
e6 | Changed lighting conditions
e7 | Distortion from the tilting of the camera and changes in the distance from the camera to the test material

Table 3  Conventional symbols

Conventional symbol | Explanation
C | Indicates the main class number
R | Indicates the representative subgroup of the class
E | Indicates the error class number
N | Indicates the image number


Table 4  Number of images in TILDA data, Zhang et al. (2015)

Main Class | Representative Subgroup | Error Class | Number of images
c1 | r1 | e0-e7 | 8 × 50 = 400
c1 | r3 | e0-e7 | 8 × 50 = 400
c2 | r2 | e0-e7 | 8 × 50 = 400
c2 | r3 | e0-e7 | 8 × 50 = 400
c3 | r1 | e0-e7 | 8 × 50 = 400
c3 | r3 | e0-e7 | 8 × 50 = 400
c4 | r1 | e0-e7 | 8 × 50 = 400
c4 | r3 | e0-e7 | 8 × 50 = 400
Total | | | 3200

(a) c1r1e5n22.tif

(b) c2r2e4n27.tif

(c) c3r1e4n4.tif

(d) c4r1e5n29.tif

Fig. 3  Sample images from classes c1, c2, c3 and c4 with the corresponding image names in the TILDA dataset: (a) image with a single fold across the fabric, (b) a medium-sized piece of paper on top of the fabric, (c) image with a single thread across the fabric and (d) image with a double fold across the fabric


3 Experimental setup

The experimental setup for the proposed detection of fabric defects and shade variation using the TDD algorithm is shown in Fig. 4. Experiments are conducted using LabVIEW software on an Industrial Controller 3173-1P20 with an Intel Core i7-5650U @ 2.2 GHz and a Xilinx Kintex-7 XC7K160T FPGA. Three Basler acA4600-7gc cameras are connected to capture images with a resolution of 4608 × 3288. They are connected to the controller using Ethernet cables and powered by Basler power cables.

3.1 Working of the experimental setup

The working table consists of a motor with a roller which moves the fabric continuously at constant speed across the working table. Images of the fabric are captured by the three Basler industrial cameras at 7 frames per second. The three cameras cover the whole breadth (1.5 m) of the fabric. The captured images are fed continuously to the industrial controller, which processes them using the TDD algorithm. The detected fabric defects are continuously displayed on the monitor.

Fig. 4  Experimental setup for the detection of faults in fabrics


4 Results and discussion

4.1 Results for TILDA dataset

The confusion matrix is calculated by applying the TDD algorithm to the TILDA dataset. The performance parameters used are:

Sensitivity The accurate detection of defective samples, also known as recall, represented in Eq. (8):

Sensitivity (SE) = TP / (TP + FN)   (8)

Specificity The proper detection of non-defective samples, as shown in Eq. (9):

Specificity (SP) = TN / (TN + FP)   (9)

Detection Success Rate (DSR) Demonstrates how consistently the model predicts correct outputs. It is also referred to as Detection Accuracy (DA) and can be calculated using Eq. (10):

Detection Accuracy (DA) = DSR = (Total no. of valid predictions by the classifier) / (Total no. of predictions by the classifier)

DSR = (TN + TP) / (TN + FP + FN + TP)   (10)

False Alarm Rate (FAR) The likelihood that a false alarm is raised when the true value is negative. It can be calculated using Eq. (11):

False Alarm Rate (FAR) = (No. of defect-free samples detected to be defective) / (No. of defect-free image samples)

FAR = FP / (FP + TN)   (11)

Detection Rate (DR) A result of the model predicting the positive class. Equation (12) is used to compute DR:

Detection Rate (DR) = (No. of defective image samples detected correctly) / (No. of defective image samples)

DR = TN / (TP + TN)   (12)

Positive Predictive Value (PPV) or Precision Gives the number of correct outputs out of all predicted positive values and indicates whether the algorithm is reliable. Precision is calculated using Eq. (13) [17]:

Positive Predictive Value (PPV) or Precision = TP / (TP + FP)   (13)
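As an illustrative sketch, the parameters of Eqs. (8) to (13) can be computed directly from the four confusion-matrix counts (function and key names are my own):

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Performance parameters of Eqs. (8)-(13) from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),            # Eq. (8), recall
        "specificity": tn / (tn + fp),            # Eq. (9)
        "dsr": (tp + tn) / (tp + tn + fp + fn),   # Eq. (10), detection accuracy
        "far": fp / (fp + tn),                    # Eq. (11), false alarms among defect-free samples
        "dr": tn / (tp + tn),                     # Eq. (12) as printed in the paper
        "ppv": tp / (tp + fp),                    # Eq. (13), precision
    }

# Example: 90 true positives, 8 true negatives, 1 false positive, 1 false negative.
m = metrics(tp=90, tn=8, fp=1, fn=1)
print(round(m["dsr"], 4))  # prints 0.98
```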
In total, 50 images from the error-free class e0 of the c1/r1 subgroup and 350 images from the error classes e1-e7 of the same subgroup are tested and the confusion matrix is obtained. The same process is repeated for the remaining classes c2-c4, and the confusion matrices are given in Table 5.
The validation results of c1 are shown in Fig. 5 and those of c3 in Fig. 6; they show 18/400 and 9/400 incorrect results respectively. The performance parameters are evaluated using Eqs. (8) to (13) and are shown in Table 6 for the TILDA dataset.
Furthermore, Table 6 shows the highest accuracy scores of 95.50%, 96.25%, 97.75% and 98% for the c1, c2, c3 and c4 texture classes respectively on the TILDA dataset using the proposed algorithm with the SVM classifier. An average accuracy of 96.56% is achieved by the algorithm. Defect detected images with the source images of the c1 and c2 classes are shown in Fig. 7, with the detected defect outlined in red.
The TDD algorithm works in the LabVIEW software environment, a new platform for fabric defect detection. A comparison of existing defect detection techniques with the proposed model is given in Table 7. Most existing studies have employed self-made datasets, but TILDA, a public and extensive database, has been the most regularly used. Table 7 compares the accuracy rates of the proposed model with the TILDA-based investigations. Based on the number of classes and images, the results show that the proposed algorithm is better than the existing ones. In addition, the proposed approach clearly appears to be the most comprehensive study, using all classes and images in the TILDA dataset. Among previous studies, Zhang et al. (2015), Salem and Nasri (2011), Salem and Abdelkrim (2020), and Deotale and Sarode

Table 5  Confusion matrix for TILDA dataset

Main Class | Representative Subgroup | Error Class Images | Confusion Matrix for 400 images
c1 | r1 | e0 = 50, e1-e7 = 350 | [333 1; 17 49]
c1 | r3 | e0 = 50, e1-e7 = 350 | [343 12; 7 38]
c2 | r2 | e0 = 50, e1-e7 = 350 | [344 10; 6 40]
c2 | r3 | e0 = 50, e1-e7 = 350 | [342 7; 8 43]
c3 | r1 | e0 = 50, e1-e7 = 350 | [349 8; 1 42]
c3 | r3 | e0 = 50, e1-e7 = 350 | [344 9; 6 41]
c4 | r1 | e0 = 50, e1-e7 = 350 | [342 2; 8 48]
c4 | r3 | e0 = 50, e1-e7 = 350 | [345 3; 5 47]

Fig. 5  Validation results of class c1 with subgroup r1

Fig. 6  Validation results of class c3 with subgroup r1

(2019) used traditional feature extraction methods such as GLCM and LBP together with classifiers such as SVM and neural networks for texture defect detection. The highest accuracy among these studies, 97.6%, is achieved by Zhang et al. On the other hand, Jeyaraj and Nadar (2019), Jeyaraj and Nadar (2020), and Jing et al. (2019) used pre-trained deep models (AlexNet, ResNet512, and AlexNet respectively). The highest accuracy among these studies, 98.5%, is achieved by Jeyaraj and Nadar (2020). These results indicate that the TDD algorithm is more successful than the conventional methods, considering the number of classes and images. Each of these studies used only a subset of the TILDA classes and images, whereas the proposed model is tested on all classes and images of the TILDA dataset and still achieves a superior level of success compared with the existing studies based on fewer classes and images.


Table 6  Performance parameters for TDD algorithm

Main Class | Representative Subgroup | SE | SP | DA | FAR | DR | PPV
c1 | r1 | 0.87 | 0.74 | 0.9550 | 0.2576 | 0.13 | 0.95
c1 | r3 | 0.90 | 0.84 | 0.9525 | 0.1556 | 0.10 | 0.98
c2 | r2 | 0.90 | 0.87 | 0.9600 | 0.1304 | 0.10 | 0.98
c2 | r3 | 0.89 | 0.84 | 0.9625 | 0.1569 | 0.11 | 0.98
c3 | r1 | 0.89 | 0.98 | 0.9775 | 0.0233 | 0.11 | 1.00
c3 | r3 | 0.89 | 0.87 | 0.9625 | 0.1277 | 0.11 | 0.98
c4 | r1 | 0.88 | 0.86 | 0.9750 | 0.1429 | 0.12 | 0.98
c4 | r3 | 0.88 | 0.90 | 0.9800 | 0.0962 | 0.12 | 0.99
Average Value | | 0.89 | 0.86 | 0.9656 | 0.1363 | 0.11 | 0.98

Fig. 7  Defect detected images with the source images of classes c1 and c2: (a) the source image c1r1e4n11.tif from class c1, (b) the c1r1e4n11.tif image with the detected defect marked with a red outline, (c) the source image c2r2e1n18.tif from class c2 and (d) the c2r2e1n18.tif image with the detected defect marked in red

Table 7  Comparison of the results of the proposed model with the previous models

References | Methods | Data Set | Number of error classes and images | Accuracy Score
Jing et al., 2019 [18] | Fine-tuned AlexNet | C1-R1 | 6 error classes | 97.2%
Zhang et al., 2015 [19] | LBP, GLCM and Neural Network | C1-R1 and C1-R2 | 6 error classes and 600 images | 97.6%
Salem & Nasri, 2009 [20] | LBP, GLCM and SVM | - | 7 error classes and 480 images | 86.7%
Salem & Abdelkrim, 2020 [21] | GLCM, LBP, LPQ, and SVM | - | 5 error classes and 480 images | 97.25%
Jeyaraj & Nadar, 2020 [22] | ResNet512 based CNN features, Kullback-Leibler Divergence (KLD) and Markov Random Field (MRF) | C1-R2, C2-R2 | 6 error classes | 98.5% and 96.5%
Jeyaraj & Nadar, 2019 [23] | AlexNet based multi-scaling deep CNN | - | 6 error classes and 1850 images | 96.55%
Deotale & Sarode, 2019 [24] | GLCM, Gabor Wavelet, and Random Decision Forest | - | 6 error classes | 84.5%
Proposed Model | TDD Algorithm | C1 (R1, R3), C2 (R2, R3), C3 (R1, R3), C4 (R1, R3) | 8 error classes and 3200 images | 96.56%

4.2 Results of experimental setup

The TDD algorithm validated on the TILDA dataset is then applied to the real experiment and the performance is reported. The working model of fabric defect detection with the industrial setup and controller assembly is shown in Fig. 8.
In total, 40 error-free textile images and 60 defective images are tested in real time, and the resulting confusion matrices are shown in Table 8. An average accuracy of 97% is achieved using the proposed TDD algorithm.
A sample defect identified image is shown in Fig. 9. The inspection interface of the LabVIEW environment contains three main areas: the Results panel, the Inspection Statistics panel and the Display window.
The Results panel lists the steps of the inspection by name; for each step it displays the step type and result (PASS or FAIL). A PASS status for shade variation requires a histogram value of 99 to 100, and a PASS status for the maximum defect detection area requires a value of 0.0 mm². For Fig. 9(a) the status is PASS, while for Figs. 9(b) and 9(c) it is FAIL; in Fig. 9(c) the shade variation value and the maximum defect detection area are 69 and 1.6 mm² respectively. The Display window shows the image under inspection and the status of the defect. For Figs. 9(a, b and c) the fabric images are displayed with any defect indicated by a red outline, together with the shade variation value and the maximum defect detection area. The Inspection Statistics panel reports the processing time of the inspection: 0.24 ms, 0.23 ms and 0.23 ms for Figs. 9(a, b and c) respectively, giving an average inspection time of 0.23 ms per image. Thus, the TDD algorithm in the LabVIEW environment efficiently detects defects in the fabric.

Fig. 8  Working model for fabric defect detection with industrial setup: (a) Industrial Controller 3173-1P20, (b) acA4600-7gc Basler camera and (c) fabric conveyor

Table 8  Confusion matrix for the real-time fabric defect detection

Name of the defect                                            Confusion matrix  SE    SP    DA      FAR     DR    PPV
Holes and cuts                                                [58 1; 2 39]      0.60  0.95  0.9700  0.0488  0.40  0.97
Oil corners and color defects                                 [57 1; 3 39]      0.59  0.93  0.9600  0.0714  0.41  0.95
Thread error and absence of individual threads in the fabric  [58 1; 2 39]      0.60  0.95  0.9700  0.0488  0.40  0.97
Foreign bodies on the fabric (called flight)                  [59 2; 1 38]      0.61  0.97  0.9700  0.0256  0.39  0.98
Wrinkles on the fabric (without mechanical damage)            [59 2; 1 38]      0.61  0.97  0.9700  0.0256  0.39  0.98
Changed lighting conditions                                   [58 1; 2 39]      0.60  0.95  0.9700  0.0488  0.40  0.97
A distortion from the tilting of the camera                   [59 1; 1 39]      0.60  0.98  0.9800  0.0250  0.40  0.98
Average value                                                                   0.60  0.96  0.9700  0.0420  0.40  0.97
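Under the standard definitions (rows of each 2x2 matrix taken as actual classes, columns as predicted), the derived columns of Table 8 can be recomputed from the confusion matrices. The sketch below is an illustration rather than the authors' evaluation code; for the holes-and-cuts matrix it reproduces the tabulated SP, DA, FAR, and PPV values:

```python
# Recompute confusion-matrix metrics under the standard definitions.
# Matrix layout assumed: [[TP, FN], [FP, TN]] (rows actual, columns predicted).

def metrics(m):
    (tp, fn), (fp, tn) = m
    return {
        "SE": tp / (tp + fn),                    # sensitivity (recall)
        "SP": tn / (tn + fp),                    # specificity
        "DA": (tp + tn) / (tp + fn + fp + tn),   # detection accuracy
        "FAR": fp / (fp + tn),                   # false-alarm rate
        "PPV": tp / (tp + fp),                   # positive predictive value
    }

m = metrics([[58, 1], [2, 39]])  # holes-and-cuts row of Table 8
print(round(m["SP"], 2), round(m["DA"], 2), round(m["FAR"], 4), round(m["PPV"], 2))
# -> 0.95 0.97 0.0488 0.97, matching the tabulated values
```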

(a)
1. PASS status indicates that there is no shade variation in the fabric (histogram = 99)
2. PASS status indicates that there is no defect in the fabric (max. defective area = 0.0 mm^2)
3. PASS status for the fabric image with no defect

(b)
1. FAIL status indicates shade variation in the fabric (histogram = 70)
2. FAIL status indicates a defect in the fabric (max. defective area = 5.6 mm^2)
3. FAIL status for the fabric image with a defect

(c)
1. FAIL status indicates shade variation in the fabric (histogram = 69)
2. FAIL status indicates a defect in the fabric (max. defective area = 1.6 mm^2)
3. FAIL status for the fabric image with a defect



Fig. 9  Defect-identified images for the experimental setup: (a) the image with no defect, with a shade variation value of 99 and a maximum defect detection area of 0.0 mm^2; (b) an image with a defect, with a shade variation value of 70 and a maximum defect detection area of 5.6 mm^2; and (c) an image with a defect, with a shade variation value of 69 and a maximum defect detection area of 1.6 mm^2

5 Conclusion and future work

In this paper, an SVM classifier based on the TDD algorithm in the LabVIEW environment is proposed for visual fabric defect detection. The TDD algorithm converts the defect image into a grayscale image, which is then segmented using the wavelet decomposition method. Haralick features are extracted using the GLCM method, and the classification performance is evaluated with the SVM classifier. To validate the TDD algorithm, the multiclass TILDA dataset is employed. Comprehensive validation yields results of 95.50%, 96.25%, 97.75%, and 98% for classes c1, c2, c3, and c4, respectively. Furthermore, the TDD algorithm achieves a mean overall precision score of 96.56% across the four classes. The proposed work is limited to running fabrics, with or without print over the entire fabric, and to four main classes of fabric defects. Future work may extend the approach to all types of defects and all types of fabrics, including T-shirt printing.
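The processing chain summarized above (grayscale conversion, wavelet decomposition, GLCM-based Haralick features, SVM classification) can be sketched in plain NumPy. This is an illustrative reimplementation under stated assumptions (single-level Haar wavelet, one-pixel horizontal GLCM offset, 8 gray levels), not the TDD/LabVIEW code itself; the final SVM step is only indicated in a comment:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet decomposition into LL, LH, HL, HH subbands."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a + b - c - d) / 4,
            (a - b + c - d) / 4, (a - b - c + d) / 4)

def glcm_features(img, levels=8):
    """Haralick-style features (contrast, energy, homogeneity) from a GLCM
    built with a one-pixel horizontal offset (assumed parameters)."""
    rng = img.max() - img.min()
    q = (np.zeros_like(img, dtype=int) if rng == 0
         else ((img - img.min()) / rng * (levels - 1)).astype(int))
    p = np.zeros((levels, levels))
    np.add.at(p, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # count co-occurrences
    p /= p.sum()                                            # normalize to probabilities
    i, j = np.indices(p.shape)
    return np.array([np.sum(p * (i - j) ** 2),         # contrast
                     np.sum(p ** 2),                   # energy
                     np.sum(p / (1 + (i - j) ** 2))])  # homogeneity

def tdd_features(gray_img):
    """Feature vector: GLCM features of each wavelet subband, concatenated.
    This vector would then be fed to a multiclass SVM (e.g. an RBF-kernel SVC)."""
    return np.concatenate([glcm_features(sb) for sb in haar_dwt2(gray_img)])

rng = np.random.default_rng(0)
fabric = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in grayscale patch
print(tdd_features(fabric).shape)  # (12,)
```

In practice, one such feature vector per image would be collected for each defect class and used to train the multiclass SVM classifier, as described in the paper.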

Data availability  All details regarding data availability are provided within this manuscript.

Declarations

Conflict of interest  The authors declare no conflict of interest. This project is sanctioned by the Department of Science and Technology-State Science and Technology Programme (DST-SSTP) (DST/SSTP/2018/232).

