
Real-Time Solid Waste Sorting Machine

Based on Deep Learning


Original Scientific Paper

Imane Nedjar*
University of Tlemcen,
Biomedical Engineering Laboratory
Ecole Supérieure en Sciences Appliquées de Tlemcen, ESSA-Tlemcen,
BP 165 RP Bel Horizon, Tlemcen 13000, Algeria
[email protected]

Mohammed M’hamedi
University of Tlemcen,
Faculty of Sciences, Department of Computer Science
Ecole Supérieure en Sciences Appliquées de Tlemcen, ESSA-Tlemcen,
BP 165 RP Bel Horizon, Tlemcen 13000, Algeria
[email protected]

Mokhtaria Bekkaoui
University of Tlemcen,
Manufacturing Engineering Laboratory of Tlemcen
Ecole Supérieure en Sciences Appliquées de Tlemcen, ESSA-Tlemcen,
BP 165 RP Bel Horizon, Tlemcen 13000, Algeria
[email protected]
*Corresponding author

Abstract – The collection and separation of solid waste represent crucial stages in recycling. However, waste collection currently
relies on static trash bins that lack customization to suit specific locations. By integrating artificial intelligence into trash bins, we can
enhance their functionality. This study proposes the implementation of a sorting machine as an intelligent alternative to traditional
trash bins. This machine autonomously segregates waste without human intervention, utilizing deep learning techniques and
an embedded edge device for real-time sorting. Deploying a convolutional neural network model on a Raspberry Pi, the machine
achieves solid waste identification and segregation via image recognition. Performance evaluation conducted on both the Stanford
dataset and a dataset we created showcases the machine's high accuracy in detection and classification. Moreover, the proposed
machine stands out for its simplicity and cost-effectiveness in implementation.

Keywords: Waste, deep learning, Raspberry Pi, artificial intelligence, sorting machine
Received: March 15, 2024; Received in revised form: June 12, 2024; Accepted: June 12, 2024

1. INTRODUCTION

The surge in population and industrialization has led to a significant increase in daily waste production. According to statistics from the World Bank, global municipal solid waste generation exceeds 2 billion tons annually, a figure expected to soar to 3.4 billion tons by 2050 [1].

Waste in gaseous, liquid, or solid form poses a significant environmental threat if not properly managed and segregated. The Senior Director of the Social, Urban, Rural, and Resilience Global Practice at the World Bank underscores the urgency of implementing effective solid waste management practices to achieve sustainable development goals, which prioritize waste reduction and recycling [2].

Recycling is crucial for ecological preservation and promoting circular economies [3]. However, its benefits are limited because only 13.5% of global waste is recycled, primarily due to insufficient collection and sorting infrastructure. Furthermore, 33% of waste is openly discarded without preliminary sorting, while mixed waste disposal remains prevalent [4].

Volume 15, Number 7, 2024 581


While manual sorting persists in some waste management systems, it introduces various challenges. These include the risk of contamination from bacteria and viruses, the requirement for a large workforce, and the associated expenses of training and oversight [5].

Developed nations are steadily adopting automated waste management systems to facilitate the advancement of intelligent and sustainable cities. These systems harness cutting-edge technologies like robotics [6], artificial intelligence [7], and the Internet of Things [8], sparking considerable research interest in this field. In robotics, Aitken et al. [9] devised an automated system for nuclear waste treatment utilizing a robotic arm. This system aims to execute tasks that are repetitive and hazardous for humans. On the other hand, Gupta et al. [10] presented a cost-efficient solution for defining routes for a mobile autonomous robot assigned to litter emptying. This proposal aims to alleviate the impact on workers' health, reduce greenhouse gas emissions, and minimize operational expenses.

Given the success of artificial intelligence in fields like medicine [11], biology [12], and the environment [13], several researchers have recently studied the use of AI for automatic waste management. For example, Majchrowska et al. [4] used deep learning to detect waste in natural and urban environments. Furthermore, the studies presented in [14-18] proposed deep-learning models for waste classification. Mittal et al. [19] created a smartphone application called Spotgarbage, which utilizes a Convolutional Neural Network (CNN) and enables citizens to monitor the cleanliness of their neighborhood by tracking garbage. The dangers posed by medical waste, such as viruses and bacteria, motivated Zhou et al. [5] to propose a deep learning-based method for medical waste classification. This method identifies eight types of medical waste: gauze, gloves, infusion bags, bottles, infusion apparatus, syringes, needles, and tweezers. Machine learning methods are employed in municipal solid waste classification [20] and are also utilized in container management through sensor measurements [21, 22]. Yang and Thung [23] combined machine learning and deep learning techniques, using a support vector machine and a convolutional neural network to classify waste into six classes: glass, paper, metal, plastic, cardboard, and trash. Internet of Things (IoT) technology has also been employed with machine learning in [24] and with deep learning for waste management systems in [25].

A growing trend in automatic waste management involves the implementation of Smart Waste Bins [26]. Initially, researchers suggested improving waste bin control through level detection sensors [27, 28] and remote control via mobile applications or Global System for Mobile Communications (GSM) technology. For example, Monika et al. [29] employed an intelligent bin equipped with an ultrasonic sensor to monitor the saturation of the dustbin, using GSM technology to alert the authorities to manage the dustbin. Additionally, convolutional neural networks have been integrated into waste bins for efficient trash segregation [30, 31].

Despite ongoing research, current waste collection and sorting relies on trash bins equipped with labels or colors that help individuals dispose of waste into the designated containers. However, variations in these labels across countries can confuse some individuals. Moreover, citizens frequently make errors in their waste disposal practices. Another limitation of existing trash bins is their lack of customization based on specific deployment locations. For instance, waste generated in hospitals differs significantly from that produced in stadiums, public gardens, educational institutions, and other settings.

In this research, our focus is on educational institutions, where we propose the implementation of a sorting machine as an alternative to conventional trash bins. We designed this machine to autonomously segregate waste without requiring human intervention. Since plastic bottles and paper are the primary waste generated in educational institutions and universities, efficiently collecting and sorting these items has become crucial for streamlining the recycling process.

Plastic waste constitutes a significant threat to the environment. The United Nations Environment Programme (UNEP) estimated that there could be more plastic than fish in the ocean by 2050 [32]. Plastic waste not only affects our oceans but also infiltrates our food supply as microplastics and nanoplastics, posing a significant threat to human health. Paper is also a part of our daily waste and impacts the environment. Discarding paper due to printing errors, packaging, and advertising posters is a daily practice that we engage in without fully recognizing its ecological and economic impact.

The primary contributions of this article include:

• Introducing the structure of the sorting machine along with its operational diagram.

• Evaluating two mobile architectures, MobileNet and NASNet-Mobile, as the backbone for our final model.

• Sharing the paper and plastic waste image dataset, and the Python source code of the machine.

The manuscript is structured as follows: the Materials section presents an overview of the machine's components and functionality. In the Method section, we elaborate on the convolutional neural network model utilized in our proposal. We present the evaluation results in the Results and Discussion section. Section 5 presents the conclusion of the article.

582 International Journal of Electrical and Computer Engineering Systems


2. MATERIALS

2.1. General overview of the prototype machine

The machine aims to identify and categorize paper and plastic bottle waste. By doing so, the machine can aid in simplifying the waste segregation process for municipal corporations. We chose these two types of waste due to their high frequency of being discarded in educational institutions and their recyclable nature.

The prototype machine works as follows: as a student approaches the waste disposal machine, a motion sensor detects their presence, and the machine's window opens to allow them to dispose of their waste. After the waste is deposited, a Raspberry Pi camera captures an image, and a convolutional neural network (CNN) model identifies and classifies the waste type. The servomotor then swings the support that holds the waste towards the appropriate container. The machine contains two compartments: one for paper waste and the other for plastic bottles.

To further encourage students to recycle waste, we designed the prototype machine to provide a reward after a fixed number of plastic bottles has been deposited, for example, nb=4. A box containing pens then opens, allowing the student to claim a pen as a reward.

This approach creates an educational and interactive way to promote waste segregation and recycling, encouraging students to engage in sustainable practices. Fig. 1 presents the UML state machine diagram that illustrates the various states and the responses to different events. We created this diagram using Astah software.

Fig. 1. The state machine diagram

2.2. Machine composition

The machine has two main components: the electronic component box and two physical parts. The top part of the machine handles the processing tasks, including the detection, recognition, and classification of waste. The bottom part houses the containers designated for paper and plastic waste. The design of the prototype machine shown in Fig. 2 was modeled using SolidWorks software.

The source code for the prototype machine is accessible at the following link: https://github.com/Nedjar-Imane/Sorting-Machine/tree/main

Fig. 2. Machine's design

• Electronic component box

The electronic components used in the prototype machine, including the Raspberry Pi, Pi Camera, and servomotors, are housed in the electronic component box for protection. Fig. 3 shows the electronic circuit of the prototype machine.
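The control flow captured by the state machine diagram can be sketched in plain Python. This is a hedged illustration of the behavior described above, not the authors' actual firmware: the class and method names (SortingMachine, motion_detected, deposit) are ours, and the reward threshold follows the nb=4 example from the text.

```python
# Illustrative sketch of the machine's control flow: a motion event opens
# the window, each deposited item is classified as "paper" or "plastic",
# the sorting board swings accordingly, and every nb-th plastic bottle
# opens the pen box as a reward. Names are hypothetical, not the firmware.

class SortingMachine:
    def __init__(self, reward_threshold=4):  # nb = 4 in the paper's example
        self.reward_threshold = reward_threshold
        self.bottle_count = 0
        self.window_open = False

    def motion_detected(self):
        """Motion sensor fires: open the window for the student."""
        self.window_open = True

    def deposit(self, predicted_class):
        """Route a deposited item by its predicted class; return actions taken."""
        if not self.window_open:
            return []
        actions = []
        if predicted_class == "plastic":
            actions.append("board_right")       # sorting board turns right
            self.bottle_count += 1
            if self.bottle_count % self.reward_threshold == 0:
                actions.append("open_pen_box")  # reward after nb bottles
        else:
            actions.append("board_left")        # paper goes left
        self.window_open = False                # window closes after disposal
        return actions

machine = SortingMachine()
events = []
for item in ["plastic", "paper", "plastic", "plastic", "plastic"]:
    machine.motion_detected()
    events.append(machine.deposit(item))
# The fourth plastic bottle triggers the pen-box reward.
```

In the real machine these actions would drive the servomotors; here they are returned as strings so the logic can be exercised without hardware.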



Fig. 3. Electronic circuit of the prototype machine, created using FreeCAD software

• The upper part of the prototype machine

Window: the upper part of the prototype machine features an opening where students can deposit the waste. An obstacle avoidance sensor (E18-D80NK) is placed at the top of the window to detect incoming objects. The servomotor (Metal gears RG996R) operates the window to ensure smooth and efficient operation.

Sorting board: once the sensor detects an object, the window opens automatically, and the user can throw their waste into the machine. There is a sorting board inside the prototype machine. It turns to the right for plastic waste and to the left for paper waste.

Box of pens: we developed this prototype machine for educational institutions to encourage students to recycle waste. When a student throws a specified number of plastic bottles, the box of pens opens, allowing the student to take a pen. The servomotor operates the opening mechanism of the box.

• The lower part of the prototype machine

This part is designed to sort the waste into two containers.

3. PROPOSED METHOD

3.1. Dataset

We collected and organized our dataset, titled 'Plastic and Paper Waste', by taking photos with mobile phones in our homes and at the university (see Fig. 4). The dataset contains 400 images for each class, encompassing diverse papers and plastic bottles captured in various positions, states, lighting conditions, and backgrounds.

In our experiment, we also used the Stanford dataset [23], which includes images of trash against a white background organized into six classes. The Stanford dataset contains 594 images of paper and 319 images of plastic bottles. To balance the dataset, we augmented the number of plastic bottle images to 594 using techniques such as rotation and zoom.

Fig. 4. Images from our dataset with different backgrounds: (a) paper, (b) plastic bottle

3.2. Convolutional neural network model

The achievements of CNN-based architectures are outstanding, particularly in computer vision, where accuracy levels often approach perfection.

Our system is specifically tailored for real-time detection and classification of paper and plastic, utilizing CNNs for these tasks. In this study, we opted for the MobileNet and NASNet-Mobile neural network architectures for classification, chosen for their suitability for mobile and embedded devices. Several recent studies have used these architectures, such as [33, 34], including real-time video applications where processing speed is crucial [35, 36].

• MobileNet

Google's MobileNet architecture [37] is tailored for mobile and embedded vision applications due to its lightweight design. Its efficiency stems from the use of depthwise separable convolutions instead of full convolutions. MobileNet introduces two parameters, the Width Multiplier (α) and the Resolution Multiplier (ρ), which enhance the architecture's flexibility.

• NASNet-Mobile

The Neural Architecture Search Network (NASNet) aims to discover an optimal CNN architecture using reinforcement learning; it is a technique developed at Google Brain for searching through a space of neural network configurations [38]. The optimized version, based on Normal and Reduction Cells, is known as NASNet-Mobile. Normal Cells are convolutional cells that return a feature map of the same dimension as the input, while Reduction Cells reduce the feature map's height and width by a factor of two. These cells are combined to create a complete neural network optimized for a specific task while minimizing the computational resources needed for training and inference.
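The parameter saving behind MobileNet's depthwise separable convolutions can be checked with a quick back-of-the-envelope count. The sketch below is ours, following the factorization described in [37]; biases and batch normalization are ignored for simplicity.

```python
# Parameter count of a standard convolution vs. a depthwise separable one
# (kernel k x k, C_in input channels, C_out output channels).
# Biases and batch-norm parameters are omitted for simplicity.

def standard_conv_params(k, c_in, c_out):
    # A full convolution mixes space and channels in one k x k x C_in filter
    # per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k spatial filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution mixes channels
    return depthwise + pointwise

# Example layer: 3x3 kernel, 256 -> 256 channels
std = standard_conv_params(3, 256, 256)
sep = depthwise_separable_params(3, 256, 256)
ratio = sep / std  # equals 1/C_out + 1/k^2, i.e. roughly a 9x reduction here
```

For this layer the separable form needs 67,840 parameters versus 589,824 for the full convolution, which is the roughly 8-9x reduction that makes the architecture practical on a Raspberry Pi.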



3.3. Raspberry Pi and TensorFlow Lite

• Raspberry Pi

The Raspberry Pi is a single-board computer developed by the Raspberry Pi Foundation. The boards are about the size of a credit card and feature a range of input/output pins that connect to sensors, motors, and other electronic components.

• TensorFlow Lite

TensorFlow Lite is a lightweight framework for deploying deep learning models on mobile and embedded devices, an optimized version of the popular TensorFlow library. With TensorFlow Lite, models can run locally on the device without relying on cloud-based services, allowing for real-time processing and lower latency.

In this work, we installed the proposed system on a Raspberry Pi to ensure real-time detection and classification. Additionally, we used the TensorFlow Lite library as the backend for the CNN model.

3.4. Evaluation metrics

The measures considered for evaluating the CNN models rely on various metrics, including accuracy, precision, recall, F1 score, and the kappa statistic:

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (1)

Precision = TP / (TP + FP)   (2)

Recall = TP / (TP + FN)   (3)

F1 score = 2 × (Precision × Recall) / (Precision + Recall)   (4)

where TP refers to True Positive and TN refers to True Negative, which indicate the correct classification of plastic bottle and paper images. On the other hand, FP refers to False Positive and FN refers to False Negative, which indicate the misclassification of plastic bottle and paper images.

Kappa = (Po - Pe) / (1 - Pe)   (5)

The Kappa Statistic [39] is calculated from the difference between the observed agreement (Po) and the expected agreement (Pe), highlighting the gap between the actual agreement and what would be expected by chance alone.

4. RESULTS AND DISCUSSION

We conducted the evaluation using both our dataset and the Stanford dataset. During the training process, we applied transfer learning, initializing the CNNs with pre-trained ImageNet weights. We also employed a fine-tuning strategy to improve the prediction by adding extension layers to the CNNs. These extensions consist of a Global Average Pooling layer, a Dense layer, and a Dropout layer.

The stochastic gradient descent optimizer [40] was used, with a momentum of 0.9, a learning rate of 1e-4, and 20 epochs of training. We used the cross-entropy loss function L, as shown in equation (6), which increases when the predicted probability diverges from the correct label:

L = -Σc y(o,c) log(p(o,c))   (6)

where y is a binary indicator, with values of 0 or 1, denoting whether the class label 'c' accurately identifies observation 'o'. Similarly, p represents the predicted probability of observation 'o' belonging to class 'c'.

The experiment consisted of testing each dataset individually, followed by combining them. We split the datasets into a training set (80%) and a validation set (20%). Fig. 5 shows that both models achieved accuracy levels above 96% and 98%.

Fig. 5. The accuracy obtained for each dataset

MobileNet outperformed NASNet-Mobile on all datasets. We combined the Stanford and the proposed datasets to address the overfitting issue in classification. MobileNet achieved the highest score of 99.50%, with an error rate of 0.0136, on the combined dataset (see Fig. 6 and Fig. 7). In Table 1, we compare the values of kappa, precision, recall, and F1 score for MobileNet and NASNet-Mobile. The results show that MobileNet outperforms NASNet-Mobile on all the metrics used. Based on these experimental results, MobileNet was chosen as the base model for our machine (see Fig. 8).

Fig. 6. The accuracy of MobileNet on the combined dataset
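The metrics in equations (1)-(5) can be computed directly from the confusion counts. The sketch below is our illustration, not the authors' evaluation script; the example counts are hypothetical (two plastic bottles misread as paper out of a 398-image split) and are chosen only to show the arithmetic.

```python
# Accuracy, precision, recall, F1 and Cohen's kappa from confusion counts,
# following equations (1)-(5). TP/TN count correct classifications,
# FP/FN the misclassifications. Example counts below are illustrative only.

def metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total                         # eq. (1)
    precision = tp / (tp + fp)                           # eq. (2)
    recall = tp / (tp + fn)                              # eq. (3)
    f1 = 2 * precision * recall / (precision + recall)   # eq. (4)
    # Kappa: observed agreement Po vs. chance agreement Pe, eq. (5)
    po = accuracy
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
    kappa = (po - pe) / (1 - pe)
    return accuracy, precision, recall, f1, kappa

# Hypothetical example: 398 test images, plastic treated as the positive
# class, two plastic bottles misclassified as paper (fn=2), nothing else.
acc, prec, rec, f1, kap = metrics(tp=197, tn=199, fp=0, fn=2)
```

With these counts the plastic precision comes out at 100% and the recall near 99%, the same shape of result the paper reports for its model without CLR.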



Fig. 7. The loss of MobileNet on the combined dataset

Table 1. Kappa, precision, recall, and F1 score obtained for the combined dataset

                 Kappa   Precision         Recall            F1 score
                         Paper   Plastic   Paper   Plastic   Paper   Plastic
MobileNet        0.98    0.99    1.00      1.00    0.99      0.99    0.99
NASNet-Mobile    0.94    0.96    0.98      0.98    0.96      0.97    0.97

Fig. 8. The architecture of our model based on MobileNet

To enhance the convergence and generalization performance of the model, we applied Cyclical Learning Rates (CLR).

The concept behind CLR is to discover an optimal learning rate schedule by systematically varying the learning rate throughout the training process. Instead of using a fixed learning rate, CLR adjusts the learning rate cyclically, oscillating between a minimum and a maximum value over a predefined number of iterations [41]. We chose a minimum learning rate of 1e-4 and a maximum learning rate of 1e-1. The resulting CLR schedule is presented in Fig. 9.

Fig. 9. Cyclical Learning Rates schedule

Applying CLR to our model improved the accuracy obtained while accelerating the training process. Fig. 10 shows that all the metrics improved, reaching 100%.

Fig. 10. The metrics obtained using the SGD optimizer with and without CLR on our model for the combined dataset

Out of 398 test images, comprising 199 images of plastic bottles and 199 paper images, only two were misclassified by our model without CLR. We obtained a paper precision of 99%, a plastic bottle precision of 100%, a paper recall of 100%, a plastic bottle recall of 99%, a kappa of 98%, and an F1 score of 99%. When we used the proposed model with CLR, the two misclassified images were correctly classified, improving all metrics to 100%.

Since our model runs on a Raspberry Pi, balancing performance and computation time was crucial. To achieve this, we opted for TensorFlow Lite instead of TensorFlow.



In real time, the Raspberry Pi camera captures an image of the waste when the student deposits it into the prototype machine. The proposed model then classifies the waste based on the image, and finally the prototype machine directs the waste to the appropriate container.

Several researchers have focused on developing models for waste recognition, including medical waste [5], construction and demolition waste [42], and municipal solid waste [14-18]. In this work, we propose both a model for recognition and classification and a prototype machine that makes the idea practical and feasible.

4.1. Future perspective

Our idea is to enhance existing trash bins with intelligent machines capable of detecting, recognizing, and sorting waste.

The proposed machine was initially designed for educational institutions, but its application is not limited to them; it can also be adapted for use in public spaces and even in homes. For this purpose, certain modifications need to be made to the machine's system and mechanism.

The machine's system must identify and classify other types of waste, such as glass, metal, organic waste, food scraps, and non-recyclable waste. The machine can be customized for the place where it will be used. For example, if the machine is to be used in healthcare facilities, the biomedical waste generated there, such as needles, syringes, and other medical equipment, must be included.

In our proposed prototype machine, the sorting board moves the waste to the appropriate container. In cases where there are more than two types of waste, the machine needs a rotation mechanism that allows the containers to rotate so that the sorting board can direct the waste into the correct container.

5. CONCLUSION

Intelligent waste management is considered a viable solution for achieving sustainable development goals. In this study, we introduced the design of a real-time sorting prototype machine that leverages artificial intelligence for effective solid waste collection and separation.

The machine comprises physical components for waste sorting and a software component for identification and classification. The physical components incorporate an object detection sensor (E18-D80NK), servomotors (Metal Gears RG996R) for movement, and a Raspberry Pi for real-time detection and classification.

To identify and classify waste, we tested two baseline models, namely MobileNet and NASNet-Mobile, on both the Stanford dataset and our proposed dataset. The final model, based on MobileNet, achieved an accuracy of 99.50% without a cyclical learning rate and 100% with it.

To pave the way for potential improvements and industrial realization, we have made the machine's source code readily accessible.

Data Availability Statement

The dataset titled 'Plastic and Paper Waste' is available on GitHub at: https://github.com/Nedjar-Imane/Sorting-Machine/tree/main/Datasets

6. REFERENCES

[1] S. Kaza, L. Yao, P. Bhada-Tata, F. Van Woerden, "What a Waste 2.0: A Global Snapshot of Solid Waste Management to 2050", https://openknowledge.worldbank.org/handle/10986/2174 (accessed: 2024)

[2] WorldBank.org, "What a Waste: An Updated Look into the Future of Solid Waste Management", https://www.worldbank.org/en/news/immersive-story/2018/09/20/what-a-waste-an-updated-look-into-the-future-of-solid-waste-management (accessed: 2024)

[3] G. Rozing, D. Jukić, H. Glavaš, M. Žnidarec, "Recyclability and ecological-economic analysis of a simple photovoltaic panel", International Journal of Electrical and Computer Engineering Systems, Vol. 15, No. 1, 2022, pp. 99-104.

[4] S. Majchrowska, A. Mikołajczyk, M. Ferlin, Z. Klawikowska, M. A. Plantykow, A. Kwasigroch, K. Majek, "Deep learning-based waste detection in natural and urban environments", Waste Management, Vol. 138, 2022, pp. 274-284.

[5] H. Zhou et al. "A deep learning approach for medical waste classification", Scientific Reports, Vol. 12, No. 1, 2022, p. 2159.

[6] A. G. Satav, S. Kubade, C. Amrutkar, G. Arya, A. Pawar, "A state-of-the-art review on robotics in waste sorting: scope and challenges", International Journal on Interactive Design and Manufacturing, Vol. 17, No. 6, 2023, pp. 2789-2806.

[7] B. Fang et al. "Artificial intelligence for waste management in smart cities: a review", Environmental Chemistry Letters, Vol. 21, No. 4, 2023, pp. 1959-1989.

[8] T. Anagnostopoulos, A. Zaslavsky, K. Kolomvatsos, A. Medvedev, P. Amirian, J. Morley, S. Hadjieftymiades, "Challenges and opportunities of waste management in IoT-enabled smart cities: a survey", IEEE Transactions on Sustainable Computing, Vol. 2, No. 3, 2017, pp. 275-289.

[9] J. M. Aitken et al. "Autonomous nuclear waste management", IEEE Intelligent Systems, Vol. 33, No. 6, 2018, pp. 47-55.

[10] A. Gupta, M. J. Van der Schoor, J. Bräutigam, V. B. Justo, T. F. Umland, D. Goehlich, "Autonomous service robots for urban waste management - multiagent route planning and cooperative operation", IEEE Robotics and Automation Letters, Vol. 7, No. 4, 2022, pp. 8972-8979.

[11] T. Schaffter et al. "Evaluation of combined artificial intelligence and radiologist assessment to interpret screening mammograms", JAMA Network Open, Vol. 3, No. 3, 2020, pp. e200265-e200265.

[12] B. Richards, D. Tsao, A. Zador, "The application of artificial intelligence to biology and neuroscience", Cell, Vol. 185, No. 15, 2022, pp. 2640-2643.

[13] V. Hocenski, A. Lončarić Božić, N. Perić, D. Klapan, Ž. Hocenski, "Environmental impact estimation of ceramic tile industry using modeling with neural networks", International Journal of Electrical and Computer Engineering Systems, Vol. 13, No. 1, 2022, pp. 29-39.

[14] C. Shi, C. Tan, T. Wang, L. Wang, "A waste classification method based on a multilayer hybrid convolution neural network", Applied Sciences, Vol. 11, No. 18, 2021, p. 8572.

[15] M. Al Duhayyim, T. A. Elfadil Eisa, F. N. Al-Wesabi, A. Abdelmaboud, M. A. Hamza, A. S. Zamani, M. Rizwanullah, R. Marzouk, "Deep reinforcement learning enabled smart city recycling waste object classification", Computers, Materials & Continua, Vol. 71, No. 3, 2022, pp. 5699-5715.

[16] A. Mitra, "Detection of waste materials using deep learning and image processing", California State University San Marcos, USA, Master Thesis, 2020.

[17] M. Malik, S. Sharma, M. Uddin, C. L. Chen, C. M. Wu, P. Soni, S. Chaudhary, "Waste classification for sustainable development using image recognition with deep learning neural network models", Sustainability, Vol. 14, No. 12, 2022, p. 7222.

[18] A. Altikat, A. Gulbe, S. Altikat, "Intelligent solid waste classification using deep convolutional neural networks", International Journal of Environmental Science and Technology, Vol. 19, 2022, pp. 1285-1292.

[19] G. Mittal, K. B. Yagnik, M. Garg, N. C. Krishnan, "Spotgarbage: smartphone app to detect garbage using deep learning", Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany, 12-16 September 2016, pp. 940-945.

[20] S. Chaturvedi, B. P. Yadav, N. A. Siddiqui, "An assessment of machine learning integrated autonomous waste detection and sorting of municipal solid waste", Nature Environment & Pollution Technology, Vol. 20, No. 4, 2021, pp. 1515-1525.

[21] D. Rutqvist, D. Kleyko, F. Blomstedt, "An automated machine learning approach for smart waste management systems", IEEE Transactions on Industrial Informatics, Vol. 16, No. 1, 2020, pp. 384-392.

[22] S. Dubey, P. Singh, P. Yadav, K. K. Singh, "Household waste management system using IoT and machine learning", Procedia Computer Science, Vol. 167, 2020, pp. 1950-1959.

[23] M. Yang, G. Thung, "Classification of trash for recyclability status", Stanford University, Stanford, CA, USA, CS229 project report, 2016.

[24] R. Khan, S. Kumar, A. K. Srivastava, N. Dhingra, M. Gupta, N. Bhati, P. Kumari, "Machine learning and IoT-based waste management model", Computational Intelligence and Neuroscience, Vol. 2021, No. 1, 2021, p. 5942574.

[25] M. W. Rahman, R. Islam, A. Hasan, N. I. Bithi, M. M. Hasan, M. M. Rahman, "Intelligent waste management system using deep learning with IoT", Journal of King Saud University - Computer and Information Sciences, Vol. 34, No. 5, 2022, pp. 2072-2087.

[26] A. Noiki, S. A. Afolalu, A. A. Abioye, C. A. Bolu, M. E. Emetere, "Smart waste bin system: a review", Proceedings of the 4th International Conference on Science and Sustainable Development, Ota, Nigeria, 3-5 August 2020, p. 012036.

[27] Y. Zhao, S. Yao, S. Li, S. Hu, H. Shao, T. F. Abdelzaher, "VibeBin: a vibration-based waste bin level detection system", Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 1, No. 3, 2017, pp. 1-22.

[28] S. J. Ramson, D. J. Moni, S. Vishnu, T. Anagnostopoulos, A. A. Kirubaraj, X. Fan, "An IoT-based bin level monitoring system for solid waste management", Journal of Material Cycles and Waste Management, Vol. 23, 2021, pp. 516-525.

[29] K. Monika et al. "Smart dustbin - an efficient garbage monitoring system", International Journal of Engineering Science and Computing, Vol. 6, No. 6, 2016, pp. 7113-7116.

[30] S. Hulyalkar, R. Deshpande, K. Makode, S. Kajale, "Implementation of smartbin using convolutional neural networks", International Research Journal of Engineering and Technology, Vol. 5, No. 4, 2018, pp. 1-7.

[31] I. E. Agbehadji, A. Abayomi, K. H. N. Bui, R. C. Millham, E. Freeman, "Nature-inspired search method and custom waste object detection and classification model for smart waste bin", Sensors, Vol. 22, No. 16, 2022, p. 6176.

[32] UNEP.org, "UN Declares War on Ocean Plastic", https://www.unep.org/news-and-stories/press-release/un-declares-war-ocean-plastic-0 (accessed: 2024)

[33] X. Li, J. Du, J. Yang, S. Li, "When MobileNetV2 meets transformer: a balanced sheep face recognition model", Agriculture, Vol. 12, No. 8, 2022, p. 1126.

[34] D. Fangchun, L. Jingbing, A. B. Uzair, L. Jing, C. Yen-Wei, L. Dekai, "Robust zero watermarking algorithm for medical images based on improved NasNet-Mobile and DCT", Electronics, Vol. 12, No. 16, 2023, p. 3444.

[35] I. Nedjar, H. M. Sekkil, M. Mebrouki, M. Bekkaoui, "A comparison of convolutional neural network models for driver fatigue detection", Proceedings of the 7th International Conference on Image and Signal Processing and their Applications, Mostaganem, Algeria, 8-9 May 2022, pp. 1-6.

[36] S. Manzoor, E. J. Kim, S. H. Joo, S. H. Bae, G. G. In, K. J. Joo, J. H. Choi, T. Y. Kuc, "Edge deployment framework of guardbot for optimized face mask recognition with real-time inference using deep learning", IEEE Access, Vol. 10, 2022, pp. 77898-77921.

[37] A. G. Howard et al. "MobileNets: efficient convolutional neural networks for mobile vision applications", arXiv:1704.04861, 2017.

[38] B. Zoph, V. Vasudevan, J. Shlens, Q. V. Le, "Learning transferable architectures for scalable image recognition", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18-23 June 2018, pp. 8697-8710.

[39] A. J. Viera, J. M. Garrett, "Understanding interobserver agreement: the kappa statistic", Family Medicine, Vol. 37, No. 5, 2005, pp. 360-363.

[40] M. Zinkevich, M. Weimer, L. Li, A. Smola, "Parallelized stochastic gradient descent", Advances in Neural Information Processing Systems, Vol. 23, 2010.

[41] L. N. Smith, "Cyclical learning rates for training neural networks", Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Santa Rosa, CA, USA, 24-31 March 2017, pp. 464-472.

[42] K. Lin, T. Zhou, X. Gao, Z. Li, H. Duan, H. Wu, G. Lu, Y. Zhao, "Deep convolutional neural networks for construction and demolition waste classification: VGGNet structures, cyclical learning rate, and knowledge transfer", Journal of Environmental Management, Vol. 318, 2022, p. 115501.

