EEG Report Final
OF
BACHELOR OF ENGINEERING
(Computer Engineering)
BY
JSPM Narhe Technical Campus
DEPARTMENT OF COMPUTER ENGINEERING
CERTIFICATE
This is to certify that the BE Project Report entitled
Submitted by
is a bonafide work carried out by them under the supervision of Prof. Ms. M. S.
Namose, and it is submitted towards partial fulfillment of the requirement of Savitribai
Phule Pune University, Pune, for the award of the degree of Bachelor of Engineering
(Computer Engineering).
Place: Pune
Date: 18/05/22
ACKNOWLEDGEMENT
We would like to express our gratitude and appreciation to all those who made
it possible for us to complete this report. Special thanks are due to our supervisor,
Prof. Ms. M. S. Namose, whose help, stimulating suggestions, and encouragement
helped us throughout the development of the project and the writing of this report. We
also sincerely thank her for the time spent proofreading and correcting our many
mistakes.
We would also like to extend our gratitude to the Head of Department, Dr. N. A.
Auti, and our respected Director, Dr. S. A. Choudhari, for providing us with all the
facilities that were required.
Lastly, we would like to thank our parents and friends, whose valuable
suggestions and guidance have been helpful in various phases of the completion of
the project.
Sincerely,
Aadit Bagga
Exam Seat No: B151004204
Sahil Bora
Exam Seat No: B151004211
Akash Deshmukh
Exam Seat No: B151004217
Pinak Wadekar
Exam Seat No: B151004279
ABSTRACT
It is quite difficult for those who are paralysed or on bed rest to express
themselves. They experience emotions like everyone else, yet it is difficult for them to
enjoy their time. In India, millions of individuals are afflicted with various ailments,
and there are few ways to keep them entertained.
An EEG (electroencephalogram) device is used to track a person's brain
activity. The system identifies the emotion of a paralysed or bedridden patient by
obtaining electroencephalogram signals from EEG devices, using those signals as
data, and applying the circumplex model. Once the system has identified the emotion,
the music recommendation algorithm comes into play: the system recommends music
based on the patient's emotional state, so that people who are paralysed or on bed rest
may relax and enjoy entertainment as a form of therapy. The music recommendation
system is based on two approaches: the first is expert-based, and the second is based
on characteristics of the music itself.
INDEX
Chapter 1 Synopsis
1.1 Project Title
1.2 Technical Keywords
1.3 Problem Statement
1.4 Abstract
1.5 Objectives
Chapter 2 Technical Keywords
2.1 Area of Project
2.2 Technical Keywords
Chapter 3 Introduction
3.1 Introduction
3.2 Motivation
3.3 Problem Definition
3.4 Objectives
Chapter 4 Literature Survey
Chapter 5 Problem Definition and Scope
5.1 Problem Definition
5.2 Project Scope
Chapter 6 Project Plan
5
Annexure A: Laboratory assignments on Project Analysis of
Algorithmic design
Annexure B: Laboratory assignments (OOMD) on Project
Quality and Reliability testing of Project Design
Annexure C: Project planner
Annexure D: Published Paper as per author guidelines and
format given by conference, Certificate of Publication
Annexure E: References (In IEEE Format)
LIST OF FIGURES
LIST OF TABLES
JSPM NTC, Department of Computer Engineering 2021-22
1.1 Project Title:
Music Recommendation with EEG signals using Machine Learning and Deep
Learning Technique
1.4 Abstract
It is quite difficult for those who are paralysed or on bed rest to express themselves.
They experience emotions like everyone else, yet it is difficult for them to enjoy their
time. In India, millions of individuals are afflicted with various ailments, and there are
few ways to keep them entertained.
An EEG (electroencephalogram) device is used to track a person's brain activity. The
system identifies the emotion of a paralysed or bedridden patient by obtaining
electroencephalogram signals from EEG devices, using those signals as data, and
applying the circumplex model. Once the system has identified the emotion, the music
recommendation algorithm comes into play: the system recommends music based on
the patient's emotional state, so that people who are paralysed or on bed rest may
relax and enjoy entertainment as a form of therapy.
The music recommendation system is based on two approaches: the first is expert-
based, and the second is based on characteristics of the music itself.
1.5 Objectives
● Study the DEAP dataset to understand the basics of EEG data.
● Study the working of EEG scanner devices.
● Extract features of the patient through the scanner.
● Preprocess the data captured through the device.
● Classify the signals and capture the arousal and valence values.
● Using the arousal and valence values, detect the emotion via the circumplex model.
● Study the music recommendation system, which recommends music according to
emotion.
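The last two steps can be sketched concretely. The circumplex model places emotions on a valence–arousal plane; assuming valence and arousal are rated on the 1–9 scale used by DEAP, a minimal quadrant mapping might look like the following (the quadrant labels and midpoint are illustrative assumptions, not a fixed part of the model):

```python
def circumplex_emotion(valence, arousal, midpoint=5.0):
    """Map DEAP-style valence/arousal ratings (1-9 scale) to a coarse
    emotion quadrant of Russell's circumplex model."""
    if valence >= midpoint and arousal >= midpoint:
        return "happy/excited"      # high valence, high arousal
    if valence >= midpoint:
        return "calm/relaxed"       # high valence, low arousal
    if arousal >= midpoint:
        return "angry/stressed"     # low valence, high arousal
    return "sad/bored"              # low valence, low arousal

print(circumplex_emotion(7.2, 6.5))  # happy/excited
print(circumplex_emotion(3.1, 2.0))  # sad/bored
```

The detected quadrant label can then be handed to the music recommendation step.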
2.1 Area of Project
Supervised machine learning requires the data scientist to train the algorithm with
both labeled inputs and desired outputs; supervised learning algorithms are well suited
to classification tasks such as this one. In deep learning, models are trained using a
large set of labeled data and neural network architectures that contain many layers.
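As a hedged illustration of this labeled-inputs/desired-outputs setup (toy feature vectors, not the DEAP data), a nearest-centroid classifier is about the simplest supervised learner:

```python
import numpy as np

# Toy labeled data: rows are feature vectors, labels are the desired outputs.
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y_train = np.array([0, 0, 1, 1])

def fit_centroids(X, y):
    """Supervised 'training': compute one centroid per class label."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Assign the label of the nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

model = fit_centroids(X_train, y_train)
print(predict(model, np.array([0.15, 0.15])))  # 0
print(predict(model, np.array([0.85, 0.90])))  # 1
```

Deep networks replace the centroid rule with many trainable layers, but the training data keeps the same paired (input, label) form.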
3.1 Introduction
Emotion awareness is one of the most important subjects in the field of affective
computing. Human emotion can be predicted using nonverbal behavioural methods
such as facial expression recognition, verbal behavioural methods such as speech
emotion recognition, or physiological-signal-based methods such as emotion
recognition from the electroencephalogram (EEG). However, data obtained from
either nonverbal or verbal behaviours are indirect emotional signals that only suggest
brain activity. Unlike nonverbal or verbal actions, EEG signals are recorded directly
from the human brain cortex and thus may be more effective in representing the inner
emotional states of the brain. Consequently, EEG data can measure human emotion
more accurately than behavioural data. For this reason, identifying human emotion
from EEG signals has become an important research subject in current emotional
brain-computer interfaces (BCIs), which aim to infer human emotional states from
recorded EEG signals.
Such systems can monitor a patient's mental state to prevent stress-related
illnesses. Suicide is a common cause of death for both teenagers and adults, and
recognizing a user's emotional state can help prevent it, as can offering advice to
people with mental-health issues. The goal is to create a training model based on
physiological signals (EEG signals from a synchronized dataset) that can detect
elevated stress levels over long periods or abrupt changes in mental exhaustion and
emotional response, alongside facial expression and user speech, to detect user
emotions. Solutions that monitor signs of stress-induced occupational disease, or that
promptly detect acute rises in stress levels in especially hazardous work scenarios,
may also be needed.
3.2 Motivation
Across India, the highest number of disabled people live in the state of Uttar
Pradesh (approximately 3.6 million), followed by Bihar (approximately 1.9 million),
West Bengal (approximately 1.8 million), and Tamil Nadu and Maharashtra
(approximately 1.6 million each). Tamil Nadu is the only state with a larger population
of disabled females than disabled males. Other states, such as Arunachal Pradesh,
have the highest proportion of disabled males (approximately 66.6%) and the lowest
proportion of disabled females.
Disabled people in India by age group:
● 5% are children in the age group 0–4 years
● 7% are children in the age group 5–9 years
● 17% are children or teenagers in the age group 10–19 years
● 16% are teenagers or young people in the age group 20–29 years
● 13% are young or middle-aged people in the age group 30–39 years
● 12% are middle-aged people in the age group 40–49 years
● 9% are older people in the age group 50–59 years
● 10% are older people in the age group 60–69 years
● 7% are older people in the age group 70–79 years
For these people it is very important to stay positive and live a happy and healthy
life, because they are physically disabled, not mentally disabled. The aim is therefore
to create a system that helps them stay away from stress: using EEG signals, first
detect the emotion of the patient, and then, according to that emotion, suggest music
or a movie as therapy to maintain a positive mindset.
3.4 Objectives
● Study the DEAP dataset to understand the basics of EEG data.
● Study the working of EEG scanner devices.
● Extract features of the patient through the scanner.
● Preprocess the data captured through the device.
● Classify the signals and capture the arousal and valence values.
● Using the arousal and valence values, detect the emotion via the circumplex model.
● Study the music recommendation system, which recommends music according to
emotion.
4.1 Literature Survey
According to [1], a framework is proposed for learning a user's mood utilizing
data from a wearable device paired with physiological sensor signals such as galvanic
skin response (GSR), photoplethysmography (PPG), and electroencephalography
(EEG), as well as data from a camera. This data is added to the music
recommendation engine as a complement; sensor and facial-expression data may
therefore increase the recommendation engine's usefulness and accuracy.
According to [4], brain signals were used to study the influence of music tracks in
English and Urdu on human stress levels. Twenty-seven people (14 men and 13
women) with Urdu as their first language and aged 20 to 35 volunteered to take part in
the research. The subjects' electroencephalograph (EEG) signals were captured as
they listened to several music tracks using a four-channel MUSE headset. A state and
trait anxiety questionnaire asked participants to subjectively rate their stress level. The
four genres of English music utilized in this research were rock, metal, electronic, and
rap.
According to [8], a noninvasive EEG-based BCI was developed for a robotic-arm
control system that allows users to perform multitarget reach and grasp tasks while
avoiding obstacles via hybrid control. Findings from seven individuals showed that
motor-imagery (MI) training may change brain rhythms, and six of them performed the
online tasks using the hybrid-control-based robotic-arm system. The suggested
system performs well thanks to a combination of MI-based EEG, computer vision,
gaze recognition, and partly autonomous guidance, which greatly improves online task
accuracy and reduces the mental load generated by long-term mental activity.
According to [9], preprocessing consists of three phases: normalization of the
EEG data, application of suitable filters to select the useful sections of the data, and
data management. After preprocessing, the preprocessed data are used to train an
LSTM network, and the Softmax function is then used to classify the input data into
normal and seizure data.
According to [10], data is collected from the user's text on social media using an
IoT-based system, and the text data is analyzed for emotion detection. Two
approaches for music suggestion are offered after the emotion is identified. The first is
an expert-based approach, in which experts assign music to emotions. The second is
a feature-based strategy, which does not need the assistance of an expert: the music's
rhythm and articulation are used to assign songs to emotions. A feedback mechanism
was also devised, so the algorithm provides music recommendations based on user
response.
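As a hedged illustration of such a feature-based strategy (the thresholds, feature names, and mood tags below are assumptions for the sketch, not taken from [10]):

```python
def mood_from_features(tempo_bpm, mode):
    """Toy feature-based mood tagger: fast tempo in a major mode maps to
    'happy', slow tempo in a minor mode to 'sad', everything else to 'calm'.
    Thresholds are illustrative only."""
    if tempo_bpm >= 120 and mode == "major":
        return "happy"
    if tempo_bpm < 90 and mode == "minor":
        return "sad"
    return "calm"

print(mood_from_features(130, "major"))  # happy
print(mood_from_features(70, "minor"))   # sad
```

A real feature-based system would derive such attributes (rhythm, articulation, tempo, mode) automatically from the audio.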
5.1. Problem Definition
6.1 Project Plan
7.1 Software Requirements (Platform Choice)
Technologies and tools used in the project are as follows.
Technology used: Front End
● Memory: 2 GB or above
8.1 Proposed system design
The proposed system architecture contains the design. The user enters voice or
text input; speech is converted into text, the text is transmitted as JSON to a server for
analysis, emotions are computed, and keywords are searched to return candidate
songs, which are then filtered again by collaborative filtering. In the first step, emotions
and keywords are considered; based on the most likely extracted data about what
other users listened to, the cold-start problem is reduced, because all the data is
already available, sorted by the most popular genres, type of music, release year,
length, singer, and feelings.
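The emotion filtering and popularity ranking described above can be sketched as follows; the catalog entries and field layout are hypothetical stand-ins for the real song database:

```python
# Hypothetical catalog entries: (title, emotion_tag, popularity score)
catalog = [
    ("Song A", "happy", 90),
    ("Song B", "sad", 75),
    ("Song C", "happy", 60),
    ("Song D", "calm", 80),
]

def recommend(catalog, emotion, top_k=2):
    """Keep songs tagged with the detected emotion, then rank by
    popularity -- a stand-in for the collaborative filtering step."""
    matches = [s for s in catalog if s[1] == emotion]
    ranked = sorted(matches, key=lambda s: -s[2])
    return [title for title, _tag, _pop in ranked][:top_k]

print(recommend(catalog, "happy"))  # ['Song A', 'Song C']
```

In the full system the popularity ordering would come from what other users with similar extracted data listened to, which is what mitigates the cold-start problem.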
8.1.1 Preprocessing
Preprocessing is a method for transforming raw data into a format that is both
usable and efficient. It comprises several processes applied to the input images,
during which the images are altered: the shape of the data is reorganized without
affecting its substance. In this procedure, several types of arrangements are made
based on the criteria required to continue to the next step.
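For EEG signals, "usable and efficient" typically means removing offsets and putting channels on a comparable scale. A minimal sketch, assuming a raw 1-D signal (the exact filters used by the system are not specified here, so this shows only offset removal and z-score normalization):

```python
import numpy as np

def preprocess(signal):
    """Minimal EEG-style preprocessing sketch: remove the DC offset,
    then z-score normalize so channels are on a comparable scale."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()      # remove DC offset
    std = signal.std()
    return signal / std if std > 0 else signal

raw = [10.0, 12.0, 9.0, 11.0, 13.0]
clean = preprocess(raw)
print(round(clean.mean(), 6), round(clean.std(), 6))  # 0.0 1.0
```

A real pipeline would add band-pass filtering and artifact rejection before feature extraction.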
Figure 8.2: Logical workflow of proposed classifier (CNN)
1: Convolutional Layer: This is the foundation for constructing a CNN model. The
layer performs mathematical operations on the input image and resizes it into an
M × M format. Its output describes the image's features, such as edge and corner
mappings, and is known as a feature map. This information is then passed to the
following layer.
2: Pooling Layer: This layer connects the convolutional and fully connected layers
and is used to decrease the network's parameters and computation. It provides max-
pooling and average-pooling methods, of which max pooling is the most frequent. The
output of the pooling layer is sent to the fully connected layer, where the classification
process takes place.
3: Classification and Recommendation: In this layer a test classifier is utilized for
recommendation. The input features and the trained model are fed to the test
classifier, which recommends a specific song for the end user accordingly.
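The convolution and max-pooling operations described above can be sketched with plain NumPy (a toy 4×4 "image" and an assumed horizontal-edge kernel, not the actual trained model):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNNs):
    slide the kernel over the image and sum elementwise products."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, the most common pooling method."""
    h, w = x.shape
    return x[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)   # stand-in "image"
edge = np.array([[1.0, -1.0]])                   # toy horizontal-edge kernel
fmap = conv2d(img, edge)                         # feature map
print(fmap.shape, max_pool(fmap).shape)          # (4, 3) (2, 1)
```

The pooled feature map would then be flattened and passed to the fully connected layer for classification.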
Step 7: Feed the output of the forward layer back to the input layer for feedback: FeedLayer[] {Tsf, w}
Testing
Input: Test dataset containing test instances TestDBList[], train dataset built in the
training phase TrainDBList[], threshold Th.
Output: HashMap<class_label, SimilarityWeight> of all instances whose weight
violates the threshold score.
Step 1: Read each test instance using the equation below:
    testFeatureSet(m) = Σ_{m=1}^{n} featureSet(A[i] … A[n], TestDBList)
Step 2: Extract each feature as a hot vector (input neuron) from testFeatureSet(m)
using the equation below:
    Extracted_FeatureSet_x[t … n] = Σ_{x=1}^{n} (t) testFeatureSet(m)
Extracted_FeatureSet_x[t] contains the feature vector of the respective domain.
Step 3: Read each train instance using the equation below:
    trainFeatureSet(m) = Σ_{m=1}^{n} featureSet(A[i] … A[n], TrainDBList)
Step 4: Extract each feature as a hot vector (input neuron) from trainFeatureSet(m)
using the equation below:
    Extracted_FeatureSet_x[t … n] = Σ_{x=1}^{n} (t) trainFeatureSet(m)
Extracted_FeatureSet_x[t] contains the feature vector of the respective domain.
Step 5: Map each test feature set to all respective training feature sets:
    weight = calcSim(FeatureSet_x || Σ_{y=1}^{n} FeatureSet[y])
Step 6: Return weight.
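Step 5's calcSim can plausibly be read as a cosine-style similarity between the test feature vector and each class's training feature vector; a sketch under that assumption (the feature values and class labels are hypothetical):

```python
import numpy as np

def similarity_weight(test_vec, train_vecs):
    """Cosine similarity between one test feature vector and each
    training-class vector -- one plausible reading of calcSim."""
    t = np.asarray(test_vec, dtype=float)
    weights = {}
    for label, v in train_vecs.items():
        v = np.asarray(v, dtype=float)
        weights[label] = float(t @ v / (np.linalg.norm(t) * np.linalg.norm(v)))
    return weights

def above_threshold(weights, th):
    """Keep the <class_label, weight> pairs exceeding threshold Th."""
    return {k: v for k, v in weights.items() if v >= th}

train = {"happy": [1.0, 0.0, 1.0], "sad": [0.0, 1.0, 0.0]}
w = similarity_weight([0.9, 0.1, 0.8], train)
print(max(w, key=w.get))  # happy
```

The HashMap output of the algorithm corresponds to `above_threshold(w, Th)` here.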
9.1 Data Flow Diagrams
9.2 UML Diagrams
Figure 9.4 : Sequence Diagram
A use case diagram at its simplest is a representation of a user's interaction with the
system that shows the relationship between the user and the different use cases in
which the user is involved. A use case diagram can identify the different types of users
of a system and the different use cases and will often be accompanied by other types of
diagrams as well.
Figure 9.5 : Use Case diagram
Figure 9.6 : Activity Diagram
Figure 9.7 : Component Diagram
10.1 Results
The results of both neural network architectures, the feed-forward ANN and the CNN,
are presented and discussed in this section. After pre-processing, the EEG signals
were fed into both networks; in the case of the CNN, this was done after performing
the filtrations and transformation. The findings show how much a CNN may enhance a
classification task compared to a normal feed-forward neural net. The performance of
the nets was evaluated by repeating the whole training procedure ten times and
evaluating after each run. In this way, we may evaluate the mean value of each neural
network's ultimate performance.
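This repeated evaluation amounts to averaging per-run accuracies. With hypothetical per-run scores (placeholders chosen to be consistent with the mean and best accuracies reported in this section, not the actual measured values), the computation is:

```python
# Hypothetical prediction accuracies over ten full training runs.
runs = [0.861, 0.870, 0.859, 0.893, 0.866,
        0.858, 0.864, 0.861, 0.855, 0.853]

mean_acc = sum(runs) / len(runs)  # mean over repeated trainings
best_acc = max(runs)              # best single run
print(f"mean={mean_acc:.3f} best={best_acc:.3f}")  # mean=0.864 best=0.893
```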
Figure 10.1: The error of the training set of the artificial neural network
On the prediction set, after ten cycles of full training, the neural net had an average
accuracy of 86.4 percent for appropriate music selection, with the prediction set
achieving a highest accuracy of 89.3 percent. Figures 10.1 and 10.2 show the training
set's error (MSE) and accuracy, respectively, across the 220 training epochs of the
highest-accuracy run. The neural network learned at a decent pace until the final
cycles, when it stopped improving.
Figure 10.2: Accuracy of the training set of the artificial neural network.
Two artificial neural network topologies for EEG signal categorization were designed
and evaluated. For the feed-forward ANN, input signals were supplied in time-series
format, while for the CNN, input signals were fed in picture format. The first technique
clearly has difficulty generalizing what it has learned; the second method generalized
rather well, with an accuracy of 99.0 percent in the best run.
11.1 Conclusion
Emotion detection is performed using an EEG headset; the system
demonstrates that music and movies can be recommended according to the emotion
of handicapped people. Music and movies have a soothing effect on human
temperament and will therefore also reduce the stress level of handicapped people.
This study examines how machine-learning classification methods take windowed
data from four points on the scalp and quantify that data into an emotional
representation of what the respondent felt at the time. The comparisons confirm that a
low-resolution, commonly available EEG headband can be effective in categorizing
the psychological response of a participant. The possibilities for this are considerable:
classification techniques with functional value for systems that support real-world
decision making. Recognizing emotional states should enhance engagement,
especially for mental-health programs that contribute to an overall evaluation of
problems and how to solve them.
References
[1] Wei Tao, Chang Li, Rencheng Song, Juan Cheng, Yu Liu, Feng Wan, Xun Chen.
"EEG-based Emotion Recognition via Channel-wise Attention and Self Attention."
IEEE Transactions on Affective Computing, 2020.
[2] MD. Rabiul Islam, Mohammed Ali Moni, MD. Milon Islam, MD. Rashed-Al-
Mahfuz, MD. Saiful Islam, MD.Kamrul Hasan, MD Sabir Hossain, Mohiuddin
Ahmad, Shahadat Uddin, AKM Azad, Salim A. Alyami, MD. Atiqur Rahman Ahad,
and Pietro Leo. "Emotion Recognition From EEG Signal Focusing on Deep Learning
and Shallow Learning Techniques." IEEE Access 2021: Vol 9: 94601-94624
[3] Shih-Hsiung Lee, Tzu-Yu Chen, Yu-Ting Hsien, and Lin-Roung Cao. "A music
recommendation system for depression therapy based on EEG." 2020 IEEE
International Conference on Consumer Electronics-Taiwan (ICCE-Taiwan). IEEE,
2020.
[4] Esha Dutta, Ananya Bothra, Theodora Chaspari, Thomas Ioerger, Bobak J.
Mortazavi. "Reinforcement Learning using EEG signals for Therapeutic Use of Music
in Emotion Management." 2020 IEEE: 5553-5556
[5] Meetkumar Patel, Dr. Sharmishta Desai. "Entertainment suggestion for paralyzed
and bed rest patient using EEG Signals." International Engineering Research Journal
(IERJ) 2020: 327-330.
[6] Santamaria-Granados, Luz, Juan Francisco Mendoza-Moreno, and
Gustavo Ramirez-Gonzalez. "Tourist recommender systems based on emotion
recognition—a scientometric review." Future Internet 13.1 (2020): 2.
[7] Cunha, Joana, et al. "The Effect of Music on Brain Activity an Emotional
State." Engineering Proceedings 7.1 (2021): 19.
[8] Xu, Baoguo, et al. "Continuous Hybrid BCI Control for Robotic Arm Using
Noninvasive Electroencephalogram, Computer Vision, and Eye Tracking."
Mathematics 10.4 (2022): 618.
[9] Jaafar, Sirwan Tofiq, and Mokhtar Mohammadi. "Epileptic seizure detection using
deep learning approach." UHD Journal of Science and Technology 3.2 (2019): 41-50.
[10] Kang, Dongwann, and Sanghyun Seo. "Personalized smart home audio system
with automatic music selection based on emotion." Multimedia Tools and Applications
78.3 (2019): 3267-3276.
[11] Dellaert, F.; Polzin, T.; Waibel, A. Recognizing emotion in speech. In Proceeding
of the Fourth International Conference on Spoken Language Processing, ICSLP’96,
Philadelphia, PA, USA, 3–6 October 1996; Volume 3, pp. 1970–1973.
[12] Mustaqeem; Kwon, S. A CNN-Assisted Enhanced Audio Signal Processing for
Speech Emotion Recognition. Sensors 2020, 20, 183.
[13] Zhao, J.; Mao, X.; Chen, L. Speech emotion recognition using deep 1D & 2D
CNN LSTM networks. Biomed. Signal Process. Control. 2019, 47, 312–323.
[14] Kalsum, T.; Anwar, S.M.; Majid, M.; Khan, B.; Ali, S.M. Emotion recognition
from facial expressions using hybrid feature descriptors. IET Image Process. 2018, 12,
1004–1012.
[15] Qayyum, H.; Majid, M.; Anwar, S.M.; Khan, B. Facial Expression
Recognition Using Stationary Wavelet Transform Features. Math. Probl. Eng. 2017,
2017.