IEEE Conference Template

The document discusses several approaches for emotion-based music recommendation using deep learning techniques. It proposes a system that uses a ResNet50v2 model trained on the fer2013 dataset to predict a user's emotion from their facial expression in an image, and recommends songs from the Spotify dataset that match their mood. The system aims to automatically select music based on a user's emotions to improve their listening experience.

Uploaded by

Raghu B

Emotion-Based Music Recommendation Using ResNet50v2

Sheetal Lamani, Sakshi Prashant Heblikar, Ashok Donkanavar, Raghu B


Department of Electronics and Communication Engineering, KLE Technological University
Email: [email protected], [email protected], [email protected]

Abstract—cont.... by sakshi

I. INTRODUCTION

The use of deep learning models for emotion-based music recommendation has gained significant attention in recent years. These models aim to personalize music recommendations based on the user's current mood or emotion, which can enhance their listening experience. In this project, we have utilized the ResNet50v2 deep learning model to recognize emotions, and the fer2013 dataset to train the model. The fer2013 dataset is a widely used dataset for facial expression recognition, which contains images of faces labeled with different emotions.

Our proposed system utilizes facial expressions to detect a person's mood and recommends appropriate music from the Spotify dataset. By training the ResNet50v2 model on the fer2013 dataset, we can predict the user's emotion from a given image of their face, which can then be used to recommend songs that match their mood and preferences. This system can be particularly useful for individuals who want to listen to music that matches their current emotional state.

The goal of our project is to develop an emotion-based music recommendation system that utilizes deep learning techniques to accurately predict a user's mood and recommend appropriate music. By automating the music selection process based on a user's emotions, our proposed system eliminates the need for manual effort to create playlists and segregate or group songs into different lists. Additionally, this system can be used to improve the listening experience of individuals suffering from depression, as it acts as a mood enhancer by recommending appropriate music.

II. LITERATURE SURVEY

Renuka R et al. [1] suggested a model based upon changes in the various face curvatures and the intensities of the pixels associated with those curvatures. Artificial Neural Networks (ANN) were trained to classify emotions.

Zeng et al. [2] centered on a variety of methods for handling audio and/or visual records of emotional state displays. Happiness, sorrow, fear, anger, disgust, and surprise are among the emotion categories represented by affect. The paper gives a thorough analysis of audio/visual computing techniques.

Parul Tambe et al. [3] suggested an approach that automated user-music player interactions by learning the user's preferences, moods, and activities and recommending songs as a result. The device recorded users' distinct facial expressions to evaluate their emotions and determine the music genre.

Binbin Hu et al. [4] proposed a Markov Decision Process model for music recommendation and treated music recommendation as a playlist recommendation task. They proposed RLWRec, a novel reinforcement learning-based model for exploiting the optimal playlist strategy.

Deger Ayata et al. [5] provide a framework for emotion-based music recommendation that learns a user's emotion using physiological data obtained via wearable sensors. A wearable computing device embedded with specific types of sensors, namely galvanic skin response (GSR) and photoplethysmography (PPG) sensors, is used to classify a user's emotion.

Renata L. Rosa et al. [6] suggested the enhanced Sentiment Metric (eSM), a sentiment intensity-based music recommendation system built on a lexicon-based sentiment metric combined with a user-profile-based correction factor. Sentences posted on social media are used to extract the sentiments of the people, and the music recommendation engine runs on mobile devices using a simple framework that recommends songs based on the intensity of the current user's emotions.

ShanthaShalini et al. [7] have proposed a dynamic mechanism for music recommendations based on human emotions. Songs for each emotion are trained based on each human's listening behavior. Using a combination of feature extraction and machine learning algorithms, the emotion of a real face is recognized. After the mood is determined from the input image, appropriate music for that mood is played to keep the users entertained.

III. METHODOLOGY

5.1 Algorithm Used

A Convolutional Neural Network is a deep learning algorithm that can take in an input image, assign importance to various aspects/objects in the image, and differentiate one from another. CNNs have an input layer, an output layer, and hidden layers. The hidden layers usually consist of convolutional layers, ReLU layers, pooling layers, and fully connected layers. In a convolutional layer, neurons only receive input from a
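The recommendation step described above, in which a predicted emotion is matched against songs from the Spotify dataset, can be illustrated with a minimal sketch. The emotion-to-feature mapping, the song entries, and the function name below are illustrative assumptions for exposition, not values or code from this work; the sketch only assumes that each song carries the Spotify valence and energy audio features in [0, 1].

```python
# Hypothetical sketch: map a predicted fer2013 emotion label to ranges of
# Spotify audio features (valence, energy) and pick matching songs.
# The mapping and the sample songs are illustrative assumptions.

# Each song is (title, valence, energy), with both features in [0, 1].
SONGS = [
    ("Upbeat Track", 0.9, 0.8),
    ("Calm Track", 0.6, 0.2),
    ("Melancholy Track", 0.2, 0.3),
    ("Aggressive Track", 0.3, 0.9),
]

# Assumed mapping from emotion classes to (valence, energy) ranges.
EMOTION_TO_RANGES = {
    "happy":   ((0.7, 1.0), (0.5, 1.0)),
    "neutral": ((0.4, 0.7), (0.0, 0.5)),
    "sad":     ((0.0, 0.4), (0.0, 0.5)),
    "angry":   ((0.0, 0.5), (0.6, 1.0)),
}

def recommend(emotion, songs=SONGS, top_k=2):
    """Return up to top_k song titles whose valence and energy fall
    inside the ranges assumed for the given emotion."""
    (v_lo, v_hi), (e_lo, e_hi) = EMOTION_TO_RANGES[emotion]
    matches = [(title, v, e) for title, v, e in songs
               if v_lo <= v <= v_hi and e_lo <= e <= e_hi]
    # Rank matches by valence so the most positive songs come first.
    matches.sort(key=lambda s: s[1], reverse=True)
    return [title for title, _, _ in matches[:top_k]]

print(recommend("happy"))  # → ['Upbeat Track']
print(recommend("sad"))    # → ['Melancholy Track']
```

In a full pipeline, the emotion label fed to such a function would come from the ResNet50v2 classifier, and the song list from the Spotify dataset rather than an in-memory list.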
