Aspect Based Sentiment Analysis Using Fine-Tuned BERT Model With Deep Context Features
Corresponding Author:
Abraham Rajan
Department of CSE, Research Scholar, Christ University
Bangalore, India
Email: [email protected]
1. INTRODUCTION
Natural language processing (NLP) is a branch of machine learning that gives computers the ability to learn and understand written text and spoken words in much the same way humans do. Sentiment analysis is a technique applied to unstructured text to extract the sentiment it contains [1]. As a branch of natural language processing, it draws mainly on data mining and machine learning and is widely applied in news, politics, and education [2]. The expansion of social media platforms has further increased the demand for sentiment analysis worldwide. The same words can be used in different contexts, and the challenging task of retrieving sentiment information while handling such context-dependent word usage has become feasible thanks to the growth of NLP over the years [3]. The detection of emotions and sentiments expressed in any written or spoken text is also referred to as opinion mining, which is termed sentiment analysis.
Sentiment information is mainly classified as neutral, negative, or positive with respect to a given statement [4]. Sentiment analysis can also help mitigate the psychological risks present on various social media platforms. With its growth, reviews collected from customers about movies, restaurants, merchandise, food, or applications can be analyzed automatically [5]. Classification of a document or text at the document level has limited practical applicability, whereas emotional classification at the aspect level has far more real-world applications [6]. The volume of subjective information flowing through the web and social media has a massive impact that can lead to serious consequences. On the positive side, valuable information can be retrieved from the reviews and comments posted on the web: business growth, political prediction, education, society, and psychology-related medical fields all create the need for this subjective information to be detected and segregated automatically [7]. The e-commerce platforms emerging from digitalization in every field also drive the need for this methodology to be developed and improved over time. Some comments and reviews carry both negative and positive polarity, and not every review is negative or positive; reviews can also be neutral. Hence, the analysis must be performed at the aspect level [8], and several studies have been carried out to analyze and classify sentiment based on aspects.
Figure 1 shows the general framework for aspect-based sentiment analysis.
Figure 1 shows the general framework of aspect-based sentiment analysis, which comprises five modules. In the first module, the dataset is designed; in this case, review data is selected along with targeted aspects. The second module is the pre-processing phase; the third module identifies the targeted aspects in the given sentences or documents; the fourth module performs the sentiment analysis; and the fifth module evaluates and classifies the sentiment into the categories positive, negative, or neutral. Moreover, bidirectional encoder representations from transformers (BERT) [9] has been one of the most successful adoptions in NLP for aspect-based sentiment analysis. However, despite the effectiveness of the BERT model, aspect-based sentiment analysis remains a major challenge in real-time scenarios for three major reasons. First, the enormous growth of social media data creates substantial barriers, as adapting aspect-level sentiment analysis to a new domain is difficult with limited labelled data. Second, existing BERT approaches use a uniform model across domains, with aspects such as "appearance" and "performance" for the laptop dataset and "service", "food", and "price" for the restaurant domain. Third, the role of contextual information has received little attention. The BERT model is designed for pre-training deep bidirectional representations from unlabeled text by jointly conditioning on left and right context in all layers; thus, the BERT model can be fine-tuned by adding an additional layer for a wide range of tasks [10].
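Fine-tuning here amounts to adding a task-specific layer on top of the pre-trained encoder's pooled representation. Below is a minimal sketch of such a three-way sentiment head over a pooled [CLS]-style vector; the class name, hidden size, and weight initialization are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(z - z.max())
    return e / e.sum()

class SentimentHead:
    """Toy classification layer added on top of an encoder's pooled
    [CLS] vector, as when fine-tuning BERT for 3-way polarity
    (negative / neutral / positive). Dimensions are illustrative."""
    def __init__(self, hidden=8, classes=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.02, (classes, hidden))  # learned in fine-tuning
        self.b = np.zeros(classes)
    def predict(self, pooled):
        # Affine map + softmax -> class probabilities.
        return softmax(self.W @ pooled + self.b)

head = SentimentHead()
probs = head.predict(np.ones(8))  # probabilities over the 3 polarities
print(probs)
```

In practice this head and the encoder weights are trained jointly on the labelled aspect-level data, which is what distinguishes fine-tuning from feature extraction with a frozen encoder.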
- Motivation and contribution
Aspect-based sentiment analysis is a fundamental task in SA and is divided into two categories, i.e., aspect extraction and aspect classification. It refers to the identification of opinions or feelings about a particular entity. The rapid deployment of neural networks in recent years has driven great growth in deep learning models, and the BERT model has proven able to capture the features of a particular word across various contexts. However, selecting an ideal number of parameters is important for high accuracy; hence this research work proposes a fine-tuned BERT model for sentiment classification. The contributions of this research work are highlighted below.
- This research work utilizes the BERT model and proposes the DC-BERT model for aspect-based sentiment analysis. The DC-BERT model comprises a fine-tuned BERT model, which improves on the traditional BERT model, and introduces a deep context feature layer combined with the DC-BERT model.
- The DC-BERT model is designed to extract two distinctive customized features: a deep understanding of context based on words and a general understanding of the sentence.
- Further, deep context features are adopted to understand the context of targeted aspects; a concatenation layer combines the deep features and normal features to enhance accuracy.
- The DC-BERT model is evaluated on the customer review datasets of laptops and restaurants from SemEval 2014 task 4 using precision, recall, accuracy, and macro-F1 score; a comparative analysis is also carried out on accuracy and macro-F1 score.
This research work is organized as follows. The first section discusses the background of sentiment analysis and aspect-level sentiment information with its feature extraction process, along with the motivation and contribution of this work. The second section reviews existing methodologies, the techniques they apply, and their shortcomings. The third section focuses on the development of the model for feature extraction and the network. The fourth section presents the results obtained from this study, and the paper ends with a conclusion stating the outcome of this research.
2. LITERATURE SURVEY
In recent years, several mechanisms have been introduced for aspect-based sentiment analysis (ABSA) tasks; in general, these methods are categorized into traditional machine learning approaches and neural network-based approaches. This section discusses related work on aspect-based sentiment analysis and classification. The traditional approach to aspect classification is mainly based on feature engineering (FE), in which a hefty amount of time is spent gathering and analyzing data, features are then designed based on dataset characteristics, and lexicons are constructed. In the traditional approach it is difficult to design features manually, and a change in dataset degrades metric performance; hence neural network-based approaches are used to capture features without feature engineering. In [10], sentiment analysis is performed at the aspect level using a BERT model modified to predict sentiment polarities; beyond polarity, this model also provides extra contextual information. In [11], the methodologies applied to sentiment analysis are discussed along with sarcasm analysis, aspect-level sentiment analysis, dialogue generation, and bias in sentiment analysis systems. In [12], a convolutional neural network (CNN) is combined with a bidirectional long short-term memory (Bi-LSTM) model for analyzing sentiment information from predefined structured datasets, with a focus on aspect-level sentiment information. The work in [13] focuses on the use of recurrent models for sentiment analysis, since analyzing word sequences exploits the information carried by sentiment labels. In [14], a graph neural network is used for sentiment classification over syntactic dependency information: the textual information is represented as a graphical tree, and textual similarities are plotted in a dependency graph network. In [15], segmentation of the text, a basic task of natural language processing, is performed first, based on the document used as input to the model; the segmentation can proceed from document to sentence or from sentence to sequence, after which a recurrent neural network performs sentiment analysis on the segmented text at the sentence level. In [16], sentiment analysis uses a lexicon-based convolutional neural network, where information is retrieved using sample sequence data of the system, termed a lexicon. In [17], an adaptive transfer network is used for aspect-level sentiment analysis; this model focuses on the relationships among multiple domains.
In [18], sentiment analysis is performed using a sentiment dictionary constructed to include different categories of sentiment words, and a Bayesian classifier determines the field of polysemic sentiment words. In [19], a convolutional neural network is combined with bidirectional gated recurrent units (GRU) for sentiment analysis; the combined model is used to extract the sentiment features of contexts. Previous works treat targeted aspects as auxiliary or independent information, which not only misses the context information of aspects but also restricts metric performance such as accuracy and macro-F1 score; hence, the research gap lies in obtaining the context information of targeted aspects. Thus, this research work introduces deep context information alongside the BERT model to enhance model performance.
3. PROPOSED METHODOLOGY
Aspect-SC and aspect-SA are considered fine-grained NLP tasks that aim to predict the sentiment polarity of given targeted aspects in particular sentences; the BERT model has proven to be one of the most successful models for NLP-based tasks. BERT is a neural network-based mechanism for NLP with two steps, i.e., pre-training and fine-tuning. In pre-training, the model is trained on unlabeled data over pre-training tasks; in fine-tuning, it is trained on labelled data starting from the pre-trained parameters. However, BERT alone fails to achieve high-accuracy sentiment polarity detection because it fails to understand context features in depth. Hence, this research work proposes a fine-tuned BERT model with a deep context feature layer, known as DC-BERT, to enhance the metrics. Figure 2 shows the DC-BERT model.
Figure 2 shows the implemented design of the DC-BERT model. It includes two distinctive embedding layers for the two customized feature extractions discussed: the first customized feature provides a deep focus on words, and the other a general focus on words or the sentence. In the first branch, the GloVe model is adopted for embedding, which tends to enhance performance through its learning process. In the other embedding layer, feature extraction is carried out along with a fine-tuned BERT layer; the customized features and the introduced deep context layers are concatenated in the interaction layer to achieve high performance.
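The fusion step described above can be sketched numerically: two branch outputs are concatenated and passed through an affine interaction encoding. All dimensions, names, and the random stand-in vectors below are illustrative assumptions, not the paper's actual layer sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the two branch outputs: a deep-context vector from the
# fine-tuned BERT branch and a general sentence vector from the
# GloVe-style embedding branch (dimensions are illustrative).
q_deep = rng.normal(size=6)
q_general = rng.normal(size=4)

# Interaction layer: concatenate the two feature vectors, then apply a
# learned affine encoding, mirroring Q_encode = N . [Q_deep; Q_general] + c.
q_cat = np.concatenate([q_deep, q_general])
N = rng.normal(0.0, 0.1, (5, q_cat.size))  # learned projection (toy init)
c = np.zeros(5)
q_encode = N @ q_cat + c
print(q_encode.shape)  # fused representation fed to attention and pooling
```

The design choice is that neither branch alone carries both the aspect-focused and the sentence-level signal; concatenation lets the downstream attention layers weight the two views jointly.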
Q^{Z}_{FTB} = \mathrm{Fine\_tune\_BERT}^{Z}(W^{Z}) \quad (1)

Q^{Y}_{FTB} = \mathrm{Fine\_tune\_BERT}^{Y}(W^{Y}) \quad (2)

S, M, X = f_x(Z_{DPA}) \quad (4)

f_x(Z_{DPA}) = \begin{cases} S = Z_{DPA} \cdot N^{s} \\ M = Z_{DPA} \cdot N^{m} \\ X = Z_{DPA} \cdot N^{x} \end{cases} \quad (5)

Q^{Z}_{Deep\_attention} = \mathrm{Deep\_attention}^{Z}(Q^{Z}_{\iota}) \quad (6)

Q^{Y}_{Deep\_attention} = \mathrm{Deep\_attention}^{Y}(Q^{Y}_{\iota}) \quad (7)

Q^{Z}_{\zeta} = \mathrm{Deep\_attention}^{Z}(Q^{Z}_{Deep\_attention}), \quad Q^{Y}_{\zeta} = \mathrm{Deep\_attention}^{Y}(Q^{Y}_{Deep\_attention}) \quad (8)
words; with the DCF layer, only the relevant words are masked, and the correlation between the aspect and the less relevant words is stored at the output. At first, the deep feature is set to null vectors, and another deep attention is utilized to understand the context features; this design reduces the influence of less relevant context while preserving the correlation between aspects and less relevant context.
X_k = \begin{cases} G, & CIT_k \le \mathrm{threshold} \\ P, & CIT_k > \mathrm{threshold} \end{cases} \quad (10)
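The per-token selection in (10) amounts to thresholding a context-importance score into one of two mask values. A minimal sketch, assuming the mask values G and P are scalars (0 for down-weighted tokens, 1 for kept tokens); the function name and defaults are hypothetical.

```python
import numpy as np

def build_mask(cit_scores, threshold, g=0.0, p=1.0):
    """Per-token mask following X_k = {G if CIT_k <= threshold,
    P otherwise}: tokens whose context-importance score falls at or
    below the threshold receive g (down-weighted), the rest receive p
    (kept). The scalar values for G and P are illustrative."""
    cit_scores = np.asarray(cit_scores, dtype=float)
    return np.where(cit_scores <= threshold, g, p)

mask = build_mask([0.1, 0.8, 0.4, 0.9], threshold=0.5)
print(mask)  # [0. 1. 0. 1.]
```

Stacking such per-token vectors row-wise is one natural way to form the mask matrix M that the text says is formulated in (11).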
For instance, Q^{Y}_{\iota} is the output of a deep feature extractor. DCF focuses on the particular context by designing mask vectors X_k; thus the mask matrix M is formulated as mentioned in (11). With DCF, the output representation is given as (12). Further, the output representation of contextual features is attained through the output of the DCF layer and computed as (13). Apart from the deep features, the output representation of the normal features is given as (14).
Q^{Z}_{DCF} = Q^{Z}_{deep\_attention} \cdot N \quad (12)

Q^{Z} = \mathrm{CIT}(Q^{Z}_{CIT}) \quad (13)

Q^{Y} = \mathrm{CIT}(Q^{Y}_{DCF}) \quad (14)

Q^{YZ} = [Q^{Z}; Q^{Y}]

Q^{YZ}_{encode} = N^{YZ} \cdot Q^{YZ} + c^{prim\_sec}

Q^{YZ}_{IL} = \mathrm{deep\_attention}(Q^{YZ}_{encode}) \quad (15)

Q^{YZ}_{pooling} = \mathrm{pooling}(Q^{YZ}_{IL}) \quad (16)

A = \mathrm{NEF}(Q^{YZ}_{pooling}) = \frac{f^{\,Q^{YZ}_{pooling}}}{\sum_{m=1}^{e} f^{\,Q^{YZ}_{pooling}}} \quad (17)
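Equations (16) and (17) reduce the interaction-layer output to a pooled vector and then exponentiate-and-normalize it into class scores, i.e., a softmax. A minimal numpy sketch, assuming mean pooling and illustrative shapes; neither choice is confirmed by the paper.

```python
import numpy as np

def pool_and_classify(token_states):
    """Mean-pool token representations (a stand-in for the pooling in
    eq. (16)), then exponentiate-and-normalize, as in
    A = f^Q / sum_m f^Q in eq. (17), to obtain class scores."""
    pooled = token_states.mean(axis=0)      # pooling step
    e = np.exp(pooled - pooled.max())       # numerically stable exponent
    return e / e.sum()                      # normalization over classes

# Two token states over three output dimensions (illustrative numbers).
states = np.array([[1.0, 0.2, -0.5],
                   [0.8, 0.1, -0.3]])
probs = pool_and_classify(states)
print(probs)
```

The normalization guarantees the outputs sum to one, so the largest component can be read directly as the predicted sentiment class.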
4. PERFORMANCE EVALUATION
Sentiment analysis has drawn attention due to its broad application, and the BERT model has proven able to analyze a sentence bidirectionally. This section evaluates the proposed model, which is implemented in Python using Spyder as the IDE. The model is evaluated on a Windows 10 system with 8 GB of RAM and a compute unified device architecture (CUDA)-enabled NVIDIA graphics card with 2 GB of memory. To evaluate the model, accuracy and macro-F1 score are considered as evaluation parameters, and a comparative analysis is carried out with the existing BERT model [20] to prove the model's efficiency.
4.2. Metrics
To evaluate DC-BERT, four distinctive metrics are considered: precision, recall, accuracy, and macro-F1 score. These metrics are computed from four parameters of the confusion matrix: true positives, true negatives, false positives, and false negatives. Using the same confusion matrix, the metrics are computed as follows.
i) Accuracy: the ratio of correctly classified sentiments to all classified sentiments.
ii) Precision: the ratio of true positives to the sum of true positives and false positives.
iii) Recall: the ratio of true positives to the sum of true positives and false negatives.
iv) F1-score: the harmonic mean of recall and precision.
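The four definitions above can be computed directly from one class's confusion counts; macro-F1 is then the unweighted average of the per-class F1 over the three polarities. A small sketch with made-up counts for illustration:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from one class's confusion
    counts (true/false positives and negatives). Macro-F1 would average
    the per-class F1 over the negative/neutral/positive classes."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for one sentiment class.
acc, prec, rec, f1 = metrics(tp=40, tn=45, fp=10, fn=5)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```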
- Deep mask memory network based on semantic dependency and context moment (DMNN-SDCM) [30]: this technique is mainly based on a memory network and introduces a deep mask memory network along with a context moment, which provides background knowledge of the target aspects.
- Bidirectional encoder representations from transformers post-training (BERT-PT) [31]: this technique utilizes machine-reading comprehension and introduces review reading comprehension (RRC); a post-training approach is used to improve the aspect knowledge.
- Attentional encoder network based on bidirectional encoder representations from transformers (AEN-BERT) [32]: this model uses an attention mechanism for modelling targets and context on the trained BERT approach; it highlights the issue of regularization and label smoothing and aims to minimize fuzzy label consistency.
As shown in Figure 4, the DC-BERT model achieves a 7.73% improvement in accuracy and a 12.34% improvement in macro-F1 score.
Although accuracy and macro-F1 give a broad indication of classification performance, other metrics such as precision and recall have been ignored by various leading research works. Considering Table 2, Table 3, Table 4, and Table 5, it is clear that the LSTM-based models achieve a satisfactory accuracy of 60 to 70%, and the other BERT-based models achieve a good accuracy of around 80%. In comparison with all these models, the proposed DC-BERT model outperforms the others.
5. CONCLUSION
Aspect-based sentiment analysis is considered a fine-grained task that analyzes user sentiment polarity towards particular aspects; it provides valuable knowledge for both consumers and businesses. BERT has been proven to perform well on several natural language processing (NLP) tasks, including sentiment analysis and classification. This research work introduces the DC-BERT model, which improves the BERT model through a fine-tuned BERT layer; a deep context feature is further introduced to enhance model performance. The DC-BERT model extracts customized features for a deep and better understanding of context based on targeted aspects; these customized features are concatenated in interactive layers for the output representation. The DC-BERT model is optimized to enhance the metrics on the given dataset and is evaluated on the review datasets of laptops and restaurants using the accuracy and macro-F1 score metrics. Comparative analysis of the DC-BERT model with the existing BERT model and other baseline methods shows that the proposed model achieves a clear improvement in accuracy and macro-F1 score; hence, the DC-BERT model achieves the highest metrics among the models compared at the time this research was carried out, which provides great scope for future sentiment analysis research. DC-BERT is a fine-tuned model that improves the metrics on a particular dataset; however, in a real scenario, a given sentence can be twisted, and much of it could be sarcastic. Hence, future directions of our work will concentrate on considering more datasets, including sarcastic comments.
REFERENCES
[1] Y. Wang, G. Huang, J. Li, H. Li, Y. Zhou, and H. Jiang, “Refined global word embeddings based on sentiment concept for sentiment
analysis,” IEEE Access, vol. 9, pp. 37075–37085, 2021, doi: 10.1109/ACCESS.2021.3062654.
[2] G. Zhai, Y. Yang, H. Wang, and S. Du, “Multi-attention fusion modeling for sentiment analysis of educational big data,” Big Data
Min. Anal., vol. 3, no. 4, pp. 311–319, Dec. 2020, doi: 10.26599/BDMA.2020.9020024.
[3] W. Ali, Y. Yang, X. Qiu, Y. Ke, and Y. Wang, “Aspect-level sentiment analysis based on bidirectional-GRU in SIoT,” IEEE Access,
vol. 9, pp. 69938–69950, 2021, doi: 10.1109/ACCESS.2021.3078114.
[4] A. Nazir, Y. Rao, L. Wu, and L. Sun, “Issues and challenges of aspect-based sentiment analysis: a comprehensive survey,” IEEE
Trans. Affect. Comput., vol. 13, no. 2, pp. 845–863, Apr. 2022, doi: 10.1109/TAFFC.2020.2970399.
[5] K. C. Allen, A. Davis, and T. Krishnamurti, “Indirect identification of perinatal psychosocial risks from natural language,” IEEE
Trans. Affect. Comput., vol. 14, no. 2, pp. 1506–1519, Apr. 2023, doi: 10.1109/TAFFC.2021.3079282.
[6] T. Wang, K. Lu, K. P. Chow, and Q. Zhu, “COVID-19 Sensing: negative sentiment analysis on social media in China via BERT
model,” IEEE Access, vol. 8, pp. 138162–138169, 2020, doi: 10.1109/ACCESS.2020.3012595.
[7] L. Canales, W. Daelemans, E. Boldrini, and P. Martinez-Barco, “EmoLabel: Semi-automatic methodology for emotion annotation
of social media text,” IEEE Trans. Affect. Comput., vol. 13, no. 2, pp. 579–591, Apr. 2022, doi: 10.1109/TAFFC.2019.2927564.
[8] N. Zhao, H. Gao, X. Wen, and H. Li, “Combination of convolutional neural network and gated recurrent unit for aspect-based
sentiment analysis,” IEEE Access, vol. 9, pp. 15561–15569, 2021, doi: 10.1109/ACCESS.2021.3052937.
[9] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language
understanding,” Comput. Sci. Comput. Lang., vol. 1, 2018, doi: 10.48550/arXiv.1810.04805.
[10] X. Li et al., “Enhancing BERT representation with context-aware embedding for aspect-based sentiment analysis,” IEEE Access,
vol. 8, pp. 46868–46876, 2020, doi: 10.1109/ACCESS.2020.2978511.
[11] S. Poria, D. Hazarika, N. Majumder, and R. Mihalcea, “Beneath the tip of the iceberg: current challenges and new directions in
sentiment analysis research,” IEEE Trans. Affect. Comput., vol. 14, no. 1, pp. 108–132, Jan. 2023, doi:
10.1109/TAFFC.2020.3038167.
[12] J. Zhou, S. Jin, and X. Huang, “ADeCNN: An improved model for aspect-level sentiment analysis based on deformable CNN and
attention,” IEEE Access, vol. 8, pp. 132970–132979, 2020, doi: 10.1109/ACCESS.2020.3010802.
[13] C. R. Aydin and T. Gungor, “Combination of recursive and recurrent neural networks for aspect-based sentiment analysis using
inter-aspect relations,” IEEE Access, vol. 8, pp. 77820–77832, 2020, doi: 10.1109/ACCESS.2020.2990306.
[14] X. Bai, P. Liu, and Y. Zhang, “Investigating typed syntactic dependencies for targeted sentiment classification using graph attention
neural network,” IEEE/ACM Trans. Audio, Speech, Lang. Process., vol. 29, pp. 503–514, 2021, doi:
10.1109/TASLP.2020.3042009.
[15] J. Li, B. Chiu, S. Shang, and L. Shao, “Neural text segmentation and its application to sentiment analysis,” IEEE Trans. Knowl.
Data Eng., vol. 34, no. 2, pp. 828–842, Feb. 2022, doi: 10.1109/TKDE.2020.2983360.
[16] N. K. Thinh, C. H. Nga, Y.-S. Lee, M.-L. Wu, P.-C. Chang, and J.-C. Wang, “Sentiment analysis using residual learning with
simplified CNN Extractor,” in 2019 IEEE International Symposium on Multimedia (ISM), IEEE, Dec. 2019, pp. 335–3353. doi:
10.1109/ISM46123.2019.00075.
[17] K. Zhang et al., “EATN: An efficient adaptive transfer network for aspect-level sentiment analysis,” IEEE Trans. Knowl. Data
Eng., vol. 35, no. 1, pp. 377–389, 2021, doi: 10.1109/TKDE.2021.3075238.
[18] G. Xu, Z. Yu, H. Yao, F. Li, Y. Meng, and X. Wu, “Chinese text sentiment analysis based on extended sentiment dictionary,” IEEE
Access, vol. 7, pp. 43749–43762, 2019, doi: 10.1109/ACCESS.2019.2907772.
[19] L. Yang, Y. Li, J. Wang, and R. S. Sherratt, “Sentiment analysis for E-Commerce product reviews in Chinese based on sentiment
lexicon and deep learning,” IEEE Access, vol. 8, pp. 23522–23530, 2020, doi: 10.1109/ACCESS.2020.2969854.
[20] Y. Chen, L. Kong, Y. Wang, and D. Kong, “Multi-grained attention representation with ALBERT for aspect-level sentiment
classification,” IEEE Access, vol. 9, pp. 106703–106713, 2021, doi: 10.1109/ACCESS.2021.3100299.
[21] M. Pontiki, D. Galanis, J. Pavlopoulos, H. Papageorgiou, I. Androutsopoulos, and S. Manandhar, “SemEval-2014 Task 4: aspect
based sentiment analysis,” in Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Stroudsburg,
PA, USA: Association for Computational Linguistics, 2014, pp. 27–35. doi: 10.3115/v1/S14-2004.
[22] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, Nov. 1997, doi:
10.1162/neco.1997.9.8.1735.
[23] D. Tang, B. Qi, X. Feng, and T. Liu, “Effective LSTMs for target-dependent sentiment classification,” Comput. Sci. Comput. Lang.,
BIOGRAPHIES OF AUTHORS