[Paper Extensive Reading 16] Using BERT for aspect-based sentiment analysis by constructing auxiliary sentences

A summary post: paper reading notes

Paper link: "Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence"

1. Summary

Aspect-based sentiment analysis (ABSA) is a challenging subtask of sentiment analysis (SA) that aims to identify the polarity of fine-grained opinions toward specific aspects. The paper constructs an auxiliary sentence from the aspect, transforming ABSA into a sentence-pair classification task in the style of question answering (QA) or natural language inference (NLI). Fine-tuning the pre-trained BERT model on these sentence pairs yields state-of-the-art results on the SentiHood and SemEval-2014 Task 4 datasets.

2. Conclusion

An auxiliary sentence is constructed to transform (T)ABSA from a single-sentence classification task into a sentence-pair classification task. Fine-tuning the pre-trained BERT model on this sentence-pair task achieves state-of-the-art results. Comparing BERT fine-tuning on single-sentence versus sentence-pair classification shows the advantage of the sentence-pair formulation and verifies the effectiveness of the conversion method.

Possible research directions:

  • Apply this conversion method to other similar tasks.

3. TABSA

TABSA: targeted aspect-based sentiment analysis

Aims to identify the polarity of fine-grained opinions on specific aspects associated with a given target.

  • The first step is to identify the aspects associated with each target;
  • The second step is to resolve the polarity of each aspect with respect to a given target.
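As a concrete illustration of the two steps, the TABSA output for one sentence might look like this (the sentence and labels below are invented for illustration, not taken from the datasets):

```python
# Hypothetical input sentence mentioning two targets.
sentence = "location-1 is safe but location-2 is dull"

# Step 1: identify which aspects are associated with each target.
detected_aspects = {
    "location-1": ["safety"],
    "location-2": ["general"],
}

# Step 2: resolve the polarity of each (target, aspect) pair.
polarities = {
    ("location-1", "safety"): "positive",
    ("location-2", "general"): "negative",
}

print(polarities[("location-1", "safety")])  # positive
```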

4. Converting the TABSA task into a sentence-pair task

Sentences for QA-M

  • We want the sentence generated from the target-aspect pair to be a question, with a consistent format. For example, for the target-aspect pair (location-1, safety), the generated sentence is: "what do you think of the safety of location-1?"

Sentences for NLI-M

  • For the NLI task, the constraints on the generated sentence are looser and the form is much simpler. The result is not a grammatical sentence but a simple pseudo-sentence. Taking the (location-1, safety) pair as an example, the auxiliary sentence is: "location-1-safety".
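The two -M constructions above can be sketched as simple template functions (the function names and exact templates are my illustration, not the paper's released code):

```python
def build_qa_m(target: str, aspect: str) -> str:
    # QA-M: phrase the target-aspect pair as a question.
    return f"what do you think of the {aspect} of {target}?"

def build_nli_m(target: str, aspect: str) -> str:
    # NLI-M: a simple pseudo-sentence, not a grammatical one.
    return f"{target}-{aspect}"

print(build_qa_m("location-1", "safety"))   # what do you think of the safety of location-1?
print(build_nli_m("location-1", "safety"))  # location-1-safety
```

Each auxiliary sentence is then paired with the original review sentence as BERT's second segment.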

Sentences for QA-B

  • For QA-B, we add label information to temporarily transform TABSA into a binary classification problem (label ∈ {yes, no}) and obtain a probability distribution. Each target-aspect pair now generates three sequences, such as "the polarity of the aspect safety of location-1 is positive", "the polarity of the aspect safety of location-1 is negative", and "the polarity of the aspect safety of location-1 is none". We use the probability value of yes as the matching score. Among the three generated sequences (positive, negative, and none), we take the class of the sequence with the highest matching score as the predicted class.

Sentences for NLI-B

  • NLI-B differs from QA-B only in that the auxiliary sentences change from questions to pseudo-sentences. The auxiliary sentences are: "location-1-safety-positive", "location-1-safety-negative", "location-1-safety-none".
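A minimal sketch of the -B constructions and the matching-score decision rule described above (the function names, templates, and probability values are illustrative assumptions, not the paper's code):

```python
POLARITIES = ["positive", "negative", "none"]

def build_qa_b(target: str, aspect: str) -> list:
    # QA-B: one labelled sequence per candidate polarity.
    return [f"the polarity of the aspect {aspect} of {target} is {p}"
            for p in POLARITIES]

def build_nli_b(target: str, aspect: str) -> list:
    # NLI-B: the same idea, but as pseudo-sentences.
    return [f"{target}-{aspect}-{p}" for p in POLARITIES]

def predict_polarity(yes_probs: list) -> str:
    # The 'yes' probability of each binary classification is the
    # matching score; the highest-scoring sequence's class wins.
    best = max(range(len(POLARITIES)), key=lambda i: yes_probs[i])
    return POLARITIES[best]

# Hypothetical 'yes' probabilities from the binary classifier,
# one per sequence in POLARITIES order.
print(predict_polarity([0.81, 0.07, 0.12]))  # positive
```

In practice the three sequences for a pair are scored independently by the fine-tuned binary classifier, and only the argmax over their yes-probabilities is kept.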

In short: the QA variants generate questions, the NLI variants generate pseudo-sentences, and the -B variants add label information.


Origin blog.csdn.net/qq_41485273/article/details/114016627