Corpus Analysis and Annotation for Helpful Sentences in Product Reviews


  •  Hana Almagrabi    
  •  Areej Malibari    
  •  John McNaught    

Abstract

For the last two decades, studies on determining the quality of online product reviews have largely been concerned with classifying complete documents into helpful or unhelpful classes using supervised learning methods. As in any supervised machine-learning task, a manually annotated corpus is required to train a model. Corpora annotated for review helpfulness are an important resource for understanding what makes online product reviews helpful and how to rank them by quality. However, most helpfulness corpora are annotated at the document level, i.e., the full review, and little attention has been paid to a deeper analysis of the helpful comments within reviews. In this article, a new annotation scheme is proposed to identify helpful sentences within each product review in the dataset. The annotation scheme, the guidelines, and the inter-annotator agreement scores are presented and discussed. A high level of inter-annotator agreement is obtained, indicating that the annotated corpus is suitable to support subsequent research.
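As an illustration of how sentence-level agreement might be quantified, the sketch below computes Cohen's kappa for two annotators assigning binary helpful/unhelpful labels to the same sentences. The choice of Cohen's kappa, the label names, and the helper cohens_kappa are illustrative assumptions; the article reports its own agreement measure and scores.

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same set of sentences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of sentences given identical labels.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(counts_a) | set(counts_b))
    return (observed - expected) / (1 - expected)

# Hypothetical sentence-level annotations from two annotators.
annotator_1 = ["helpful", "unhelpful", "helpful", "helpful", "unhelpful"]
annotator_2 = ["helpful", "unhelpful", "unhelpful", "helpful", "unhelpful"]
print(cohens_kappa(annotator_1, annotator_2))  # roughly 0.62 for this toy data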



This work is licensed under a Creative Commons Attribution 4.0 License.