Title: Embarrassingly Simple Unsupervised Aspect Extraction
In this talk, he will discuss a new unsupervised model for aspect identification, a subtask of aspect-based sentiment analysis. The motivation for an unsupervised method is that aspects are highly domain-specific, which makes it difficult to reuse models across datasets. For example, an aspect extractor trained on the restaurant domain is unlikely to transfer well to the laptop domain. The model, called Contrastive Attention (CAt), only requires word embeddings and a POS tagger, needs no training, and achieves state-of-the-art unsupervised performance on a corpus of restaurant reviews. He will introduce the model, explain why it works, and discuss some of its strengths and weaknesses, as well as a recent extension of the model.
Paper: https://www.aclweb.org/anthology/2020.acl-main.290.pdf
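For a flavour of how little machinery such an approach needs, here is a minimal sketch of a contrastive-attention-style aspect assignment, based only on the abstract's description (pretrained word embeddings plus POS-tagged nouns, no training). The toy random embeddings, the RBF kernel and its gamma value, and the aspect labels below are illustrative assumptions, not the paper's actual data or hyperparameters; see the paper for the real method and evaluation.

```python
# Sketch: attention over a sentence's words, guided by its nouns, used to
# pick the closest aspect label. Purely illustrative; not the authors' code.
import numpy as np

# Toy "in-domain" word embeddings (in practice: pretrained vectors).
rng = np.random.default_rng(0)
vocab = ["pizza", "crust", "waiter", "was", "great", "the", "food", "service"]
emb = {w: rng.normal(size=50) for w in vocab}

def rbf(x, y, gamma=0.03):
    """RBF kernel similarity between two embedding vectors (assumed choice)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def assign_aspect(sentence_tokens, candidate_nouns, aspect_labels):
    """Attend to sentence words via their similarity to the nouns, then
    return the aspect label closest to the attention-weighted summary."""
    weights = np.array([
        sum(rbf(emb[t], emb[n]) for n in candidate_nouns)
        for t in sentence_tokens
    ])
    weights /= weights.sum()
    summary = sum(w * emb[t] for w, t in zip(weights, sentence_tokens))
    return max(aspect_labels, key=lambda a: cos(summary, emb[a]))

tokens = ["the", "pizza", "crust", "was", "great"]
nouns = ["pizza", "crust"]  # in practice produced by a POS tagger
print(assign_aspect(tokens, nouns, aspect_labels=["food", "service"]))
```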
Bio: Stéphan Tulkens recently graduated from the University of Antwerp with a PhD on the topic of orthographic representation and word recognition. Aside from word reading, his research interests include Named Entity Recognition, Word Sense Disambiguation, Aspect-Based Sentiment Analysis, and weakly supervised or unsupervised learning. He currently works as a Machine Learning Engineer at Slimmer A.I., a Groningen-based company, where he builds AI products to reduce overhead in the scientific publishing industry.