The figure below shows the average monolingual performance and the dimensionality of the different sentence embedding models that we tested.
Average word embeddings are a common baseline for more sophisticated sentence embedding techniques. However, they typically fall short of the performance of more complex models such as InferSent. Here, we generalize the concept of average word embeddings to power mean word embeddings. We show that the concatenation of different types of power mean word embeddings considerably closes the gap to state-of-the-art methods monolingually and substantially outperforms these more complex techniques cross-lingually. In addition, our proposed method outperforms several recently proposed baselines such as SIF and Sent2Vec by a solid margin, thus constituting a much harder-to-beat monolingual baseline. Our data and code are publicly available.
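To make the construction concrete, below is a minimal sketch of concatenated power mean embeddings in NumPy. The power mean with exponent p is the element-wise value ((1/n) Σᵢ wᵢᵖ)^(1/p) over the n word vectors of a sentence; p = 1 recovers the plain average, while p = +∞ and p = -∞ give the element-wise maximum and minimum. The exponent set used here (min, mean, max) matches one configuration discussed in the paper, but the helper names and default arguments are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def power_mean(word_vectors, p):
    """Element-wise power mean ((1/n) * sum_i w_i^p)^(1/p) of n word vectors.

    Special cases: p = 1 is the arithmetic mean, p = +inf the element-wise
    maximum, and p = -inf the element-wise minimum.
    """
    x = np.stack(word_vectors)  # shape: (n_words, dim)
    if p == np.inf:
        return x.max(axis=0)
    if p == -np.inf:
        return x.min(axis=0)
    m = np.power(x, p).mean(axis=0)
    # Signed root keeps odd-integer exponents (e.g. p = 3) well-defined
    # for negative embedding components.
    return np.sign(m) * np.abs(m) ** (1.0 / p)

def concat_power_means(word_vectors, ps=(-np.inf, 1, np.inf)):
    """Concatenate several power means into one sentence embedding.

    With k exponents and d-dimensional word vectors, the result has k * d
    dimensions; concatenating across several embedding spaces, as the paper
    does, multiplies the dimensionality further.
    """
    return np.concatenate([power_mean(word_vectors, p) for p in ps])

# Example: three 4-dimensional word vectors -> one 12-dimensional sentence vector.
words = [np.random.randn(4) for _ in range(3)]
print(concat_power_means(words).shape)  # (12,)
```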
@article{rueckle-etal-2018-pmeans,
  title   = {Concatenated Power Mean Word Embeddings as Universal Cross-Lingual Sentence Representations},
  author  = {R{\"u}ckl{\'e}, Andreas and Eger, Steffen and Peyrard, Maxime and Gurevych, Iryna},
  journal = {arXiv},
  year    = {2018},
  url     = {https://arxiv.org/abs/1803.01400}
}