Posts about cross-lingual learning, mainly using cross-lingual word embeddings.
Word embeddings in 2017: Trends and future directions
Word embeddings are an integral part of current NLP models, but no approach has yet superseded the original word2vec. This post focuses on the deficiencies of word embeddings and how recent approaches have tried to address them.
Highlights of EMNLP 2017: Exciting datasets, return of the clusters, and more
This post discusses highlights of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), including exciting datasets, new cluster-based methods, distant supervision, data selection, character-level models, and more.