natural language processing
Posts about all areas of natural language processing.
Challenges and Opportunities in NLP Benchmarking
Over the last few years, models in NLP have become much more powerful, driven by advances in transfer learning. A consequence of this drastic increase in performance is that existing benchmarks have been left behind. Recent models "have outpaced the benchmarks to test for them" (AI Index Report 2021), quickly reaching…
ACL 2018 Highlights: Understanding Representations and Evaluation in More Challenging Settings
This post discusses highlights of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018). It focuses on understanding representations and evaluating in more challenging scenarios.
NLP's ImageNet moment has arrived
Big changes are underway in the world of NLP. The long reign of word vectors as NLP's core representation technique has seen an exciting new line of challengers emerge. These approaches demonstrated that pretrained language models can achieve state-of-the-art results and herald a watershed moment.
Tracking the Progress in Natural Language Processing
Research in ML and NLP is moving at a tremendous pace, which makes it difficult for newcomers to enter the field. To make working with new tasks easier, this post introduces a resource that tracks the progress and state of the art across many tasks in NLP.
Highlights of NAACL-HLT 2018: Generalization, Test-of-time, and Dialogue Systems
This post discusses highlights of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018). It focuses on generalization, the Test-of-Time awards, and dialogue systems.
Word embeddings in 2017: Trends and future directions
Word embeddings are an integral part of current NLP models, but no approach has yet superseded the original word2vec. This post focuses on the deficiencies of word embeddings and how recent approaches have tried to resolve them.
Multi-Task Learning Objectives for Natural Language Processing
Multi-task learning is becoming increasingly popular in NLP, but it is still not well understood which auxiliary tasks are useful. As inspiration, this post gives an overview of the most common auxiliary tasks used for multi-task learning in NLP.
Highlights of EMNLP 2017: Exciting datasets, return of the clusters, and more
This post discusses highlights of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017). These include exciting datasets, new cluster-based methods, distant supervision, data selection, character-level models, and more.