Highlights of EMNLP 2016: Dialogue, deep learning, and more
This post discusses highlights of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016). These include work on reinforcement learning, dialogue, sequence-to-sequence models, semantic parsing, natural language generation, and more.
This post originally appeared on the AYLIEN blog.
I spent the past week in Austin, Texas at EMNLP 2016, the Conference on Empirical Methods in Natural Language Processing.
There were a lot of papers at the conference (179 long papers, 87 short papers, and 9 TACL papers in total) -- too many to read every single one. The entire program can be found here. In the following, I will highlight some trends and papers that caught my eye:
Reinforcement learning: One thing that stood out was that RL seems to be slowly finding its footing in NLP, with more and more people using it to solve complex problems (see the toy policy-gradient sketch after this list):
- Hahn and Keller model human reading;
- Miao and Blunsom summarise sentences;
- Li et al. generate dialogues;
- He et al. predict popular Reddit threads;
- Clark and Manning perform coreference resolution;
- Narasimhan et al. query the web for additional evidence to improve information extraction in one of the two best papers.
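Most of these papers share the same basic recipe: sample an action (a word, a query, a jump to another sentence), observe a task reward, and reinforce the actions that led to high reward. Below is a minimal, hypothetical sketch of that policy-gradient (REINFORCE) update in PyTorch; the state, action space, and reward are toy stand-ins rather than any paper's actual setup.

```python
# Minimal REINFORCE sketch (Williams, 1992); everything here is a toy stand-in.
import torch
import torch.nn as nn

vocab_size, hidden = 50, 32
policy = nn.Sequential(nn.Linear(hidden, vocab_size), nn.LogSoftmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

state = torch.randn(1, hidden)                         # hypothetical encoding of the context
log_probs = policy(state)                              # log pi(action | state)
action = torch.multinomial(log_probs.exp(), 1).item()  # sample an action
reward = 1.0                                           # stand-in for e.g. a ROUGE or task score

# REINFORCE: increase the log-probability of the sampled action,
# scaled by the reward it received.
loss = -reward * log_probs[0, action]
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice, the papers above add variance reduction, task-specific action spaces, and pretraining, but the core update looks like this.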
Dialogue: Dialogue was a focus of the conference, with all three keynote speakers dealing with different aspects of it: Christopher Potts talked about pragmatics and how to reason about the intentions of the conversation partner; Stefanie Tellex concentrated on how to use dialogue for human-robot collaboration; finally, Andreas Stolcke focused on the problem of addressee detection. Among the papers, a few that dealt with dialogue stood out:
- Andreas and Klein model pragmatics in dialogue with neural speakers and listeners;
- Liu et al. show how not to evaluate your dialogue system;
- Ouchi and Tsuboi select addressees and responses in multi-party conversations;
- Wen et al. study diverse architectures for dialogue modelling.
Sequence-to-sequence: Seq2seq models were again front and center. It is not common for a method to have its own session two years after its introduction (Sutskever et al., 2014). While in past years many papers employed seq2seq, e.g. for Neural Machine Translation, several papers this year focused on improving the framework itself (a bare-bones sketch of the vanilla setup follows the list):
- Wiseman and Rush extend seq2seq to learn global sequence scores;
- Yu et al. perform online seq2seq learning;
- Kim and Rush extend Knowledge Distillation (Hinton et al., 2015) to seq2seq learning;
- Kikuchi et al. propose a method to control the output length in seq2seq models.
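For context, here is a bare-bones, illustrative sketch of the vanilla framework those papers build on: an encoder compresses the source sequence into a state, and a decoder is trained with per-step cross-entropy. All names and sizes below are arbitrary; real systems add attention, beam search, and the refinements above.

```python
# Bare-bones seq2seq (Sutskever et al., 2014); sizes and data are illustrative.
import torch
import torch.nn as nn

vocab, emb, hidden = 1000, 64, 128
embed = nn.Embedding(vocab, emb)
encoder = nn.LSTM(emb, hidden, batch_first=True)
decoder = nn.LSTM(emb, hidden, batch_first=True)
out = nn.Linear(hidden, vocab)

src = torch.randint(0, vocab, (1, 7))            # toy source token ids
tgt = torch.randint(0, vocab, (1, 5))            # toy target token ids

_, state = encoder(embed(src))                   # final state summarises the source
dec_out, _ = decoder(embed(tgt[:, :-1]), state)  # teacher-forced decoding
logits = out(dec_out)                            # scores for each next target token

# Local, per-step cross-entropy -- the training signal that global
# sequence scores (Wiseman and Rush) aim to move beyond.
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), tgt[:, 1:].reshape(-1))
```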
Semantic parsing: While seq2seq's use for dialogue modelling was popularised by Vinyals and Le, it is harder to apply to goal-oriented tasks that require an intermediate representation on which to act. Semantic parsing converts a message into a more meaningful representation that can be used by another component of the system. As this technique is useful for sophisticated dialogue systems, it is great to see progress in this area (a toy example of such a representation follows the list):
- Yavuz et al. infer answer types for semantic parsing;
- Krishnamurthy et al. parse to probabilistic programs;
- Kocisky et al. perform semantic parsing with semi-supervised sequential autoencoders.
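To make the "intermediate representation" idea concrete, here is a deliberately tiny, made-up example of what a semantic parser produces: the utterance is mapped to a structured form that a downstream component can execute. Real parsers learn this mapping (e.g. to logical forms or, as in Krishnamurthy et al., probabilistic programs); the toy grammar below is purely illustrative.

```python
# Toy semantic parser: utterance -> executable structure. The grammar is made up.
def parse(utterance: str) -> dict:
    tokens = utterance.lower().rstrip("?").split()
    if tokens[:3] == ["what", "is", "the"] and "capital" in tokens and "of" in tokens:
        entity = " ".join(tokens[tokens.index("of") + 1:])
        return {"predicate": "capital_of", "argument": entity}
    raise ValueError("utterance not covered by this toy grammar")

print(parse("What is the capital of France?"))
# {'predicate': 'capital_of', 'argument': 'france'}
```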
X-to-text (or natural language generation): While mapping from text-to-text with the seq2seq paradigm is still prevalent, EMNLP featured some cool papers on natural language generation from other inputs:
- Kiddon et al. map from a recipe name and ingredients to a recipe by checking items off a checklist;
- Ghazvininejad et al. generate a poem based on a topic;
- Monroe et al. map from a color to its name;
- Koncel-Kedziorski et al. rewrite the theme of an algebra problem (think: boring physics book to Star Wars);
- Lebret et al. generate biographical sentences from Wikipedia fact tables.
Parsing: Parsing and syntax are a mainstay of every NLP conference, and the community seems to particularly appreciate innovative models that push the state of the art in parsing: the ACL '16 outstanding paper by Andor et al. introduced a globally normalized model for parsing, while the best EMNLP '16 paper by Lee et al. combines a global parsing model with a local search over subtrees.
Word embeddings: There were still papers on word embeddings, but they felt less overwhelming than at past EMNLP or ACL conferences, with most methods trying to fix a particular flaw rather than training embeddings for embeddings' sake. Pilehvar and Collier de-conflate senses in word embeddings, while Wieting et al. achieve state-of-the-art results with character-based embeddings.
Sentiment analysis: Sentiment analysis has been popular in recent years (as attested by the introductions of many recent papers on the topic). Sadly, many of the conference papers on sentiment analysis reduce to leveraging the latest deep neural network to beat the previous state of the art without providing additional insights. There are, however, some that break the mold: Teng et al. find an effective way to incorporate sentiment lexicons into a neural network, while Hu et al. incorporate structured knowledge into their sentiment analysis model.
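As a generic illustration of the lexicon idea (and emphatically not Teng et al.'s actual context-sensitive weighting model), one simple baseline is to append aggregate lexicon scores to a learned sentence representation before classification:

```python
# Generic sketch: concatenate lexicon polarity features with learned features.
# This is an illustration, not Teng et al.'s weighted-sum model.
import torch
import torch.nn as nn

lexicon = {"great": 1.0, "terrible": -1.0, "fine": 0.3}   # toy sentiment lexicon

def lexicon_features(tokens):
    scores = [lexicon.get(t, 0.0) for t in tokens]
    return torch.tensor([sum(scores), max(scores), min(scores)])

sentence_repr = torch.randn(128)                 # stand-in for an LSTM sentence encoding
feats = torch.cat([sentence_repr, lexicon_features("the food was great".split())])
classifier = nn.Linear(128 + 3, 2)               # positive / negative
logits = classifier(feats)
```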
Deep Learning: By now, it is clear to everyone: Deep Learning is here to stay. In fact, deep learning and neural networks claimed the top two spots among the keywords used to describe the submitted papers. The majority of papers used at least one LSTM; using no neural network now seems almost contrarian and is something that needs to be justified. However, there are still many things that need to be improved -- which leads us to...
Uphill Battles: While making incremental progress is important to secure grants and publish papers, we should not lose track of the long-term goals. In this spirit, one of the best workshops that I've attended was the Uphill Battles in Language Processing workshop, which featured 12 talks and not one, but four all-star panels on text understanding, natural language generation, dialogue and speech, and grounded language. Summaries of the panel discussions should be available soon at the workshop website.
This was my brief review of some of the trends of EMNLP 2016. I hope it was helpful.
Cover image credit: Jackie Cheung