
Enforcing Output Constraints via SGD: A Step Towards Neural Lagrangian Relaxation

Cited 0 times in Web of Science; cited 2 times in Scopus
Authors

Lee, Jay Yoon; Wick, Michael; Tristan, Jean-Baptiste; Carbonell, Jaime

Issue Date
2017
Publisher
Neural Information Processing Systems Foundation
Citation
6th Workshop on Automated Knowledge Base Construction (AKBC 2017), at the 31st Conference on Neural Information Processing Systems (NIPS 2017)
Abstract
Structured prediction problems such as named entity recognition and parsing are crucial for automated knowledge base construction. Increasingly, researchers are exploring ways of improving them with neural networks. However, many structured-prediction problems require deterministic constraints on the output values; for example, requiring that the sequential outputs encode a valid tree. While hidden units might capture such properties, the network is not always able to learn them from the training data alone, and practitioners must then resort to post-processing. In this paper, we present an inference method for neural networks that enforces deterministic constraints on outputs without performing post-processing or expensive discrete search. Instead, for each input, we nudge the continuous weights until the network's unconstrained inference procedure generates an output that satisfies the constraints. We apply our method to pre-trained networks of various quality for constituency parsing and find that in each case, not only does the algorithm rectify a vast majority of violating outputs, it also improves accuracy.
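The inference procedure described in the abstract can be pictured as a per-input SGD loop over a copy of the network's weights. The following is a minimal PyTorch sketch of that idea, not the authors' code: the callables decode, is_valid, and violation_loss (a differentiable surrogate penalizing the hard-constraint violations) are hypothetical placeholders, and the learning rate and step budget are illustrative.

```python
import copy
import torch

def constrained_inference(model, x, decode, is_valid, violation_loss,
                          lr=0.05, max_steps=100):
    """Per-input constrained inference: nudge a throwaway copy of the
    weights with SGD until ordinary (unconstrained) decoding produces
    an output that satisfies the constraints, or the budget runs out.

    decode(m, x)          -> discrete output y from model m (e.g. greedy)
    is_valid(y)           -> True iff y satisfies the hard constraints
    violation_loss(m, x)  -> differentiable surrogate measuring how much
                             probability mass m puts on violating outputs
    """
    m = copy.deepcopy(model)             # per-input updates must not persist
    opt = torch.optim.SGD(m.parameters(), lr=lr)
    for _ in range(max_steps):
        with torch.no_grad():
            y = decode(m, x)             # unconstrained inference as usual
        if is_valid(y):
            return y                     # constraints satisfied; stop early
        opt.zero_grad()
        violation_loss(m, x).backward()  # push mass away from violations
        opt.step()
    return decode(m, x)                  # best effort after the step budget
```

Because the updates are applied to a per-input copy and discarded afterward, the pre-trained network itself is never altered, which matches the abstract's claim of avoiding both post-processing and discrete search.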
URI
https://hdl.handle.net/10371/201068
Files in This Item:
There are no files associated with this item.
Appears in Collections:

Related Researcher
  • Graduate School of Data Science
Research Area: Constraint injection, Energy-based models, Structured Prediction


Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.