My research is in natural language processing, the subfield of computer science that aims to enable computers to understand and produce human language. I focus mainly on machine translation, and I am also interested in syntactic parsing, among other areas.
An attentional model for speech translation without transcription. Long Duong, Antonios Anastasopoulos, Trevor Cohn, Steven Bird, and David Chiang. To appear at NAACL HLT 2016.
Auto-sizing neural networks: with applications to n-gram language models. Kenton Murray and David Chiang, 2015. In Proc. EMNLP.
Supervised phrase table triangulation with neural word embeddings for low-resource languages. Tomer Levinboim and David Chiang, 2015. In Proc. EMNLP.
Model Invertibility Regularization: Sequence alignment with or without parallel data. Tomer Levinboim, Ashish Vaswani, and David Chiang, 2015. In Proc. NAACL HLT, pages 609–618.
Multi-task word alignment triangulation for low-resource languages. Tomer Levinboim and David Chiang, 2015. In Proc. NAACL HLT, pages 1221–1226.
Improving word alignment using word similarity. Theerawat Songyot and David Chiang, 2014. In Proc. EMNLP.
Kneser-Ney smoothing on expected counts. Hui Zhang and David Chiang, 2014. In Proc. ACL, pages 765–774.
Decoding with large-scale neural language models improves translation. Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang, 2013. In Proc. EMNLP, pages 1387–1392.
Parsing graphs with hyperedge replacement grammars. David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight, 2013. In Proc. ACL, pages 924–932.