Towards fast and coherent long-form text generation

Talk presentation

Long-form text generation, which involves building computational models that produce lengthy output texts (e.g., summaries or document translations), is still in its infancy. In this talk, I focus on two key challenges facing long-form text generation models: (1) the deterioration of their output quality as outputs become longer and more complex, and (2) their general memory and speed inefficiency, which causes latency issues when they are deployed in user-facing applications. I frame these issues within a case study on computational story generation, a task for which my lab has successfully built and deployed a system to thousands of users. I will discuss the specifics of our model architecture, training strategies that maximize memory efficiency, and simplified model variants that increase inference speed.

Mohit Iyyer
UMass Amherst
  • Assistant professor of computer science at UMass Amherst
  • Ph.D. in Computer Science
  • Former researcher at the Allen Institute for AI (AI2)
  • Research interests lie broadly in natural language processing and machine learning
  • Best Long Paper Award, NAACL 2016 and 2018