Towards fast and coherent long-form text generation
Talk presentation
Long-form text generation, which involves building computational models to produce lengthy output texts (e.g., summaries or translations of documents), is still in its infancy. In this talk, I focus on two key challenges with long-form text generation models: (1) the deterioration of their output quality as outputs become longer and more complex, and (2) their general memory and speed inefficiency, which causes latency issues when they are deployed in user-facing applications. I frame these issues within a case study on computational story generation, a task for which my lab has successfully built and deployed a system to thousands of users. I will discuss specifics of our model architecture, training strategies to maximize memory efficiency, and simplified model variants to increase inference speed.