09:00 - 09:10
Opening remarks
NewSum Organizers

09:10 - 10:00
Keynote I - Shashi Narayan (Google)
Learning from the Past: Bringing Planning Back to Neural Generators
Traditional NLG systems in Reiter and Dale’s vision were inherently grounded and controllable, thanks to a planning stage that played a crucial role in ordering and structuring the information and in grounding the generated text in the plan. Modern neural generation systems have advanced NLG beyond our imagination, yet some of the most desired properties, such as grounding and controllability, have been lost and are still to be mastered. In this talk, I will discuss why we need to bring planning back to neural generation to make generation systems more grounded, controllable, inspectable, and trustworthy. I will present several pieces of evidence supporting this direction, drawing on existing work in data-to-text generation, story generation, and summarization.

11:00 - 11:50
Keynote II - Sebastian Gehrmann (Google)
Breaking News: It’s time to fix the evaluation of generated text
Language generation has undergone multiple paradigm shifts, from constructed grammars and modular systems toward end-to-end supervised (neural) approaches, and now almost every system is built on pretrained models. As a result, how generated text looks has changed a lot: it is now much more fluent, and most of its issues relate to its content. Yet we still use the same metrics, some of the same corpora, and how to conduct human evaluations remains a mystery. Throughout this talk, we will explore many examples of broken evaluations in summarization and other generation applications. I will discuss the implications that broken evaluation pipelines have for model development and for overall progress in the field, and I will show some promising results on developing evaluation suites, learned metrics, and meta-evaluations that have the potential to improve how generated text is evaluated.

13:00 - 13:50
Keynote III - Asli Celikyilmaz (Facebook AI Research)
Tune in To Your Language Model for Better Text Generation
With today’s neural language models, we can teach computers to summarize online meetings, write creative stories or articles about an event, hold longer conversations in customer-service applications, chit-chat about daily activities with individuals, and describe pictures to the visually impaired, to name a few. In this talk, I will discuss the challenges and shortcomings of building such systems with current neural text generation models, focusing on issues relating to collecting and annotating training datasets and to building new architectures that model the intrinsic structure of conversations. I will present our recent approaches that imbue transformer-based neural generators with structural representations by way of implicit memory architectures and latent structural embeddings. I will conclude by pointing to avenues for future research.

14:00 - 14:10
Capturing Speaker Incorrectness: Speaker-Focused Post-Correction for Abstractive Dialogue Summarization
Dongyub Lee1, Jungwoo Lim2, Taesun Whang3, Chanhee Lee2, Seungwoo Cho4, Mingun Park5, Heuiseok Lim2
1Kakao Corp, 2Korea University, 3Wisenut Inc., 4Kakao Enterprise, 5Microsoft

15:30 - 15:35
SUBSUME: A Dataset for Subjective Summary Extraction from Wikipedia Documents
Nishant Yadav, Matteo Brucato, Anna Fariha, Oscar Youngquist, Julian Killingback, Alexandra Meliou, Peter Haas
University of Massachusetts Amherst

15:55 - 16:00
"Let Your Characters Tell Their Story": A Dataset for Character-Centric Narrative Understanding
Faeze Brahman1, Meng Huang2, Oyvind Tafjord3, Chao Zhao4, Mrinmaya Sachan5, Snigdha Chaturvedi4
1UC Santa Cruz, 2University of Chicago, 3AI2, 4University of North Carolina at Chapel Hill, 5ETH Zurich