New Frontiers in Summarization Workshop

The Third NewSum Workshop

The Third Workshop on “New Frontiers in Summarization” aims to provide a research forum for the cross-fertilization of ideas in automatic summarization and related areas. This includes discussion of novel paradigms/frameworks, shared tasks of interest, information integration and presentation, applied research and applications, and possible future research foci. The workshop will pave the way towards building a cohesive research community, accelerating knowledge diffusion, and developing new tools, datasets, and resources that are in line with the needs of academia, industry, and government.



Overview

New advances in natural language processing (e.g., pre-trained models with self-supervised objectives) have resulted in state-of-the-art performance according to existing standards of summarization evaluation, by effectively exploiting large-scale datasets and superior computing power. This progress now needs to be complemented in at least two ways. On the one hand, summarizing large amounts of multimodal data (text plus other modalities) requires more sophisticated abstraction capabilities, better integration of abstraction and extraction, more flexible language generation, and the ability to combine language with information visualization. On the other hand, assessing the quality of such system summaries calls for more comprehensive evaluation metrics that correlate more tightly with human judgments or extrinsic task-based performance. For example, an important research topic that has emerged in the past two years is ensuring that the output summary is factually consistent with the source text (Kryscinski et al., 2019), which has given rise to new summarization evaluation techniques focused on correctness. Both of these pillars will be crucial for realistic, ecologically valid deployment of summarization research.

The goal of this workshop is to provide a research forum for the cross-fertilization of ideas. We seek to bring together researchers from a diverse range of fields (e.g., summarization, visualization, language generation, cognitive science and psycholinguistics) for discussion on key issues related to automatic summarization. This includes discussion of novel paradigms/frameworks, shared tasks of interest, information integration and presentation, applied research and applications, and possible future research foci. The workshop will pave the way towards building a cohesive research community, accelerating knowledge diffusion, and developing new tools, datasets, and resources that are in line with the needs of academia, industry, and government.



Call for Papers

Both long papers (up to 8 pages with unlimited references) and short papers (up to 4 pages with unlimited references) are welcome for submission!

Topics relevant to this workshop include (but are not limited to):

  • Abstractive and extractive summarization
  • Language generation
  • Multiple text genres (News, tweets, product reviews, meeting conversations, forums, lectures, student feedback, emails, medical records, books, research articles, etc)
  • Multimodal Input: Information integration and aggregation across multiple modalities (text, speech, image, video)
  • Multimodal Output: Summarization and visualization + interactive exploration
  • Tailoring summaries to user queries or interests
  • Semantic aspects of summarization (e.g. semantic representation, inference, validity)
  • Development of new algorithms
  • Development of new datasets and annotations
  • Development of new evaluation metrics
  • Cognitive or psycholinguistic aspects of summarization and visualization (e.g. perceived readability, usability, etc)



Submission Instructions

You are invited to submit your papers via our START/SoftConf submission portal. All submitted papers must be anonymized for double-blind review. The content of the paper should not exceed 8 pages for long papers and 4 pages for short papers, strictly following the EMNLP 2021 style templates. Supplementary materials and appendices (either as separate files or appended after the main submission) are allowed. We encourage authors to include a code link in the camera-ready version.

NewSum 2021 will allow double submission as long as the authors make a decision before the camera-ready deadline. We will not consider any paper that overlaps significantly in content or results with papers that will be (or have been) published elsewhere. Authors submitting more than one paper to NewSum 2021 must ensure that their submissions do not overlap significantly (>25%) with each other in content or results. Authors can submit up to 100 MB of supplementary materials separately. Authors are highly encouraged to submit their code for reproducibility purposes.

Note: The submission portal is now open.



Important Dates:

All deadlines are 11:59 pm UTC-12 (“anywhere on Earth”).

  • Anonymity period begins: July 28, 2021
  • Submission Deadline: September 3, 2021 (extended from August 28, 2021)
  • Acceptance Notification: September 28, 2021 (postponed from September 27, 2021)
  • Camera-Ready Submission: September 30, 2021
  • Workshop Date: November 10, 2021



Organizers

Lu Wang
Northeastern University, USA

Fei Liu
University of Central Florida, USA

Yue Dong
McGill University & MILA, Canada

Giuseppe Carenini
University of British Columbia, Canada

Jackie Chi Kit Cheung
McGill University & MILA, Canada



Confirmed Speakers

Shashi Narayan
Google
[Talk Slides]

Asli Celikyilmaz
Facebook AI Research

Sebastian Gehrmann
Google
[Talk Slides]





Schedule

NewSum 2021 schedule (9am - 6pm AST)

Technical Committee

  • Enamul Hoque (York University)
  • Jiacheng Xu (The University of Texas at Austin)
  • Rui Zhang (Penn State University)
  • Hou Pong Chan (University of Macau)
  • Yuntian Deng (Harvard University)
  • Kristjan Arumae (Amazon)
  • Xiaojun Wan (Peking University)
  • Chris Kedzie (Rasa Technologies Inc.)
  • Naoaki Okazaki (Tokyo Institute of Technology)
  • Manabu Okumura (Tokyo Institute of Technology)
  • Yang Liu (Microsoft)
  • Tadashi Nomoto (National Institute of Japanese Literature)
  • Linzi Xing (University of British Columbia)
  • Ari Rappoport (Hebrew University)
  • Felice Dell'Orletta (Istituto di Linguistica Computazionale "A. Zampolli" (CNR), Pisa, Italy)
  • Margot Mieskes (University of Applied Sciences Darmstadt, Germany)
  • Rodrigo Souza Wilkens (University of Essex)
  • Maxime Peyrard (EPFL)
  • Benoit Favre (Aix-Marseille University LIS/CNRS)
  • Tobias Falke (Amazon)
  • Thiago Alexandre Salgueiro Pardo (University of São Paulo)
  • Jessica Ouyang (University of Texas at Dallas)
  • Wencan Luo (Google)
  • Florian Boudin (Université de Nantes - France)
  • Juan-Manuel Torres-Moreno (LIA Avignon Université)
  • Michael Elhadad (Ben Gurion University)
  • Esaú Villatoro Tello (Universidad Autónoma Metropolitana Unidad Cuajimalpa, México)
  • Yuning Mao (University of Illinois at Urbana-Champaign)
  • Wen Xiao (University of British Columbia)
  • Xinyu Hua (Northeastern University)
  • Patrick Huber (University of British Columbia)
  • Abram Handler (University of Colorado)
  • Wojciech Kryściński (Salesforce Research)
  • Alexander Fabbri (Yale University)
  • Greg Durrett (UT Austin)
  • Yang Gao (Royal Holloway, University of London, UK)
  • Ramakanth Pasunuru (UNC Chapel Hill)
  • Ido Dagan (Bar-Ilan University)