The modeling of certain structures in natural language requires a mechanism for discontinuity, in the sense that we must account for two or more parts of a structure that are not adjacent. This is true across many languages and at different levels of description. For instance, on the lexical level, this concerns discontinuous morphological phenomena such as transfixation (templatic morphology), as well as phrasal verbs and non-contiguous multiword expressions. On the syntactic level, discontinuity is caused by phenomena such as extraposition, topicalization, and argument scrambling. Morphologically rich languages (MRLs) are particularly likely to exhibit such phenomena. Other examples include disfluency and anaphora/coreference resolution with discontinuous antecedents; modeling in both of the latter areas requires an extended domain of locality. On a higher level, discontinuity is a relevant factor in machine translation, as well as in complex question answering and in topic structure modeling. Discontinuity has been studied intensively in a range of different areas, including but not limited to grammar development, syntactic and semantic parsing, morphological analysis, machine translation, anaphora resolution, discourse modeling, automatic summarization, and complex question answering.
Nevertheless, the treatment of discontinuous structures remains a challenge: on the one hand, recovering non-local information is generally associated with a high computational cost; on the other hand, discontinuities are inherently a low-frequency phenomenon, which means that statistical approaches have a tendency to analyze them incorrectly as more frequent local phenomena. Additionally, it is not always clear if and how NLP tasks can benefit from knowing about discontinuity, that is, why one should care, particularly given the computational cost. The goal of this workshop is to bring together researchers from these different areas and give them a forum to exchange ideas and solutions, create synergies, and enable more powerful approaches. This encompasses not only linguistic analyses and work on analyzing or recovering the corresponding structures, such as non-projective dependency parsing, but also studies on "use cases" that show how information about discontinuity can be used to enhance NLP tasks.
The areas of interest of this workshop include but are not limited to the following topics:
- Theoretical and empirical analyses of non-local/discontinuous phenomena.
- Comparisons of different descriptions of the same type of non-local information.
- Use, development, and comparison of techniques for handling non-local/discontinuous phenomena within NLP tasks; examples of tasks which can benefit from handling discontinuous phenomena include machine translation, complex question answering, discourse modelling, automatic summarisation, and coreference resolution.
- “Use cases” that show how information about discontinuity can enhance an NLP task.
- Annotation of information about non-locality.
Our workshop highly values the open exchange of ideas, freedom of thought and expression, and respectful scientific debate. We support and uphold the NAACL Anti-Harassment Policy, and any workshop participant should feel free to contact any of the NAACL Board members or Priscilla Rasmussen in case of any issues.