Date: 20 January 2014
Location: MC226
Time: 3pm
Abstract:
Over the last two decades, distributional vector space models of meaning have gained considerable momentum in semantic processing. Initially, these models dealt only with individual words, ignoring the context in which those words appear. More recently, two different but related approaches have emerged that take into account the interaction between words within a particular context. The first approach aims at building a joint, compositional representation for larger units beyond the individual word level. The second approach computes the specific meaning of a word within a particular context. This presentation will look at a number of instantiations of these two approaches, and evaluate their strengths and limitations for the representation of meaning in interaction.
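The following is a minimal sketch, not the speaker's own method, of the two strategies mentioned in the abstract: composing word vectors into a phrase vector (here via simple additive and multiplicative composition), and adapting a word's vector to its context (here by mixing it with the vectors of neighbouring words). The toy vectors, the mixing weight alpha, and the cosine helper are illustrative assumptions; real models derive their vectors from corpus co-occurrence statistics.

```python
import numpy as np

# Toy distributional vectors (assumed for illustration only).
vectors = {
    "coach":    np.array([0.9, 0.1, 0.4]),
    "football": np.array([0.8, 0.0, 0.1]),
    "horse":    np.array([0.1, 0.9, 0.2]),
    "drawn":    np.array([0.2, 0.8, 0.3]),
}

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Approach 1: a joint, compositional representation for a unit larger than
# a single word, sketched with additive and multiplicative composition.
phrase_add = vectors["football"] + vectors["coach"]
phrase_mul = vectors["football"] * vectors["coach"]

# Approach 2: the meaning of a word within a particular context, sketched
# by interpolating the word's vector with the mean of its context vectors.
def contextualise(word, context, alpha=0.5):
    ctx = np.mean([vectors[w] for w in context], axis=0)
    return alpha * vectors[word] + (1 - alpha) * ctx

coach_in_sport = contextualise("coach", ["football"])
coach_as_carriage = contextualise("coach", ["horse", "drawn"])

# The same word drifts toward different senses depending on its context.
print(cosine(coach_in_sport, vectors["football"]))  # closer to the sports sense
print(cosine(coach_as_carriage, vectors["horse"]))  # closer to the vehicle sense
```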