We investigate how signs and signed sentences are structured and interpreted across sign languages, and how new signs and structures emerge and evolve over time. Our research focuses primarily on Sign Language of the Netherlands, but also covers many other sign languages, including Filipino Sign Language, Catalan Sign Language, and Russian Sign Language.
We use traditional methods to collect and analyze data (video recording and manual annotation), but also develop new methods based on computer vision and machine learning techniques.
Based on the patterns we find in the data, we develop theoretical models of how signed sentences are structured and how they are interpreted. These models make precise predictions, which are tested on further data, for instance from other sign languages. This allows us to progressively refine our theoretical models and to gain a better understanding of how sign languages, and human languages in general, work.
One of our current focus areas concerns the way in which questions are expressed in sign languages. It is known that signers often convey that they are asking a question by means of non-manual markers, for instance by raising their eyebrows. But little is known to date about the full range of non-manual markers that are used to express questions, and how exactly they differ in terms of discourse function. For instance, some non-manual markers might be used to express a neutral question (e.g., Have you met Susan before?) while others might be used to express biased questions (e.g., Haven't you met Susan before?). We are investigating how different kinds of questions are expressed across sign languages.
Grammaticalization is a process of language change, whereby lexical elements take on a grammatical function – think, for instance, of the English verb ‘to go’, which turned into a future tense marker (‘She is going to succeed’). Our research indicates that comparable phenomena exist in sign languages, and that many of the attested changes are actually modality-independent. In addition, however, sign languages have the unique potential to also grammaticalize manual and non-manual co-speech gestures, such as pointing and headshakes.
There is a heated debate in the literature on whether the spatial modulation of certain verbs can be described in terms of agreement. Clearly the use of space comes with modality-specific properties (e.g., the marker of a third-person subject on the verb is not stable, unlike in English, where a third-person singular subject is consistently marked by the suffix -s: ‘She shout-s’). We have taken a position in this debate by arguing that (at least in German Sign Language) the spatial modulation can be accounted for in modality-independent ways, and should thus be classified as an instance of agreement.
Reduplication is a morphological process whereby a word or part of a word is repeated to mark a change in meaning. Across spoken languages, reduplication is commonly used to realize plurals or certain aspectual meanings (e.g., habituality) – and the same has been found to hold in many unrelated sign languages. Reduplication is interesting, as it is often iconic in both spoken and signed languages, in that the repetition signals a multitude of objects (plural) or a multitude of events (aspect). We are interested in the similarities and differences between reduplication processes in the two modalities.
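To make the notion of reduplication concrete, here is a minimal sketch of full and partial reduplication as string operations. The Indonesian plural (‘buku’ → ‘buku-buku’, “books”) is a standard textbook example; the partial pattern shown is a hypothetical illustration, not data from our research.

```python
def full_reduplication(stem: str) -> str:
    """Repeat the whole stem, as in plural marking."""
    return f"{stem}-{stem}"

def partial_reduplication(stem: str, n: int = 2) -> str:
    """Repeat only the first n segments of the stem (hypothetical pattern)."""
    return stem[:n] + stem

# Indonesian full reduplication: 'buku' (book) -> 'buku-buku' (books)
print(full_reduplication("buku"))     # buku-buku
# Hypothetical partial reduplication of the first CV syllable
print(partial_reduplication("buku"))  # bubuku
```

The iconicity mentioned above is visible even in this toy sketch: the repeated form is literally “more” of the stem, mirroring the multitude of objects or events it encodes.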