Plenary Lectures

 

Giosuè Baggio (SISSA, Trieste): "Following the Neural Footprints of Semantic Composition off the N400 Path"

The N400 is an evoked brain potential whose amplitude varies as an inverse function of the degree of semantic affinity between the eliciting word and the context in which it occurs. The N400 has become a prominent dependent measure of lexical semantic processing. However, there are reasons to believe there is more to meaning construction in the brain than is reflected by the N400. In this talk I will present research from our laboratory suggesting that certain kinds of semantic operation indeed result in brain potentials that differ from the N400 in latency, duration or scalp distribution, and that yet other types of semantic computation produce oscillatory effects with no counterpart in evoked responses. I will argue that following these other neural footprints of meaning composition may lead us to a vantage point from which a connection between neural data and formal semantic theory can be established in ways that may allow specific neural signatures to be assigned to particular formal operations. I will conclude by presenting a model of semantic composition in the brain in which the N400 as well as other negative potentials can be accommodated.

 

Derek Ball (U St. Andrews): "Idealisation, Abstracta, and Semantic Explanation"

I present two arguments -- one based on the role of abstracta, and the other based on the need for certain sorts of idealisation -- that neither our best current semantic theories nor any foreseeable development of them is true. I then show how semantics can provide correct explanations of linguistic phenomena despite this fact, with special attention to how facts about other forms of scientific representation (such as models and measurements) can illuminate semantics.

 

Emma Borg (U Reading): "Facing the Evidence: What can Empirical Facts Tell us about Semantics?"

On the one hand, it seems almost a truism that we want a semantic theory which is informed by empirical evidence. However, on the other, it proves notoriously difficult in practice to map between empirical claims and theoretical ones. This paper aims to clarify the potential problems in moving between theory and evidence in this area, in the hope of coming to relate experimental evidence and semantic theories more closely in the future. The first part of the paper surveys semantic theories in general, asking what a semantic theory is supposed to be a theory of and what kind of evidence might be relevant to what kind of semantic theory. Focusing on the psychological dimension of language, I then explore what might be involved in the claim that a semantic theory is cognitively real. The second part of the paper turns to the empirical evidence and I sketch three distinct types of evidence that might be relevant to a cognitively real semantic theory: neurophysiological evidence, behaviour in experimental tasks, and behaviour in non-experimental situations. However, as we will see, there turn out to be significant problems in mapping from each kind of evidence to semantic claims. Time allowing, in the third and final part of the paper I will demonstrate these problems by looking at some examples from recent work on the semantics/pragmatics border, asking how the connection between theory and evidence might be tightened.

 

Anna Borghi (U Bologna): Cancelled

 

Max Kölbel: "Making Sense of the Methods of Natural Language Semantics"

There are many ways of investigating language. I am here interested in the methods of what one might call "traditional natural language semantics": the pursuit of describing and examining formal languages in order to model certain aspects of natural languages, in the tradition of Montague. This relatively young scientific pursuit seems to have become an institutionally recognized discipline within linguistics and, to some extent, within philosophy. Traditional natural language semanticists in this sense tend to follow a certain methodology: the data against which they test their theories seem to be data concerning the conditions under which the utterance of a given sentence would be true, perhaps also data concerning the felicity of certain sentences, and data concerning logical relations amongst sentences or amongst utterances of sentences. In most cases, traditional semanticists seem to obtain these data by simply "consulting their own linguistic judgement".

There is considerable unclarity and uncertainty about the object of study of natural language semantics, and the method of relying on one's own judgement has been questioned. In this paper I shall outline an account of the objects and methods of natural language semantics that vindicates to a large extent the customary methods of semanticists. I will argue that semantic theories model certain aspects of the competence of language users, that, metaphysically speaking, competence is a kind of disposition, and that dispositions of this sort can legitimately be examined with the customary methods. As a by-product of this account, I shall also briefly provide an answer to Kripkenstein and address the issue of I- versus E-languages.

 

 

Manfred Krifka (ZAS, Berlin): "The Mass/Count distinction: Philosophical, linguistic, and cognitive aspects"

The distinction between mass nouns and count nouns (and related notions such as collectives, plurals and measure constructions) has played an important role in philosophy of language and ontology, in linguistics, and in cognitive studies. In this talk I will draw on these contributions and attempt to provide a description of these notions.

 

Gina Kuperberg (MGH, Tufts U): "What can the study of schizophrenia tell us about the neural architecture of language processing?"

“I always liked geography. My last teacher in that subject was Professor August A. He was a man with black eyes. I also like black eyes. There are also blue and grey eyes and other sorts, too…” (Bleuler, 1911/1950).

This is an example of language produced by some patients with schizophrenia—a common neuropsychiatric disorder that affects 1% of the adult population. This type of disorganized speech is usually attributed to a ‘thought disorder’ or a ‘loosening of associations’, which influences not only the production of language but also comprehension and other aspects of higher-order cognition in schizophrenia patients. It is usually assumed that thought disorder reflects a qualitative abnormality in the neurocognitive mechanisms engaged in language processing. The assumption is that healthy individuals first retrieve the meaning of individual words, combine these words syntactically to form sentences, and then combine sentences with other sentences to construct whole discourse. In contrast, thought disorder has often been viewed as a separate disturbance of memory: stored associations between single words and whole events intrude upon normal language comprehension and production mechanisms.

Over the past ten years, our lab has carried out a series of cognitive neuroscience studies in both patient and control populations that challenge these assumptions. We are using multimodal neuroimaging techniques—event-related potentials, functional MRI and magneto-encephalography—to probe the time-course and neuroanatomical networks engaged in language comprehension. Our findings suggest that memory-based mechanisms play a much larger role in normal language processing than has often been assumed. We are able to retrieve and mobilize stored semantic relationships between words and events very quickly to facilitate the processing of upcoming words as language unfolds in real time. This facilitation manifests as reduced activity within the anterior temporal cortex within 300ms after the onset of incoming words. Combinatorial mechanisms appear to be prolonged when bottom-up input conflicts with stored semantic knowledge, leading to the recruitment of left frontal and inferior parietal cortices past 500ms.

This type of language processing architecture offers several advantages: it allows us to extract meaning from language very quickly, even in ambiguous and noisy environments, and it explains how our language systems are dynamic and flexible enough to respond to ever-changing task and environmental demands. It can also explain how language and thought can break down in disorders like schizophrenia: seen within this broad framework, thought disorder does not reflect a qualitative abnormality in how language is processed; rather, it is best conceptualized as reflecting an imbalance in the tight reciprocal relationship between the memory-based and combinatorial neurocognitive mechanisms that constitute normal language processing. In this way, the study of how language breaks down in neuropsychiatric disorders like schizophrenia can give important insights into the architecture of the normal language processing system.

 

Ira Noveck (CNRS, Lyon): "Finding Consistency among Pragmatic Inferences"

In this talk, I will take a careful look at conditional statements in order to defend a deflationary account of the generally accepted notion of invited inferences (see Noveck, Bonnefond & Van der Henst, 2012; Geis & Zwicky, 1971). This will provide a springboard for a view on pragmatic processing that I will refer to as narrowing, which is a way to gain information by refining aspects of the linguistic code (Noveck & Spotorno, in press). While assuming that this process is ubiquitous, I will rely on experimental data in order to detail a set of phenomena that fall under this category more generally. This process can be further subdivided into voluntary and imposed narrowing, with scalar inferences serving as the flagship example of the former (see (1b) as a derivation of the utterance in (1a)) and with metaphor being exemplary of the latter (see (2b) as the derived interpretation of (2a)).

 

(1)       a. Some of the guests are hungry.

            b. Some but not all of the guests are hungry.

 

(2)       a. Nobody wanted to run against John at school. John was a cheetah.

            b. John was fast.

 

Inferences of the voluntary sort have linguistically encoded readings that can, with extra effort, lead to a more informative reading. For those called imposed, there is no obvious relationship between the linguistic reading and the intended one, as the latter is forced on the listener. I will then return to conditionals and show how an invited inference is of the imposed sort.
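As a rough formal sketch of the voluntary case (a standard rendering of scalar strengthening, not taken from the abstract itself; the predicate names are merely illustrative), the narrowing in (1) can be stated in first-order terms as a strengthening of the encoded reading:

\[ \text{(1a)}\quad \exists x\,[\mathit{guest}(x) \wedge \mathit{hungry}(x)] \]

\[ \text{(1b)}\quad \exists x\,[\mathit{guest}(x) \wedge \mathit{hungry}(x)] \;\wedge\; \neg\forall x\,[\mathit{guest}(x) \rightarrow \mathit{hungry}(x)] \]

On this rendering the narrowed reading (1b) asymmetrically entails the encoded reading (1a), which is what makes it a more informative refinement of the linguistic code.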

 

Paul Pietroski (U Maryland): "What is a Theory of Human (Linguistic) Understanding?"

Following Dummett and others, I think that a good theory of meaning for a naturally acquirable human language L will be a theory of understanding for L, and that a good theory of understanding for L may not be a theory of truth for L. One need not endorse any traditional form of verificationism to think that the natural phenomenon of linguistic understanding reflects the nature of human psychology--and how our "faculty of language" interfaces with other cognitive systems--in ways that are not captured, and perhaps distorted, by "truth conditional semantics." In the talk, I will review some experimental work (done with colleagues in Maryland) that suggests a connection between understanding and verification, though not the kind of connection that many philosophers have tried to establish on a priori grounds. As time permits, I will link this work to the more general idea that meanings are instructions for how to build concepts of a special sort.

 

Michiel van Lambalgen (U Amsterdam): "Processing discourse: where logic meets neuroscience"

The task of formal semantics is generally taken to be to account for entailment relations between sentences; it is not claimed that formal semantics yields insight into the cognitive processes involved in language comprehension and production.
Here and in the talk by Giosuè Baggio a more ambitious program will be outlined, in which (i) formal semantics can be used to derive predictions about EEGs recorded during discourse comprehension, and (ii) observed EEGs constrain semantic theorising. The main example will be discourse consisting of arguments involving indicative conditionals, for which probabilistic analyses have recently become popular.
We will investigate the electrophysiological predictions made by probabilistic and logical accounts of the conditional, and will report on an experiment testing these predictions.
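For orientation (a textbook formulation, not necessarily the specific accounts tested in the experiment): probabilistic analyses standardly identify the probability, or acceptability, of an indicative conditional with the corresponding conditional probability, whereas the material-conditional reading assigns it the probability of the associated disjunction:

\[ P(\text{if } A \text{ then } C) \;=\; P(C \mid A) \;=\; \frac{P(A \wedge C)}{P(A)} \qquad (P(A) > 0) \]

\[ P(A \supset C) \;=\; P(\neg A \vee C) \;=\; P(\neg A) + P(A \wedge C) \]

Since the two quantities generally come apart (the material conditional is never less probable than the conditional probability, and typically exceeds it when P(A) is low), the two kinds of account can in principle license diverging predictions about discourse processing.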

 

Markus Werning: "Semantics naturalized: Between mental symbols and embrained simulations"

The principle of compositionality is pivotal to any theory of meaning and amounts to a homomorphism between syntax and semantics. It is often associated with the idea of a correspondence between syntactic and semantic part-whole relations, but - as will be shown - is logically distinct therefrom. The idea of a part-whole correspondence between syntax and semantics is characteristic only of symbolic theories of meaning. In this talk a neurobiologically motivated theory of meaning as internal representation is developed that holds on to compositionality, but is non-symbolic. The approach builds on neurobiological findings regarding topologically structured cortical feature maps and a proposed neural mechanism of object-related binding. It incorporates the Gestalt principles of psychology and is implemented by recurrent neural networks. The semantics to be developed is structurally analogous to model-theoretical semantics, which likewise is compositional and non-symbolic. However, unlike model-theoretical semantics, it regards meanings as set-theoretical constructions not of denotations, but of their neural counterparts, their emulations. The semantics to be developed is a neuro-emulative model-theoretical semantics of a first-order language.
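For readers less familiar with the algebraic formulation, the homomorphism condition mentioned at the outset can be stated as follows (a standard rendering, not specific to this talk): where E is a set of analysed expressions, μ : E → M a meaning function, and σ an n-ary syntactic operation, compositionality requires that for every σ there be a semantic operation σ_μ such that

\[ \mu\bigl(\sigma(e_1, \ldots, e_n)\bigr) \;=\; \sigma_{\mu}\bigl(\mu(e_1), \ldots, \mu(e_n)\bigr) \]

for all e_1, …, e_n in the domain of σ. Note that nothing in this condition requires μ(e_i) to be a part of μ(σ(e_1, …, e_n)); that further part-whole requirement is the additional commitment the abstract attributes to symbolic theories of meaning.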