Syntax in the media, and the role of syntax research in cognitive science more broadly

In recent years, researchers have been discussing why research in syntax / semantics isn’t as prominent in cognitive science as it used to be, and why it isn’t prominent in the public discourse on science more generally.  With respect to the latter point, David Pesetsky gave a plenary talk at the LSA in 2013 (http://web.mit.edu/linguistics/people/faculty/pesetsky/Pesetsky_LSA_plenary_talk_slides_2013.pdf) in which he worried about how the field of linguistics is portrayed in the media, and in particular that syntactic and other linguistics research is not generally known to the public.

Why don’t people in the general public know much about linguistics?  And why isn’t syntax / semantics part of the core of psychology as it seemed to be 40 years ago?

Here, I suggest that perhaps these two issues are related: maybe research in syntax / semantics isn’t prominent in current popular culture for the same reasons that it’s not prominent in cognitive science.  I think there are several reasons for this lack of prominence:

(A) Inadequate presentation thus far, not suitable for a general audience. I think it is uncontroversial to say that researchers in the fields of syntax / semantics do not generally explain their fields well to people without linguistics training.  Perhaps an underlying cause for this issue is a lack of emphasis in many linguistics departments on teaching and presenting work to non-specialists. This feels different from my experience in MIT’s Department of Brain and Cognitive Sciences, where most faculty that I know think that it is our job not only to do cutting-edge research, but also to explain it to any smart person, no matter what their background.

(B) The complexity of the hypotheses, relative to the quantity of data that are being explained.  Some syntactic claims are complicated, and rely on a lot of theoretical background. More complicated hypotheses need more data to be convincing (just as in any other area of science). Some hypotheses may not be interesting to the general public, or even to cognitive scientists, if they are seen as theoretically heavy, such that the ratio of theory to data is poor.

(C) Perhaps the lack of prominence is partially due to the fact that the fields of syntax and semantics are based on weak quantitative standards.  My colleagues Ev Fedorenko, Steve Piantadosi and I have discussed the issue of weak quantitative standards in syntax and semantics in some detail in published work (see the bottom of this post for some papers), but some of our writing appears to be misunderstood by at least some people.

One place to start is with what Peter Hagoort said about quantitative linguistics research in his post on our blog (http://thescienceoflanguage.com/2015/02/16/linguistics-quo-vadis-an-outsider-perspective/), and with Norbert Hornstein’s response (http://facultyoflanguage.blogspot.com/2015/02/the-future-of-linguistics-two-views.html).

(By the way, I agree with much of what Peter wrote but not all of it. Perhaps I will discuss what I don’t agree with in a later post.)

One of Norbert’s comments on Peter’s post is as follows:

“The third problem he identifies concerns the methodological standards for evidence evaluation in linguistics. He believes that current linguistic methods for data collection and evaluation are seriously sub-par. More specifically, our common practice is filled with “weak quantitative standards” and consists of nothing more than “running sentences in your head and consulting a colleague.” I assume that Hagoort further believes that such sloppiness invalidates the empirical bases of much GG research.

Sadly, this is just wrong. There has been a lot of back and forth on these issues over the last five years and it is pretty clear that Gibson and Fedorenko’s worries are (at best) misplaced. In fact, Jon Sprouse and Diogo Almeida have eviscerated these claims (see here, here, here and here).”

Unsurprisingly, I argue that Norbert’s interpretation of Sprouse and Almeida’s results is incorrect.  I think that Gibson and Fedorenko’s worries are indeed worth paying close attention to.

Three reasons to do quantitative experiments in syntax / semantics

First, let’s summarize the evidence, which I hope we can all agree on: Sprouse, Schütze & Almeida (2013) show that, in a random sample of 146 judgments from Linguistic Inquiry (LI), around 5-10% of the contrasts are either non-existent, in the opposite direction, or so small that we don’t want to conclude anything from them.  Mahowald et al. (submitted; http://web.mit.edu/kylemaho/www/SNAP.pdf) replicate this number on 100 different LI contrasts from the same set of years: about 90-95% right based on a forced-choice task, with 5-10% errors, depending on how conservative one is in deciding on significance levels. (In a different paper, Sprouse & Almeida (2011) show that a greater percentage of the judgments from Adger’s textbook also seem to be right.  More on this later.)
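
To make the forced-choice method concrete, here is a minimal sketch of the kind of replication test involved. All counts and labels below are invented for illustration (they are not data from either paper), and the 0.05 threshold is just the conventional choice.

```python
# Minimal sketch of a forced-choice replication test for a published
# judgment contrast. All counts are invented for illustration; they
# are not data from Sprouse et al. or Mahowald et al.
from math import comb

def sign_test_p(successes: int, n: int) -> float:
    """One-sided exact binomial (sign) test against chance (p = 0.5):
    the probability of at least `successes` participants choosing the
    predicted sentence if everyone were guessing."""
    return sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n

# (contrast label, participants choosing the predicted sentence, total N)
contrasts = [
    ("clear contrast",    48, 50),  # large, reliable preference
    ("weak contrast",     29, 50),  # too small to detect at this N
    ("reversed contrast", 17, 50),  # majority prefers the "bad" sentence
]

for label, k, n in contrasts:
    p = sign_test_p(k, n)
    verdict = "replicates" if p < 0.05 else "fails to replicate"
    print(f"{label}: {k}/{n} predicted choices, p = {p:.4f} -> {verdict}")
```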

Sprouse et al. have argued that these results show that not doing quantitative work is ok, because a 5% error rate is acceptable.  On the contrary, we argue that there is a real problem, for at least the following three reasons:

(1) Not knowing which 5-10% are mistakes: if you don’t do the experiments, you never know which judgments fall in the reliable 90-95% and which do not. This is a serious problem: it means that some fraction of the critical contrasts for your theories are probably wrong, but you don’t know which ones. The problem is even more severe if you don’t speak the language: then you don’t even have your own intuitions about which judgments are probably right, and which ones might be questionable or wrong.

(2) Effect sizes: you get no effect-size information without a quantitative experiment. Sprouse et al. and Mahowald et al. show that the effect sizes lie on a continuum, from non-existent to small to huge. The notion of grammaticality presupposes a threshold (between “grammatical” and “ungrammatical”) that probably isn’t there: in real language, the effects are probabilistic. It’s impossible to discover this from an armchair experiment if one presupposes a threshold between grammatical and ungrammatical. (A sketch of how effect sizes can be computed from raw ratings appears after this list.)

(3) Relative judgments across many sentence types: without quantitative methods, you can’t compare judgments across experiments. So even if sentence a is better than sentence b, you won’t have judgment data comparing a and b to many other structures without a quantitative experiment (a sketch of one way to make such cross-pair comparisons, via z-scoring, appears below).  Gibson, Piantadosi & Fedorenko (2013) (http://tedlab.mit.edu/tedlab_website/researchpapers/Gibson_et_al_2013InPress_LCP.pdf) make this point in some detail. In that paper, we ran some simple acceptability experiments showing that sentences which are “ungrammatical” relative to some control are still a lot better than others that are “grammatical” relative to some other control.  In particular, we showed that the extraction of an NP goal is worse than the extraction of a PP object (the judgment that some linguists have given is in parentheses):

(13) (numbers from the paper)

a. (ok) Joyce tried to remember who Donovan tossed a ball to.

b. (*) Joyce tried to remember who Donovan tossed a ball.

(Simpler versions of this contrast can be compared, like “Who did Donovan toss a ball to?” vs. “Who did Donovan toss a ball?”.  We made these versions embedded clauses to make them more similar (and hence more comparable) to the examples in (11).)

And there may be a small effect whereby an extra wh-phrase makes sentences with multiple wh-extractions (which include an object-extraction) better:

(11)

a. (*) Julius tried to remember what who carried.

b. (ok) Julius tried to remember what who carried when.

We show that there is a difference between 13a and 13b, and maybe a tiny difference between 11a and 11b (such that 11a is a little worse; but this isn’t actually quite significant in our data), but critically, 13a and 13b are much better than 11a or 11b.  Remember that 11a is supposedly “grammatical” and 13b is “ungrammatical”, so this is a problem.  Without doing the experiments, one can’t know this.
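
To make the effect-size point in (2) concrete, here is a minimal sketch of how effect sizes might be computed from raw acceptability ratings. All ratings are invented; the point is only that standardized effect sizes (here, Cohen’s d) form a continuum rather than a binary grammatical / ungrammatical split.

```python
# Minimal sketch of effect sizes computed from hypothetical 1-7
# acceptability ratings. All ratings are invented; the point is that
# effect sizes form a continuum, not a two-way split.
import math
import statistics

def cohens_d(a, b):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    n1, n2 = len(a), len(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Three hypothetical contrasts, from huge to nonexistent:
contrasts = {
    "huge":   ([7, 7, 6, 7, 6, 7], [2, 1, 2, 2, 1, 2]),
    "modest": ([5, 6, 4, 5, 6, 5], [4, 5, 4, 4, 5, 4]),
    "none":   ([5, 4, 6, 5, 4, 5], [5, 5, 4, 6, 4, 5]),
}

for label, (better, worse) in contrasts.items():
    print(f"{label}: d = {cohens_d(better, worse):+.2f}")
```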

Shalom Lappin and his colleagues have shown something similar for the examples in the Adger syntax textbook: although the relative judgments within each pair are usually right (as shown by Sprouse & Almeida), there are many “grammatical” examples that are rated worse than “ungrammatical” examples from other comparison pairs.
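
Here is a minimal sketch of how such cross-pair comparisons can be made: z-score each participant’s ratings so that all conditions land on a common scale, then average by condition. The ratings below are invented, chosen only to mirror the qualitative pattern reported above (13a and 13b well above 11a and 11b, with the “ungrammatical” 13b outscoring the “grammatical” 11a).

```python
# Minimal sketch of cross-pair comparison via per-participant
# z-scoring. All ratings are invented; they mirror the qualitative
# pattern reported above (13a and 13b well above 11a and 11b).
import statistics

# Each participant rates all four conditions on a 1-7 scale.
ratings = {
    "p1": {"13a": 7, "13b": 5, "11a": 2, "11b": 3},
    "p2": {"13a": 6, "13b": 5, "11a": 1, "11b": 2},
    "p3": {"13a": 7, "13b": 4, "11a": 2, "11b": 2},
}

def zscores(person_ratings):
    """Put one participant's ratings on a common (standardized) scale."""
    vals = list(person_ratings.values())
    m, s = statistics.mean(vals), statistics.stdev(vals)
    return {cond: (r - m) / s for cond, r in person_ratings.items()}

# Average each condition's z-score across participants.
for cond in ["13a", "13b", "11a", "11b"]:
    mean_z = statistics.mean(zscores(p)[cond] for p in ratings.values())
    print(f"{cond}: mean z = {mean_z:+.2f}")

# The "ungrammatical" 13b ends up well above the "grammatical" 11a,
# a fact that is invisible without putting all conditions on one scale.
```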

So, it seems clear that quantitative experiments are useful, and can move the field forward in a productive direction. Indeed, I was at a meeting not long ago attended by Jon Sprouse, Diogo Almeida, Carson Schütze and Colin Phillips (among others), and I think that they all agreed with the points above.

Other responses to the plea for quantitative research

Colin Phillips and others (e.g., “Todd”, commenting on Peter Hagoort’s earlier post) have said that they have never heard anyone from a neighboring field (such as cognitive science or neuroscience) say that they don’t pay attention to syntax papers because there are no quantitative methods in them.  Todd puts it like this:

“I’ve never heard a psychologist say, ‘You know what, I’d believe in {prosodic feet | little v | quantifier raising | …} except that I really don’t trust that judgment in sentence (12).’ or anything of the sort.”

But in what context would a psychologist / neuroscientist say something like this? They might think something like this but never tell you, especially if they don’t want a conflict with you. They might even be really interested in linguistic questions, but not like the reasoning, and go to work in another (maybe related) field. My ex-student Steve Piantadosi (one of the co-hosts of this blog) was a linguistics major as an undergraduate, but didn’t apply to linguistics graduate programs because he thought their methods (and the theories derived from them) involved huge leaps. So the fact that most people don’t say things like this doesn’t mean they aren’t thinking them. In addition, I know a lot of psychologists / cognitive scientists / neuroscientists, and when I have engaged them on linguistic questions, they often say things like the above. In fact, I have heard many people say something like Todd’s statement, but with stronger consequences: “I am dubious of claim x because I am dubious about the data that this hypothesis is based on.”

In addition, maybe researchers in other fields are concerned with issue (B), from the top of this posting: the complexity of the hypotheses, relative to the quantity of data that are being explained.  Maybe a hypothesis seems too complicated given the data that are presented.

Returning finally to the issue at hand: why don’t people in the general public know much about linguistics? David Pesetsky suggests that the solution to this problem is education. I certainly agree that educating the general public about language research would be good, and I look forward to seeing what form that education takes over the coming years.

——

Some of our papers on quantitative research in syntax / semantics:

Gibson & Fedorenko (2010), Trends in Cognitive Sciences: http://tedlab.mit.edu/tedlab_website/researchpapers/Gibson_&_Fedorenko_2010_TiCS.pdf

Gibson & Fedorenko (2013), Language and Cognitive Processes: http://tedlab.mit.edu/tedlab_website/researchpapers/Gibson_&_Fedorenko_2013_LCP.pdf

Gibson, Piantadosi & Fedorenko (2013, in press), Language and Cognitive Processes: http://tedlab.mit.edu/tedlab_website/researchpapers/Gibson_et_al_2013InPress_LCP.pdf

Mahowald et al. (submitted): http://web.mit.edu/kylemaho/www/SNAP.pdf

3 Comments

  1. Thanks for this post Ted, you hit on some key points IMO. I scrolled down to add my own view, as a non-syntactician concerned with/fascinated by the methodologies utilized in much of the relevant work. But I’m happy to see that my perspective has already been clearly articulated. (Well put Gary.)

  2. Thank you for bringing Pesetsky’s talk to my attention. The provided link to the slides is now broken, but here’s a working one: http://web.mit.edu/linguistics/people/faculty/pesetsky/Pesetsky_LSA_plenary_talk_slides_2013.pdf

    I have a simple answer to the question of why the cognitive science community (never mind the public) doesn’t take formal syntax seriously. The questions generative syntacticians have been trained to answer are questions of their own making.

    It’s not just that the hypotheses are too complex relative to the quantity of data being explained (although there is that, too), but that the problems being “solved” by these hypotheses are theoretical constructions of the linguists themselves.

    Not a single example provided by Pesetsky makes reference to any cognitive process, any actual language use, or any real world consequence of the theories he espouses.

    Compare that to *everything else* studied by cognitive scientists. Theories of memory aim to predict how people actually remember things. Theories of attention, how people attend. Theories of perception, how people perceive. Theories of language, how people learn, comprehend, and produce language.

    In comparison, theories of syntacticians predict what exactly? How people make grammaticality judgments? But people don’t make grammaticality judgments! That’s an artificial task constructed by linguists. If the assumption is that grammaticality judgments predict what people actually DO with language, then why not study that?

    So no, the problem here is not education. The problem is that generative linguists (especially syntacticians) developed an elaborate set of theories to explain problems of their own making (why can you say “I wanna meet Mary” but not “Who do you wanna meet Mary”) with no connection to cognitive processes or behavior. They marginalized language to such an extent that no one outside of their field can recognize it. They then expect cognitive scientists (and the public!) to care about these completely field-internal theoretical constructs. Sorry, no dice.

  3. One problem you’ve got is that everyone speaks and writes and so has some kind of knowledge of language. And what they’re concerned about is how they should use language. How do you get around that concern? How do you get people to understand that there’s an entirely different approach to language? Scholars have been hammering away at the distinction between description and prescription for years, and it still hasn’t gotten through. How do you get to a place where that issue doesn’t even arise?
