CHAPTER TWO: BEYOND THE COMPETENCE-PERFORMANCE BARRIER

1.Introduction

In Chapter One, we saw that the CPD was not one unitary distinction, but a cover-term for several fundamentally distinct dichotomies. Its effectiveness in hermetically sealing off the activities of linguists from interaction (not in principle, of course, but in practice) with those of other scholars, such as cognitive psychologists, is thus significantly reduced. In the present Chapter, we investigate in detail what such interaction might imply in practice.


2.Reinterpretations

We could start almost anywhere in the psycholinguistic literature, as far as reevaluating experiments from a Cognitive Linguistics standpoint is concerned, but let us look first at Johnson-Laird (1970). He takes as given the linguistic arguments for the notion deep structure, and proceeds to:


...inquire just how deep structure would be retrieved by the listener. He probably uses two sorts of cue. The first sort stem from surface structure and his implicit knowledge of how this is related by grammatical transformations to deep structure. For example, these cues will reveal that certain sentences are in the passive voice, and enable the appropriate alteration to be made in the semantic roles of the two noun phrases. It is implausible that the passive would in fact be "detransformed" into an active sentence. This would involve permuting the noun phrases and so it could only start after the entire sentence had been spoken....

The second sort of cue to deep structure arises from the listener's knowledge of the properties of lexical words. He cannot grasp the deep structure of many sentences unless he knows the structural meaning of lexical items they contain. Consider the sentences: "John promised the man to escape" and "John persuaded the man to escape". If the listener were unfamiliar with "promised" and "persuaded", he would not know that the subject of "escape" was "John" in the first sentence, but "the man" in the second sentence. Deep structure is supposed to be a prerequisite for meaning, yet here its analysis seems to depend upon an aspect of meaning.... Its analysis must be envisaged as polarized into two separate components, parallel to the two sorts of cue that have been discussed.... In this way, deep structure loses its independent psychological status -- though its function is none the less real, and the arguments against its psychological reality may be reconciled with those for its linguistic necessity. (ibid., 266-267)


I have quoted at length here, as this sort of reasoning is in many ways typical of a psycholinguist who is influenced by the unitary CPD. For example, it is not clear that Johnson-Laird is himself clear what any alternative to "detransforming a passive sentence into an active one" would actually involve. His objection to detransformation is that it could only be performed once the whole sentence had been heard, but it is not clear to me that the alternative he hints at (reassigning semantic roles to the relevant noun phrases) could be carried out any earlier. The problem that concerns him is, of course, the well-known fact that on-line processing of language is carried out on a segment-by-segment basis in real time, and the hearer is constantly making and modifying hypotheses about the structure/meaning of what he is hearing well before the end of any sentence is reached.

But, from a Cognitive perspective, what is particularly striking is that Johnson-Laird implicitly assumes that, whatever conclusions Psycholinguists might come to on such matters, they would in principle be completely independent of what linguists might conclude on what -- if it were not for the CPD -- might seem to be exactly the same questions. With respect to Chapter One, then, what we are referring to is in fact C/P 1-4 -- the dogma that linguists should concern themselves only with a theoretical object that is neutral as between speaker, listener, writer and reader. Now that we have isolated C/P 1-4 from the rest of the CPD "monolith", we are at liberty to ask why no positive justification exists in the literature (as far as I am aware) for this particular dogma, and why we should not ignore it in the interests of more fruitful cooperation between linguists and psychologists than has been possible under the aegis of the CPD.

The CPD also infects Johnson-Laird's discussion of what he calls the second sort of cue to deep structure. Here he concludes that deep structure has no independent psychological status -- yet he stresses that it has a real function and that it is linguistically necessary. It is vital to note that his conclusion that "deep structure" is not psychologically real is based on solid argumentation, whereas his assertions as to its "real function" and "linguistic necessity" are seemingly based on faith alone -- faith in C/P 1-4, and faith in the linguistic arguments for Deep Structure, in which connection it is worth noting that many linguistic theories (e.g. Generalised Phrase Structure Grammar, to name but one) find this concept unnecessary. If the C/P 1-4 dogma is rejected, psycholinguists will no longer need to passively accept the output of purely linguistic argumentation as "given": if deep structure, for example, is shown to have no psychological reality, cognitive psychologists should collaborate exclusively with those Linguists whose theories do not include that notion. In that way, real progress can be made -- as it has not been, and will not be if we continue to wait for linguists and psychologists to put their separate houses in order before they are entitled to regard their work as mutually relevant.

Some of the earliest psycholinguistic work on Transformational Generative Grammar was made to seem a blind alley precisely because of the influence of the CPD. Miller (1962) formulated the hypothesis that transformational rules might correspond to mental processes in the comprehension of sentences. Experiments showed, however, that there was no relationship between the number of transformations underlying a given surface structure (i.e. its "derivational complexity") and how easy it was to comprehend. But, far from concluding that this disproved a fundamental tenet of Transformational Generative Grammar, researchers invoked the CPD, and this line of research was abandoned as being irrelevant to "Competence". Now we can perhaps see the boot as being on the other foot, as it were, and return to this old but important research -- and ask whether it does not show that it is the transformational rules, not the empirical research, that should be abandoned as irrelevant. At the very least, different arrangements of transformations and different models of Transformational Grammars should be tested for their effect on comprehension, to see if some are more accurate in reflecting processing than others.
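
To make this last suggestion concrete, the following sketch shows the shape such a test might take. It is purely illustrative: the transformation counts, the comprehension times, and the two rival "models" are all invented, and the only assumption is that each candidate grammar assigns every test sentence a derivational-complexity score.

```python
# Illustrative sketch only: the transformation counts and the
# comprehension times below are invented, not data from Miller (1962)
# or from any actual experiment.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical derivational complexity of the same five sentences
# under two rival arrangements of the transformational component.
counts_model_a = [1, 2, 2, 3, 4]
counts_model_b = [1, 1, 3, 2, 5]

# Hypothetical mean comprehension times (ms) for those five sentences.
comprehension_ms = [410, 430, 560, 500, 690]

for name, counts in (("model A", counts_model_a),
                     ("model B", counts_model_b)):
    print(f"{name}: r = {pearson(counts, comprehension_ms):.2f}")
```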

Carroll (1979) takes the above research one step further:


Neo-structuralist investigators of sentence comprehension have for the most part given up the claim that grammatical theory provides a comprehensive basis for describing the cognitive and perceptual processes that isolate sentence comprehension units. However, they have attempted to maintain the claim that the processing units employed during sentence comprehension activity are isomorphic to grammatically defined structures.... We must now ask whether even the "weakened" syntactic theories are tenable. Can sentence comprehension units be defined exclusively in terms of levels of syntactic structure?... (Anderson (1976) and others) emphasize the propositional character of meaningful units.... And syntactic theorists have described comprehension processes designed expressly to parse together propositional sequences (e.g. Bever, 1970; Kimball, 1973). However... no level of syntactic structure enumerates all and only propositional sequences (Carroll, 1978; Carroll, Tanenhaus, and Bever, 1978).

(My) program of research in sentence comprehension begins from the assumption that the object of comprehension is the isolation and integration of propositional meaning units and that the processing capabilities that can be recruited for these tasks is (sic) limited.... Within this Functionalist program, it is natural to expect that a variety of syntactic structures might serve as comprehension units. Comprehension, unlike grammatical theory, is driven by the needs and limitations of human information processing.

A linguistic sequence is functionally complete to the extent that it completely and explicitly presents all the members that make up a coherent proposition: "subject-verb-(object)". Functionally complete sequences are accordingly predicted to be more effective sentence-comprehension units than functionally incomplete sequences -- regardless of levels of syntactic structure.


Given that Carroll goes on to describe two experiments which, he claims, support his hypothesis, and assuming (for the sake of argument) that one were to agree with this claim of his, what consequences emerge for grammatical theory? Under the unitary CPD, none at all. However, if we discard C/P 1-4 (with or without retention of certain of the other C/P pairs), then ordinary linguists must come to terms with the possibility that some (or even most) of the syntactic terms that have been and are being used commonly in some schools of Linguistics to describe grammatical structure must be jettisoned and replaced with terms that are compatible with experimental findings such as those of Carroll -- certainly as far as language-comprehension is concerned. It is possible that such terms would still be necessary to describe language-production. I must stress that I am not here expressing any opinion as to whether Carroll's thesis is correct in fact: my aim is just to show the kinds of readjustment that Linguists might have to undergo if it were generally agreed (as I suggest) that the C/P 1-4 distinctions be ignored or abolished.
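
Carroll's central predicate is at least easy to state precisely. The sketch below renders it as a trivial check over hand-assigned role labels; the labels and examples are my own stand-ins rather than Carroll's (1979) materials, and the point is merely that the predicate -- and hence the prediction about comprehension units -- is directly operationalisable.

```python
# Hypothetical rendering of Carroll's "functional completeness"
# predicate. The role labels and the examples are my own stand-ins,
# not Carroll's (1979) experimental materials.

def is_functionally_complete(roles):
    """A sequence is functionally complete if it explicitly presents
    a subject and a verb; an object is optional."""
    return "SUBJ" in roles and "VERB" in roles

# "the boy kicked the ball": subject, verb and object all explicit.
print(is_functionally_complete(["SUBJ", "VERB", "OBJ"]))  # True
# "kicked the ball": no explicit subject, so functionally incomplete,
# and predicted to be a less effective comprehension unit.
print(is_functionally_complete(["VERB", "OBJ"]))          # False
```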

Bever (1977) proposes the following syntactic rule for English:


Delete the complementizers, except in sentence-initial position, in which case at least one complementizer must remain.


He gives the following examples:


(1) *Sam was a fool was mentioned by John.

(2) That Sam was a fool was mentioned by John.

(3) The fact that Sam was a fool was mentioned by John.


Assuming that Bever is correct (and, again, I am not taking a position here as to whether he is correct or not), one is left with the problem of duplication: The same rule would have to appear in both the Competence and the Performance components of any full grammatical model. If one felt one had to choose between calling Bever's rule a Competence rule or a Performance rule, then it would clearly have to be a "Performance" rule, as no standard Autonomist "Competence" rules contain phrases such as "at least one". Yet it must be a "Competence" rule too, as it is the reason for the "ungrammaticality" of (1) (if Bever is right). Abolishing C/P 1-4 allows us to have a single rule for such otherwise troublesome cases.
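
To see how little is at stake formally in letting one rule do both jobs, consider the following sketch. The representation is a drastic simplification of my own (each example is hand-annotated rather than parsed), but the single function serves simultaneously as a production filter and as an acceptability judge -- precisely the dual role that the CPD would oblige us to state twice.

```python
# Bever's complementizer rule stated once, as a single rule. The
# hand-annotation of each example (is the initial clause a complement
# clause? how many complementizers remain?) is my own simplification;
# a fuller model would derive these facts from a syntactic analysis.

def acceptable(initial_complement_clause, complementizers_remaining):
    """Complementizers may delete freely, except that a sentence-
    initial complement clause must retain at least one."""
    if initial_complement_clause:
        return complementizers_remaining >= 1
    return True

examples = [
    ("*Sam was a fool was mentioned by John.", True, 0),
    ("That Sam was a fool was mentioned by John.", True, 1),
    ("The fact that Sam was a fool was mentioned by John.", True, 1),
]

# The same function filters production AND judges acceptability.
for text, initial, remaining in examples:
    verdict = "ok" if acceptable(initial, remaining) else "ruled out"
    print(f"{verdict:9} {text}")
```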

Bates, Kintsch, Fletcher, and Giuliani (1980) reports on psycholinguistic experiments concerning Anaphora against the following theoretical background:


For 10 to 15 years, cognitive psychologists and psycholinguists have subscribed to what might be called the "wheat vs. chaff" theory of verbal memory. This view grew out of a distinction between "deep" and "surface" memory that George Miller (1962) introduced into the memory literature, together with a prediction that deep structure would be retained and surface structure discarded in long term memory. The compatibility between this view and extant theories of generative grammar should be obvious (e.g. Chomsky, 1957). Data supporting this view include a report by Sachs (1967) that subjects cannot distinguish between active and passive paraphrases of a target sentence within 40 seconds after reading it in a prose passage, and a report by Garrod and Trabasso (1973) that recognition memory for surface form disappears within 40-60 syllables of the input. (op. cit., 41)


The authors report on four related experiments which bear on the "wheat vs chaff" theory as a general account of the processes at work in language processing. Sachs (1967) and Garrod and Trabasso (1973) involved artificial prose and laboratory situations. What inspired Bates et al., however, was the fact that two further studies, Kintsch and Bates (1977) and Keenan, MacWhinney and Mayhew (1977), used "real life" situations and produced results that were diametrically opposed to those of the former two studies. Kintsch and Bates (1977) carried out memory-testing 48 hours and as much as five days after the events concerned, and their results showed that:


recognition memory for surface form is much better in real life than in artificial situations, and the probability that a surface form will be retained is at least in part a function of how that form stands out in the original discourse situation. (Bates et al., 1980, 42)


The notion of a form "standing out" in a discourse (which was how Kintsch and Bates characterised the distinction between jokes and asides, on the one hand, and other components of classroom lectures, on the other) was further clarified by Keenan et al. (op. cit.). They employed the dichotomy of terms high in "interactive value" (jokes, figures of speech, insults, and abuse of speaker and audience), as opposed to those low in interactive value. Subjects proved able to distinguish originals from paraphrases only in the former category of utterances. A control group of subjects who had not been present at the live event and were given the same utterances in random order showed no difference between the two categories of utterances -- showing that memorability depended on the situation rather than on purely linguistic factors.

In the light of the above results, Bates et al. reinterpret Sachs (1967) as follows:


... some surface forms (e.g. the active/passive distinction) carry a pragmatic meaning which is available only in live discourse. Such pragmatic distinctions are lost in artificial or "cold" written prose, but are alive and meaningful in natural conversations and narratives. (ibid., 42)


It was to test this tentative conclusion that Bates et al. conducted the four experiments on the relatively well-defined distinction between anaphoric and explicit forms of reference.

Their results, in sum, confirmed those of Kintsch and Bates (1977) and Keenan et al. (1977). This confirmation was found to hold for both narrative and dialogue, and for both English and Italian. In addition, explicit reference was remembered better than default reference, where nouns are considered more explicit than pronouns, and pronouns more explicit than zero (i.e. subject deletion in Italian). In this connection they conclude:


Presumably explicit forms are more memorable because they play some kind of a marked role in conversation, compared with the "default" or unmarked case of anaphoric reference (ibid., 44).


Bates et al. were also able to establish that subjects remembered, and did not just reconstruct, the forms of reference that were used. Moreover, by substituting anaphors for explicit forms, and vice-versa, they were able to establish that it was not the explicit forms themselves that were more memorable than the anaphors -- it was the discourse "surrounds" of such explicit forms that made the noun phrase memorable. In addition, subjects who tried to guess how explicit a noun phrase should have been with the benefit of the complete discourse context to aid them did little better than chance. They conclude:


This suggests that disambiguation of reference is not the only factor governing choice of surface forms.... It may be that the decision to use an explicit form of reference involves the need to highlight, dramatize, mark topic shifts and important points, and generally "stage" utterances in a way that assures the listener's attention.... Certainly the notion of "drama" will be more difficult to formalize than the notion of referential ambiguity. But a full linguistic theory of pronominalization and anaphora may need to include such concepts if we are to describe and explain the rules of reference in natural discourse. (ibid., 47)


One of the most prominent researchers in the area of speech-processing in recent years has been W. Marslen-Wilson, together with his associates. I would like to turn now to Marslen-Wilson (1974), which addresses precisely those issues that have been concerning us in the present chapter.

He poses the question as follows:


Given a theory of sentence processing, what kind of linguistic theory does it imply?... Does one need ... to separate a mental competence grammar ... from a distinct performance grammar ...? It is difficult to see ... the a priori justification for this proliferation of internal grammars. As a working hypothesis, I will assume that there is only one internal representation of linguistic knowledge, and that the specification of this is the target both of cognitive psychologists and of those linguists who are so inclined. (ibid., 409-410)


It will, I hope, be clear to the reader that the present writer is one "of those linguists who are so inclined". I consider myself a Cognitive Linguist principally because I see Cognitive Linguists as being the only linguists who can fruitfully cooperate with cognitive psychologists on a unitary model of human language along the lines sketched above by Marslen-Wilson. A prerequisite to such cooperation is the dismantling of C/P 14 (see Chapter One), i.e. the boundary between "linguistic" and "nonlinguistic" mental faculties, and of C/P 1-4, i.e. the distinction between linguistic "structure" in the abstract, and linguistic "processes" exemplified in speaking, reading, writing, and listening.

It is worth stressing that the cognitive psychologist has as much right to ask what kind of linguistic theory fits his experimental results as the linguist has to ask the converse question. The latter has been more frequently asked than the former, it seems to me. Yet, since the amount of disagreement at every level -- from the most detailed to the most general -- is far greater among linguists than among cognitive psychologists, we would be much more likely to achieve real progress by letting the latter set the agenda, as there is no indication that even Generativists alone are likely to achieve consensus on a competence model of English (their most-studied language) in a finite time-span.

The essential factor, in Marslen-Wilson's view, that makes Generative Grammar fail the test of compatibility with processing models is the element of time:


Formal linguistic descriptions are constructed without reference to time. Entry into the system, whether from top or bottom, is assumed to be simultaneous across the entire string.... [This has given rise to] theories that are more or less compatible with this assumption. Namely, theories in which the development of a syntactic and semantic representation is delayed until towards the end of a clause or sentence.... But the time-course of this process has never been clearly specified, so that there is no step-by-step information-processing description precise enough to enable one to determine just where and when the input interacted with the listener's linguistic knowledge (ibid., 410-411).


Marslen-Wilson (1974) addresses this deficiency by carrying out experiments on the time-dimension. Specifically, he tries to establish exactly when in the comprehension-process what he calls "higher-level information" (i.e. semantic and syntactic information) becomes available to the listener for application to the task of "lower-level" (i.e. lexical and phonological) description of succeeding words in the string. Without going into the details of his actual experiments here, one can conclude, as he himself does,


... that the listener's linguistic knowledge is available to him, and is used by him, in a way that is inconsistent with the basic organizing principles of transformational structural descriptions (ibid., 411-412).


Specifically, he develops an "Interactive Parallel Model of Sentence Perception" (though the word "parallel" is not meant to imply any separation of the various forms of linguistic processing). He finds evidence of a lot of "top-down", as well as "bottom-up" processing:


The processing system appears to require a grammar that works from left-to-right, usually on a single pass, and where the possible constructions in a language are represented in such a way that each word, as it is heard, is immediately interpretable in terms of the possible continuations of the string with which it is compatible (ibid., 416).


Other work by Marslen-Wilson and associates has elaborated the "Interactive Parallel Model", renaming it the "On-Line Interactive" theory. Marslen-Wilson and Tyler (1980), for example, contrast their model with the "Serial Model", which arose under the influence of Generative Grammar, and which has been described in Carroll and Bever (1976), Fodor, Bever, and Garrett (1974), Forster (1979), Levelt (1978), Marslen-Wilson (1976), and Tyler (1980). This serial model is the one alluded to above, which assumes that information flows from the bottom up only:

A processor at any one "level" has access only to the stored knowledge which it itself contains, and to the output from the processor at an immediately lower level. Thus, for example, a word-recognition component can base its recognition decisions only on the knowledge it contains about the words in the language, and on the input it receives from some lower-level analyser of the acoustic-phonetic input. Given such an input, the word-recognizer will determine which word, or words, in the language are the best match, and then pass this information on to the processor at the next stage of analysis. What it cannot do, in a strictly serial system, is to allow a processor higher in the system to intervene in its internal decision processes -- either when ambiguous cases arise, or simply to facilitate the recognition of unambiguous inputs. Each component of the processing system is thus considered to be autonomous in its operations (Marslen-Wilson & Tyler, op. cit., 3).
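
The architecture just quoted can be rendered schematically as follows. The sketch is my own construction, not Marslen-Wilson and Tyler's: the toy lexicon, the letters standing in for phonemes, and the stand-in higher processors are all invented. What matters is only that information flows strictly upward, each stage completing its decision before the next begins.

```python
# Schematic, invented rendering of the strictly serial architecture:
# each processor uses only its own stored knowledge plus the output
# of the level immediately below, and no higher level can intervene.

LEXICON = {"captain", "capital", "captive"}

def match_score(word, phoneme_prefix):
    """Number of initial segments of the word matching the input."""
    return sum(1 for a, b in zip(word, phoneme_prefix) if a == b)

def word_recognizer(phoneme_prefix):
    """Decides on a word using ONLY lexical knowledge and bottom-up
    input; context cannot reach into this decision."""
    return max(LEXICON, key=lambda w: match_score(w, phoneme_prefix))

def syntactic_processor(word):
    return ("NP", word)            # stand-in for a structural analysis

def semantic_processor(parse):
    return {"referent": parse[1]}  # stand-in for an interpretation

# Strictly bottom-up flow: each stage completes before the next begins.
word = word_recognizer("capta")
print(semantic_processor(syntactic_processor(word)))
```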


As the authors point out, the strong version of the serial theory as to the relationship between the syntactic processor and the semantic processor has been abandoned. Forster (1979), for example, has admitted that it is not plausible that semantic interpretation is delayed until the syntactic clause or sentence boundary is reached. That author claims that the input is divided into "viable syntactic constituents", which are the units that are passed up to the semantic processor. However, no definition is provided for "viable syntactic constituent", which makes Forster's claim hard to test.

Despite the lack of such a definition, however, Marslen-Wilson and Tyler point out that there is a clear, testable prediction inherent in the serial model that there will be a relative delay in the outputs of the various processors.

It would be inappropriate to go into detail here, but we can note that the results of Marslen-Wilson and Tyler's experiments seem to show that sensory (phonetic) and contextual (syntactic/semantic/pragmatic) inputs interact at the same stage of processing, so the relative delays predicted by the Serial Model do not occur. Moreover, word-recognition occurred after about the first two phonemes had been heard, at which stage (for the words used in the relevant experiments) the median number of possible English words that the hearer had to choose between was 29. Under the Serial Model, there would be no way for just one word to have been selected at that point, since on that model contextual information cannot be brought to bear until the word has already been recognized.
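
A toy example may make the force of this point clear. The illustration below is again my own invention rather than a reconstruction of the actual experiments -- the lexicon, the two-letter stand-in for "the first two phonemes", and the hand-written plausibility set are all hypothetical. It shows why early selection demands interaction: the bottom-up cohort is still large at the point where recognition is observed to occur, and only contextual constraints can prune it.

```python
# Invented illustration of early, interactive word selection. Letters
# stand in for phonemes; the lexicon and the contextual plausibility
# set are hypothetical, not Marslen-Wilson and Tyler's materials.

LEXICON = {"captain", "capital", "captive", "cabin", "candle"}

def cohort(prefix, lexicon=LEXICON):
    """All words compatible with the bottom-up input heard so far."""
    return {w for w in lexicon if w.startswith(prefix)}

def plausible_in_context(word, context):
    """Stand-in for syntactic/semantic/pragmatic constraints: a
    hand-written set of words that the context admits."""
    admissible = {"the ship's ...": {"captain"}}
    return word in admissible.get(context, set())

heard = "ca"          # roughly the first two phonemes
print(cohort(heard))  # all five words: the sensory input alone
                      # cannot yet select a unique candidate

survivors = {w for w in cohort(heard)
             if plausible_in_context(w, "the ship's ...")}
print(survivors)      # {'captain'}: context prunes the cohort well
                      # before the word's acoustic offset -- exactly
                      # the interaction the Serial Model forbids
```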

The exact timing of such interactions is still controversial. In particular, Marslen-Wilson (1987) revises his earlier views on this matter somewhat, whereas Zwitserlood (1989) reports experimental results which favour Marslen-Wilson's earlier, over his later proposals. I discuss some other research on this topic below.

Marslen-Wilson's earlier results bear importantly on the C/P 10 distinction (between sentence-grammar and the linguistic context). Marslen-Wilson and Tyler found, in fact, that discourse context can interact with the processing of the first words of a new sentence, which means that, in processing terms, there is no C/P 10 distinction. If we, along with Marslen-Wilson and Tyler, also reject the C/P 1-4 distinctions (between Abstract Grammar and the Speaker, Writer, Reader and Listener roles), then we are committed to (virtually) abandoning the sentence-boundary as a linguistically interesting notion.

This general line of research has continued into the present. It is not my intention here to carry out a comprehensive survey of the relevant literature, but it may well be appropriate to give some indication of the ways in which these ideas have been followed up.

Those works (e.g. Masson (1988) and Cowart and Cairns (1987)) which investigate reading are not necessarily directly relevant to the results on listening that Marslen-Wilson and associates report on. Interestingly, it seems to be precisely those works (on reading) that come out most strongly against the On-Line Interactive Theory. Cognitivists who reject the notion of Abstract Grammar, as per C/P 1-4, will find no difficulty with conclusions that tend to show that reading and listening differ to some extent in the way various otherwise common factors interrelate.

McAllister (1988) reexamines Tyler's (1984) hypothesis that hearers do not use contextual information during their processing of the early parts of auditorily presented words. He found that, on the contrary, hearers made use of contextual information even during the processing of the first 50 msec of test words. These results are more compatible with the Logogen Model (Morton 1969) than with the Cohort Model proposed by Tyler (1984). McAllister attributes the discrepancy between the results obtained by himself and by Tyler to misinterpretations by subjects of the contextual cues provided in the latter's experiments.

Bard et al. (1988) report on the recognition of words after their acoustic offsets in spontaneous speech. They found that prior context, acoustic information and subsequent context all have roles in the processing of connected speech. The use of subsequent context is not a marginal phenomenon, and is related to the amount of prior context available to the listener and to the length of the word involved: the shorter the word, and the less prior context is available, the more likely subsequent context is to be employed.


Within the framework adopted here, ... late recognition ... emerges from the regular functioning of a mechanism in which alternative hypotheses are entertained simultaneously until such a time as there is sufficient evidence to select one hypothesis above the others .... For a substantial proportion of the time, this ... processor must be pursuing hypotheses across a stretch of input corresponding to more than one word (ibid., 407).


However long this stretch of input actually is, it must presumably be retained in memory in some form. It seems to me plausible to assume that it is stored in phonological form -- as discussed in Chapter Three in relation to the Phonological Identity Condition.
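
A final sketch may indicate the kind of mechanism Bard et al. describe. It is my own construction, with invented homophones and a hand-coded stand-in for "subsequent context": both word hypotheses survive past the word's acoustic offset -- held, on the assumption just made, in phonological form -- until the following word decides between them.

```python
# Invented sketch of late recognition via subsequent context: two
# short words are acoustically identical, so neither hypothesis can
# be excluded at the word's offset; the phonologically buffered
# stretch is resolved only when the following word arrives.

HOMOPHONES = {"week", "weak"}   # hypotheses entertained in parallel

# Hand-coded collocational evidence standing in for subsequent context.
DISAMBIGUATORS = {"ago": "week", "tea": "weak"}

def recognize_late(buffered_form, next_word):
    """Return the selected word once subsequent context decides, or
    the still-open hypothesis set if it does not."""
    candidates = set(HOMOPHONES)        # acoustic input fits both
    choice = DISAMBIGUATORS.get(next_word)
    return choice if choice in candidates else candidates

print(recognize_late("/wi:k/", "ago"))  # 'week': selected after offset
print(recognize_late("/wi:k/", "is"))   # {'week','weak'}: still open
```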



3.An Autonomist counter-argument


The Generative camp has produced a rebuttal of Marslen-Wilson's approach to C/P 1-4, in the form of Berwick and Weinberg (1981). This rebuttal centres on a hypothesis which the authors label the Type Transparency Hypothesis (TTH), and which they consider to be inherent both in Marslen-Wilson's approach and in the approach of those, discussed above, who conducted research into the extent to which derivational complexity is reflected in comprehensibility.


Generative linguists have insisted that the grammars they construct should be viewed as central components of psychological models of language use.... This insistence is motivated by the reasonable assumption that a speaker/hearer should use the knowledge of his language (which linguists assume is described by linguistic theory) when processing or producing sentences.

It has also frequently been proposed that grammatical models be realized more or less directly as parsing algorithms. Evidently we are to impose the condition that the logical organization of rules and structures incorporated in a grammar be mirrored rather exactly in the organization of the parsing mechanism actually employed in sentence processing. We will call this the condition of Type Transparency. The Type Transparency Hypothesis makes a much stronger claim than one that holds simply that knowledge of language should guide the use of language: it claims that the principles employed to describe the system of knowledge that makes up the language faculty should also provide an adequate description of that system's implementation in language use (ibid., 2-3).


The authors identify three versions, graded according to strength, of this TTH:


1. The strongest version, which would require an isomorphism between rules and operations of the grammar and the corresponding rules and operations of the parser.


2. The intermediate version, which would require that the parser merely preserve distinctions made in the grammar (i.e. allow a homomorphic mapping); then the parser would be free to make additional distinctions.


3. The weakest version is that attributed by the authors to Bresnan (1978), according to which only the distinctions between types of grammatical rules must be preserved (as distinctions between types of parsing operations).


Indeed, even this weakest of the three versions of the TTH is not weak enough for Berwick and Weinberg:


... transparency is not a necessary property of a parsing model. If future experiments show that this direct mapping is untenable, then researchers interested in constructing a theory of language use should still be interested in the theory of linguistic competence, to the degree to which we can use this theory to constrain the class of possible parsers.... By using a relation among a class of grammars known as covering, one can demonstrate that a parser may be able to exploit the rules of a grammar non-transparently .... Informally, one grammar G1 covers another grammar G2 if (1) both generate the same language (L(G1) = L(G2), that is, the grammars are weakly equivalent) and (2) we can find the parses (structural descriptions) that G2 assigns to sentences by parsing the sentences using G1 and then applying a "simple" (easily computed) mapping to the resulting output (ibid., 46-47).
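
The covering relation can be made concrete with a toy case of my own (it is not Berwick and Weinberg's example). Let G2 be the nested grammar S -> a S b | a b for the language a^n b^n. The "parser" below never applies G2's rules: it recognises strings by simple counting, standing in for parsing with some covering grammar G1, and then recovers the structural description G2 would assign via an easily computed mapping.

```python
# Toy illustration of covering (my own example). G2 is the nested
# grammar  S -> a S b | a b  for the language a^n b^n. Weak
# equivalence plus an easily computed mapping on outputs is all
# that the covering relation requires.

def parse_by_counting(s):
    """Recognise a^n b^n without using G2's rules; return n or None."""
    n = len(s) // 2
    if n > 0 and s == "a" * n + "b" * n:
        return n
    return None

def g2_description(n):
    """Map the parser's output onto the parse tree G2 would assign."""
    tree = ["a", "b"]              # S -> a b
    for _ in range(n - 1):
        tree = ["a", tree, "b"]    # S -> a S b
    return tree

n = parse_by_counting("aaabbb")
if n is not None:
    print(g2_description(n))  # ['a', ['a', ['a', 'b'], 'b'], 'b']
```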


Subsequently, however, the authors do acknowledge that some researchers (e.g. Tyler (1980)) suggest dispensing altogether with a level of purely grammatical characterisation. This is a crucial point, as the hypothesis that one needs a grammatical description that is neutral as between speaker and hearer as per C/P 1-4, which I will call the C/P 1-4 Hypothesis, is logically prior to the TTH. In other words, if you do not accept that there needs to be a C/P 1-4 distinction (or set of distinctions) in the first place, it is superfluous to worry about what the relationship should be between C 1-4 and P 1-4.

That is the main drawback (apart from the seeming irrelevance of Abstract Grammar anyway, if the relationship between it and processing/production processes is to be so attenuated) to the thesis propounded by Berwick and Weinberg (op. cit.). About ninety percent of their paper is taken up with a discussion of the TTH, and only about half of the remainder is spent on the C/P 1-4 Hypothesis.

The authors defend the C/P 1-4 Hypothesis as follows: First, they state that one can use the theory of Competence to constrain the class of possible parsers. However, it is doubtful whether this fact is of practical benefit or relevance, for two reasons. The first is that, if the relation of covering is all that is involved, it is not clear that a highly elaborate grammar, such as those emanating in successive waves from MIT, based as they are on disputed and/or in principle dubious Descriptive Grammaticality Intuitions (see Chapter One), would differ significantly in its coverage from just any ad hoc (but useful) collection of grammatical rules that a working Psycholinguist might concoct to suit his purpose. The second reason is that Generative Grammars are based on the notion of Descriptive Grammaticality, and it is not clear to me that this notion plays any part in language processing (certainly), or even in language production (where Normative Grammaticality would be the only sort of "Grammaticality" that had any relevance).

The authors' second argument in favour of the C/P 1-4 Hypothesis is that it enables the working Psycholinguist to distinguish what is being computed from how it is being computed. However, if one, like myself, denies the validity of the C/P 1-4 Hypothesis, and limits one's grammatical descriptions to (in essence) production and processing strategies, it is inherent and inevitable that the units that are the "targets" of the strategies would have to be just as clearly (and, for that matter, just as unclearly) defined as they would be in a corresponding abstract-linguistic description. So the problem of distinguishing the what from the how would be no more or less great than it would otherwise be.

The authors' third argument relates to language learning, but it is stated so cursorily that it is impossible to evaluate it.



4.Conclusion


I will conclude by mentioning what I take to be the most obvious theoretical benefit of abolishing the C/P 1-4 set of distinctions: the simplification achieved by avoiding duplication. Speaker-hearer neutral rules need no longer exist in addition to Processing Strategies and Production Strategies.
