Charles Adamson comments on Holland's model

From the Chaosla listserv (a listserv dedicated to the study of chaos and complexity theory as applied to second language acquisition), Charles Adamson applied Holland’s model to language itself rather than to the processes through which students produce it. What follows is our conversation, with some paraphrasing, integration of emails, and adaptation for this forum.

Charles Adamson wrote:

Properties

Aggregation: This would seem to take place on a number of levels - letters into words, words into phrases, phrases into sentences, sentences into paragraphs, paragraphs into sections or chapters, and sections or chapters into complete works.

Nonlinearity: At every level language is nonlinear; the whole is not predictable from the parts. For example, if we have the words the, man, dog, and kills, we can generate /The man kills the dog/ or /The dog kills the man/. We also have sentences like /The horse raced past the burning barn fell/, which is almost impossible to understand the first time it is seen.

Flows: Initially, I was thinking that it was the flow of information, but maybe it is the control that each additional word in a sentence exerts on the potential words to follow. Each additional word in a sentence limits the pool of potential sequences that can follow it. It is also obvious that the string of words has a strong role in selecting the tags that can be active for the following words. This might be considered the movement of resources.
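One way to make this narrowing concrete (my sketch, not Charles's, using a made-up toy corpus) is to build a table of observed successors: each word added to a sentence shrinks the pool of words that can follow.

```python
from collections import defaultdict

# Toy corpus; in practice this would be a large text collection.
corpus = "the man kills the dog . the dog kills the man .".split()

# Map each word to the set of words observed to follow it.
successors = defaultdict(set)
for w, nxt in zip(corpus, corpus[1:]):
    successors[w].add(nxt)

def pool_after(prefix):
    """Words that can follow the last word of the prefix, per the corpus."""
    return successors[prefix[-1]]

# Each added word narrows what may come next:
print(sorted(pool_after(["the"])))         # → ['dog', 'man']
print(sorted(pool_after(["the", "man"])))  # → ['.', 'kills']
```

After /the/, either noun may follow; after /the man/, the pool has already collapsed to the verb or the sentence boundary, which is the "control" Charles describes.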

Diversity: This refers to the variety of word and sentence types, parts of speech, etc.

Mechanisms

Tagging: Words are tagged with both a meaning and a part of speech. These interact and determine the possibilities for the use of the word in context. I might mention that Robin Fawcett, a Hallidayan researcher, determined that there are just over 300 slots in a generalized sentence. This means that there are just over 300 parts of speech, since only certain words can go in each slot.
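A minimal sketch of this tagging idea, with a hypothetical lexicon of my own invention: each word carries a part-of-speech tag and a rough meaning tag, and the POS tag decides which slots the word can fill.

```python
# Hypothetical lexicon: each word carries the two kinds of tags
# Charles mentions, a part of speech and a meaning.
lexicon = {
    "the":   {"pos": "determiner", "meaning": "definite reference"},
    "man":   {"pos": "noun",       "meaning": "adult human male"},
    "dog":   {"pos": "noun",       "meaning": "canine animal"},
    "kills": {"pos": "verb",       "meaning": "cause to die"},
}

def fits_slot(word, slot_pos):
    """A word can fill a slot only if its POS tag matches the slot's requirement."""
    return lexicon[word]["pos"] == slot_pos

# A minimal subject-verb-object frame: three slots with POS requirements.
frame = ["noun", "verb", "noun"]
sentence = ["man", "kills", "dog"]
print(all(fits_slot(w, s) for w, s in zip(sentence, frame)))  # → True
```

Fawcett's 300-odd slots would replace this three-slot frame, but the mechanism is the same: tags mediate which words can interact with which positions.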

Internal models: This would seem to refer to the patterns that we can extract from the vicinity of a word. These patterns are strong enough that it is possible to generate an index number consisting of the sum of the inverse general frequencies of the three words on each side of the target word. This index separates the various senses of a word, that is, its meanings.

Building blocks: Words and affixes, which become all the other things in language.

My comments:

Charles A.’s application modifies Holland’s model a little. In the model, internal models are mechanisms agents use to anticipate. Thus, if language is the system, we might consider words as "anticipating" (through tags) where they would fit in (or interact with) a particular aggregate of words, although it doesn’t really make sense to me that words can anticipate. Even so, Brent Davis, a prominent complexity science researcher in mathematics education at the University of Alberta, considers ideas to be agents.

I’ll need to think some more about his suggestions concerning flows, since flows are concerned with the movement of resources among agents; but, to rephrase him, it is an interesting idea to equate “enabling constraints” (another concept I acquired from Brent Davis) with resources.

Charles A. expanded more on internal models and the concept of anticipation:

The word 'the' will have an internal model in which 'the' is followed by modifiers (including a null modifier) and then a noun or nouns. This model restricts any following word to one whose own internal model is that of a modifier or a noun. Another example would be verbs, whose internal models specify, among other things, the number of objects and whether or not the grammatical subject of the sentence is animate or inanimate.
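The internal model of 'the' can be sketched as a small recognizer (my illustration, with hypothetical category assignments): zero or more modifiers, then a noun, and nothing else.

```python
# Hypothetical category assignments for a handful of words.
categories = {"the": "determiner", "big": "modifier", "old": "modifier",
              "dog": "noun", "man": "noun", "kills": "verb"}

def allowed_after_the(words):
    """Check a sequence against the internal model of 'the':
    zero or more modifiers, then exactly one noun."""
    i = 0
    while i < len(words) and categories.get(words[i]) == "modifier":
        i += 1
    return (i < len(words)
            and categories.get(words[i]) == "noun"
            and i == len(words) - 1)

print(allowed_after_the(["big", "old", "dog"]))  # → True
print(allowed_after_the(["kills"]))              # → False
```

The model does not "anticipate" anything here; it simply exists as a constraint that sequences either satisfy or violate, which is Charles's point below.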

I do have one problem with the use of 'anticipate' in relation to the internal models. Linguistically, the internal model of the word 'anticipate' requires an animate, sentient grammatical subject. We can generally ignore this fact, but it is like the proverbial rotten apple: given time, it can cause all sorts of problems. It becomes very easy to start attributing other characteristics of sentient beings to the model. However, the model does not anticipate; it exists. We, the humans, anticipate when we think about the language processes associated with the word.

My comments:

Charles is right that problems occur when we apply attributes of sentience to inanimate agents. Holland draws upon biology for his model, and so, although it gives some insight into language, adaptations may be needed to use it with language as a system. I suppose we will need to see what is gained and what is lost when we do so.