Assessment

The Stanford Center for Teaching and Learning has an excellent email newsletter for professional development with respect to, as the name indicates, teaching and learning. These articles can also be discussed at Tomorrow's Professor Blog. Here's an example from the most recent emailing, titled The Ten Worst Teaching Mistakes. The ten mistakes are:

  1. When you ask a question in class, immediately call for volunteers.
  2. Call on students cold.
  3. Turn classes into PowerPoint shows.
  4. Fail to provide variety in instruction.
  5. Have students work in groups with no individual accountability.
  6. Fail to establish relevance.
  7. Give tests that are too long.
  8. Get stuck in a rut.
  9. Teach without clear learning objectives.
  10. Disrespect students.

The newsletter is somewhat brief, with each item receiving a one-to-three-paragraph explanation. For instance, on disrespecting students, it states,

How much students learn in a course depends to a great extent on the instructor's attitude. Two different instructors could teach the same material to the same group of students using the same methods, give identical exams, and get dramatically different results. Under one teacher, the students might get good grades and give high ratings to the course and instructor; under the other teacher, the grades could be low, the ratings could be abysmal, and if the course is a gateway to the curriculum, many of the students might not be there next semester. The difference between the students' performance in the two classes could easily stem from the instructors' attitudes. If Instructor A conveys respect for the students and a sense that he/she cares about their learning and Instructor B appears indifferent and/or disrespectful, the differences in exam grades and ratings should come as no surprise.

Even if you genuinely respect and care about your students, you can unintentionally give them the opposite sense. Here are several ways to do it: (1) Make sarcastic remarks in class about their skills, intelligence, and work ethics; (2) disparage their questions or their responses to your questions; (3) give the impression that you are in front of them because it's your job, not because you like the subject and enjoy teaching it; (4) frequently come to class unprepared, run overtime, and cancel classes; (5) don't show up for office hours, or show up but act annoyed when students come in with questions. If you've slipped into any of those practices, try to drop them. If you give students a sense that you don't respect them, the class will probably be a bad experience for everyone no matter what else you do, while if you clearly convey respect and caring, it will cover a multitude of pedagogical sins you might commit.

The article also gives references for further reading, most of which can be found online:

  1. R.M. Felder and R. Brent, "Learning by Doing," Chem. Engr. Education, 37(4), 282-283 (2003), http://www.ncsu.edu/felder-public/Columns/Active.pdf.
  2. M. Prince, "Does Active Learning Work? A Review of the Research," J. Engr. Education, 93(3), 223-231 (2004), http://www.ncsu.edu/felder-public/Papers/Prince_AL.pdf.
  3. R.M. Felder and R. Brent, "Death by PowerPoint," Chem. Engr. Education, 39(1), 28-29 (2005), http://www.ncsu.edu/felder-public/Columns/PowerPoint.pdf.
  4. R.M. Felder and R. Brent, "Cooperative Learning," in P.A. Mabrouk, ed., Active Learning: Models from the Analytical Sciences, ACS Symposium Series 970, Chapter 4. Washington, DC: American Chemical Society, 2007, http://www.ncsu.edu/felder-public/Papers/CLChapter.pdf.
  5. CATME (Comprehensive Assessment of Team Member Effectiveness), http://www.catme.org.
  6. M.J. Prince and R.M. Felder, "Inductive Teaching and Learning Methods: Definitions, Comparisons, and Research Bases," J. Engr. Education, 95(2), 123-138 (2006), http://www.ncsu.edu/felder-public/Papers/InductiveTeaching.pdf.
  7. R.M. Felder, "Sermons for Grumpy Campers," Chem. Engr. Education, 41(3), 183-184 (2007), http://www.ncsu.edu/felder-public/Columns/Sermons.pdf.
  8. P.A. Cohen, "College Grades and Adult Achievement: A Research Synthesis," Res. in Higher Ed., 20(3), 281-293 (1984); G.E. Samson, M.E. Graue, T. Weinstein, & H.J. Walberg, "Academic and Occupational Performance: A Quantitative Synthesis," Am. Educ. Res. Journal, 21(2), 311-321 (1984).
  9. E. Seymour & N.M. Hewitt, Talking about Leaving: Why Undergraduates Leave the Sciences, Boulder, CO: Westview Press, 1997.
  10. R.M. Felder, "Designing Tests to Maximize Learning," J. Prof. Issues in Engr. Education and Practice, 128(1), 1-3 (2002). http://www.ncsu.edu/felder-public/Papers/TestingTips.htm.
  11. R.M. Felder & R. Brent, "Objectively Speaking," Chem. Engr. Education, 31(3), 178-179 (1997), http://www.ncsu.edu/felder-public/Columns/Objectives.html.

All in all, this newsletter is a great resource for teachers.



Shin and Cimasko, in an article in the latest issue of Computers and Composition, analyzed multimodal web page arguments of ESL students in a first-year composition course.

Findings
Their findings include

  • All students placed a higher priority on the linguistic mode, that is, the written word.
  • Only one student used imagery in all of his drafts, although other students had incorporated images by the final draft.
  • Only one student included audio files (after the instructor recommended it).
  • Hyperlinks were used, but primarily in bibliographies rather than within the texts of the arguments themselves.
  • Although the written word predominated, non-linguistic additions "added new meanings ... as representations of emotional dimensions that could not be conveyed easily--or appropriately--in traditional academic discourse" (p. 388).
  • Layout, in terms of background design and font choice, was strongly influenced by the conventions of written essays, with only a few students attempting some variation in color.

Explanation of findings
Shin and Cimasko state that these findings are likely due to

  • students' prior experiences,
  • the writing practices of the students' communities,
  • the context in which they wrote their texts, and
  • students' perceptions of multimodal texts.

As Shin and Cimasko wrote,

Multimodal composition was interpreted as a distraction from the primary goal of developing academic capability through written language. The students thus opted for the traditional and established centrality of linguistic design, resisted new modalities, and applied those new modalities that were used in ways that did not take full advantage of their rhetorical potential.

Comments on article
These findings make sense. People do what they're accustomed to doing, in line with their expectations of how it should be done. And it's commonly understood that visuals can enhance understanding in ways the written word alone cannot. (However, see "Using videos" in Home Schooling and Videos.) And with the authors, I accept that learning the same rhetorical concepts in different modes can enhance understanding of those concepts, although the evidence for this position is not without qualification (see, for example, Multimodal Learning Through Media). Having said that, I question the need for multimodal writing in freshman composition that the authors propose.

The authors support multimodal writing because they believe,

multimodal approaches to composition provide writers who are having difficulty in using language, including those writers for whom English is a second language (ESL), with powerful tools for sharing knowledge and for self-expression. ... ESL students need to gain knowledge of how to use non-linguistic modes at the same time that they are developing their English writing abilities. (p. 377)

However, it's not clear to me that:

  • ESL students in first-year composition need to learn these tools, or that
  • First-year composition is a course that should include self-expression.

Some of the tools noted in the article include audio, video, and animation. Will students really need these tools in future course work or in future careers? Including instruction on new tools requires time. Should time be sacrificed to learning these modes instead of working on written genre conventions?

Whether or not first-year composition should include self-expression depends on the purpose of the course. Generally speaking—and despite one's personal position on its purpose—it's considered to be an introduction into academic writing, often academic argumentation. Without entering into the debate on voice and identity, let me just say that my ESL students, mostly Generation 1.5 students, have little problem with self-expression. What they do find difficult is writing in an academic register. In such a context, self-expression is not a priority.

In addition, it's not clear that first-year composition is the best place for students to learn how to use visual modes, especially with respect to self-expression. One reason given for including them is that some researchers argue such knowledge is necessary for "developing certain kinds of disciplinary knowledge" (p. 377). I managed to get three of the sources cited, but the support was not strong.

One researcher cited was van Leeuwen, who wrote on three principles of multimodality: information value, salience, and framing. However, he did not argue that they were "necessary" for developing disciplinary knowledge. Still, I can imagine that if these three principles are universal across modes, it would be useful to know them.

A second researcher cited, Ann Johns, wrote on how a single student was adept at using graphs and charts to understand her macroeconomics work. Undoubtedly, graphs and charts are a part of academic writing. However, these sorts of visuals are not the type used for "self-expression."

Along these lines, another scholar cited, Miller, wrote,

In short, visuals in academic articles provide data to convince the reader of the validity of the findings and allow the readers to see how the data were obtained and to interpret the data themselves. These visuals are impregnated with theory (Bazerman, 1988) to show not only that they are anchored in the literature but that they have wider implications.

In journalism however, the writer is interested in presenting news rather than in convincing the reader of the validity of the report. In news articles, findings are highlighted, but the means by which the findings were obtained are placed in the background, just the opposite of in science. The reader is not positioned as knowledgeable but as needing to be enticed into the article. The launching point, therefore, is human interest rather than scientific argumentation. (p. 31)

In other words, the visuals used in academic writing are related to data rather than to self-expression. Interestingly, Shin and Cimasko wrote of "emotional" representation, something more akin to the journalistic perspective of "entic[ing]" readers rather than the academic perspective of "convincing" and supporting an argument, the goal of this freshman composition course.

As noted above, generally speaking, I believe that using different ways of presenting the same information can be a valuable pedagogical tool for explaining concepts of rhetoric and composition. Thus, I take a little time to cover presentation principles, including the need for images, and have my students write essays analyzing visual objects, such as advertisements and website designs, to provide a variety of contexts for the same concepts, thus facilitating, I hope, transfer of their writing knowledge. Even so, I hesitate at "fully integrating [multimodal composing] into the work" (p. 391) of first-year composition, especially of the self-expressive sort, thus taking away time from other principles of composition necessary for the development of my students' "academic" writing.

I hesitate for two connected reasons. One is that most of the "composing" that most of these students will do in later classes and on the job, at least in the near future, will be print-based (although see Alex Reid for an opposing opinion). Yes, they may use data-related visuals later on, but most of the writing in freshman composition is not data driven.

The second is that one learns what one practices, and one learns to the extent that one practices. My students need as much time as possible with the English language, with developing their vocabulary, with learning academic textual conventions. Any time that takes away from that practice is to their detriment academically and careerwise. Think about it. Can you imagine a multi-ball training regime in which a basketball player spends time playing tennis, soccer, volleyball, and handball?

A few resources:
Survey of Multimodal Pedagogies in Writing Programs (Composition Studies)
Taking a Traditional Composition Program "Multimodal" (Christine Tulley)
Multimodal Writing (Teaching Writing Using Blogs, Wikis ...)
Standards Related to Digital Writing (Teaching Writing Using Blogs, Wikis ...)
Thinking about Multimodal Assessment (Digital Writing, Digital Teaching)
Center for Digital Storytelling

Works cited:
Johns, A. M. (1998). The visual and the verbal: A case study in macroeconomics. English for Specific Purposes, 17, 183-197.
Miller, T. (1998). Visual persuasion: A comparison of visuals in academic texts and the popular press. English for Specific Purposes, 17, 29-46.
Shin, D.-S. & Cimasko, T. (2008). Multimodal composition in a college ESL class: New tools, new traditional norms. Computers and Composition, 25, 376-395.
van Leeuwen, T. (2003). A multimodal perspective on composition. In T. Ensink & C. Sauer (Eds.), Framing and perspectivising in discourse (pp. 23-61). Philadelphia: John Benjamins.



The journal Science has an interesting article, Computers as Writing Instructors, which stirred up a conversation on the WPA listserv. Some of the concern relates to what Richard Haswell, a professor emeritus of English at Texas A&M University, Corpus Christi, stated in the article:

One peril, says Haswell, who has studied both traditional and electronic measures of writing, is that the programs pick up quantifiable indicators of good writing--average sentence length, for instance--yet ignore qualities such as whether an essay is factually accurate, clear, or concise, or whether it includes an element of wit or cleverness. "Those are all qualities that can't be measured by computer," he says.

When I read such statements, I wonder if supervisors worry about architects using computers to create and modify designs because computers can't measure the aesthetic qualities of the design. The computer is a tool. Of course, any tool can be abused. And if all teachers did was use the program to assess student writing, never offering their own feedback, that would be a problem. Still, no one seems to worry about architects using computers.

Flow
One thing I see as good about such tools, if they work (which is a requirement, of course), is that they incorporate conditions of flow, a state of intrinsic motivation, such as:

  1. immediate feedback
  2. clear goals
  3. focused attention
  4. tasks that challenge (without frustrating) one's skills

Motivation is crucial in engaging students to spend time on their writing, to work at improving it. (For more on motivation and flow, see Engagement and Flow.)

Immediate Feedback
Although learning and instruction may meet conditions 2-4, immediate feedback is seldom given in composition classes. In one semester, students might write from three to six essays, depending on the instructor, which means that feedback on essays comes every two to three weeks. In addition, peer-review feedback generally arrives hours after the last draft was written, unless a student pulled an all-nighter for an 8:00 am class, in which case most of the feedback will be seen through a haze. Instructors' feedback usually comes days later, after they have read through the entire set.

The importance of immediate feedback with cognitive tutors has been demonstrated in teaching LISP, algebra, and geometry. In their abstract, Anderson et al. write,

Early evaluations of these tutors usually but not always showed significant achievement gains. Best case evaluations showed that students could achieve at least the same level of proficiency as conventional instruction in one-third of the time.

Those "best case evaluations" are in the lab where there are no distractions, but even in real classrooms, Anderson and Schunn (pdf) have found achievement gains equal to one letter grade. Learning is directly due to time on task, that is, practice. (Of course, practicing the wrong tasks leads to mislearning.) Thus, providing immediate feedback helps to eliminate wasted time in trying to figure out how to do something, which in turn, decreases the time required to learn a particular activity.

Now, writing is fuzzier than math. Math usually has a correct answer, while writing doesn't. But perhaps by limiting one's focus to particular aspects of writing, such as coherence, cognitive tutors like WriteToLearn may help students develop their writing.

Interaction
Alex Reid, however, questions interacting with computers instead of with other students:

The Science article explains that these computer programs are necessary because teachers cannot read and respond to as much student writing as the students should be doing; so the machine reads them instead. Hmmm.... what other possibilities could there be I wonder?  .... Maybe the other students? Maybe they could be reading each other's work? Maybe they could even actually be writing to one another? Maybe they could be using these networks to write to other students around the world? Maybe they could be composing texts that were addressed to other humans rather than to machines and which might actually have some real meaning and value?

I think that interaction with others is important for learning, too, but that does not necessitate an either-or dichotomy of interacting with students and others versus interacting with computers. In fact, using a computer doesn't necessarily mean that students are not interacting with others. Anderson et al. wrote,

When students are in the laboratory, they are working one-on-one with the machines, but that hardly means they are working in isolation. There is a constant banter of conversation going on in the classroom in which different students compare their progress and help one another. ... An effective teacher is quite active in such a classroom, circulating about the class and providing help to students who cannot get the help they need from either the tutor or their peers. (p. 200)

In addition, according to Anderson and Schunn, it would seem useful for students to have such a program at home when they are alone, because of "difficulties of [self-]generation and dangers of misgeneration." In other words, when it comes to learning specific aspects of writing, much time can be wasted in writing to others, and much can be mislearned.

Meaning and Value
As noted above, Reid's thrust is on the "meaning and value" of student writing. However, meaning and value shouldn't be limited to writing to people. It's interesting that just as we don't question architects using computers to aid in creating aesthetically pleasing buildings, neither do we question coaches who have their players practice drills over and over and over to perfect their skills. No one says, These drills don't have meaning. And no one asks, Why don't you just let them play games that have meaning instead of mindless drills? No one does because it's understood that honing one's skills is valuable for playing the game well. And skills like coherence are crucial to writing well.

Meaning and value are relative. What meaning and value do videogames have? Isn't it primarily just pleasure, part of which derives from improving one's skills? And for that pleasure, people, especially youngsters, can play for hours on end, as can athletes. Supposedly, ex-NBA star Larry Bird felt shooting "200 free throws before school, every day" had meaning and value. From the article, Jenkins' students apparently found the writing tutor meaningful and valuable, as indicated by their improvement in writing:

Jenkins suspects that English language learners (ELL)—educationese for children who speak another language at home—may be among those who can benefit the most from using writing-instruction software. Last year, 92% of his ELL students passed the writing portion of the state assessment test, he says, compared with 31% of his ELL students before he started using the software. That percentage is also well above the statewide ELL rate of 58%.

That's a tremendous difference. Of course, there is a danger of limiting writing to what a standardized test can measure, and of dumbing down instruction, which is well-documented in George Hillocks' book The Testing Trap: How State Writing Assessments Control Learning.

Having said that, although writing to others with a purpose, just like practice, can help to improve one's writing, such an approach has its limits. Moving beyond those limits requires studied practice (see The Expert Mind in Scientific American). And if some computer program can help in that regard, great!

Motivation
As noted above, cognitive tutors, if designed appropriately, can motivate students to spend more time on task, which is the most important factor in learning. Anderson et al. wrote,

Students' own attitudes to the tutor classrooms are quite positive, to the point of creating minor discipline problems. Students skip other classes to do extra work on the tutor, refuse to leave the class when the period is over, and come in early.

How often does that happen in our classes? Students coming in early, not wanting to leave at the period's end, and preferring to do our homework instead of others'?

In their conclusion, Anderson et al. mention an anecdote:

The student, frustrated by restrictive access to the LISP tutor, deliberately induced a 2-day suspension by swearing at a teacher. He used those 2 days to dial into the school computer from his home and complete the lesson material on the LISP tutor. (p. 204)

And the Science article says that Jenkins found similar results with his students:

Maria had more confidence in her writing abilities--and passed the writing portion of the state assessment test. "It's not a cure-all, but what a difference it's made in what the kids have shown they can do," says Jenkins, who began using the software last year.

As Anderson et al. assert, "learning achievement is a very empowering experience," and one that has "meaning and value" to the students.

Values
So, why wouldn't compositionists applaud the use of computers as tutors? Asao Inoue, in his review of the book Machine Scoring of Student Essays: Truth and Consequences stated,

More importantly, most in the present collection do not acknowledge or address (accept [sic] arguably Haswell, Anson, and Broad) a core premise of the book, that what is at issue is a paradox of technology. We already use and need technologies of assessment, yet we are fighting against certain kinds of technologies because they take us in different directions, shape our practices, assumptions, student arrangements, and working conditions in ways we do not value enough to pursue.

This particular technology is dismissed too quickly, not because it may not work, but because present practices and assumptions have attained canonical status rather than being critically re-examined. Of course, we shouldn't uncritically accept new technology, either. But if it meets my values of motivating students to work on their writing and actually helps to improve their writing, then I'm interested in learning more about it.



Two recent articles assert that traditional methods of certifying and selecting teachers do not work well and that alternative methods may help.

The 'Certified' Teacher Myth

Like all unions, teachers unions have a vested interest in restricting the labor supply to reduce job competition. Traditional state certification rules help to limit the supply of "certified" teachers. But a new study suggests that such requirements also hinder student learning.

Harvard researchers Paul Peterson and Daniel Nadler compared states that have genuine alternative certification with those that have it in name only. And they found that between 2003 and 2007 students in states with a real alternative pathway to teaching gained more on the National Assessment of Educational Progress (a federal standardized test) than did students in other states.

The authors conclude that strict certification standards hinder teaching competence, although it's not clear to me why they would have that effect. Yet, as mentioned in What Works in Teaching, one study found that TFA teachers outperformed experienced, certified teachers. And another recent study found a somewhat similar result: alternative route teachers who took an intensive course on teaching outperformed experienced, traditionally certified teachers in some subjects (not all), with math again showing the greatest differences.

The authors state that those states with genuine alternative certification have more minorities teaching, and they assert that minority students benefit from having minority teachers. I'm guessing again, but what would make sense to me is that alternative route teachers have subject matter experience that enhances their instruction. Even so, the results of these intensive courses call into question the methods schools of education presently use to prepare teachers for the classroom.


Most Likely to Succeed: How do we hire when we can't tell who's right for the job?

This article (via Stephen Downes) compares selecting future teachers to predicting who will become a star quarterback in the NFL. With respect to the NFL, prediction has more failures than successes. Yet teams have easily identifiable criteria for selection, years of statistics from the player's high school and college careers, and videos of his performance over time on the field. In contrast, for future teachers, the criteria are more vague, and there are no years of statistics or videos of performance over time. However, even if there were such evidence, we still wouldn't be able to predict who would be a good teacher any better than the NFL can pick a quarterback:

The problem with picking quarterbacks is that Chase Daniel’s performance can’t be predicted. The job he’s being groomed for is so particular and specialized that there is no way to know who will succeed at it and who won’t. In fact, Berri and Simmons found no connection between where a quarterback was taken in the draft—that is, how highly he was rated on the basis of his college performance—and how well he played in the pros.

Of course, the difference between good teachers and not-so-good teachers has implications for what students learn:

Eric Hanushek, an economist at Stanford, estimates that the students of a very bad teacher will learn, on average, half a year’s worth of material in one school year. The students in the class of a very good teacher will learn a year and a half’s worth of material. That difference amounts to a year’s worth of learning in a single year. Teacher effects dwarf school effects: your child is actually better off in a “bad” school with an excellent teacher than in an excellent school with a bad teacher. Teacher effects are also much stronger than class-size effects. You’d have to cut the average class almost in half to get the same boost that you’d get if you switched from an average teacher to a teacher in the eighty-fifth percentile. And remember that a good teacher costs as much as an average one, whereas halving class size would require that you build twice as many classrooms and hire twice as many teachers.

And certification and degree level don't make a difference in teaching quality, either:

A group of researchers—Thomas J. Kane, an economist at Harvard’s school of education; Douglas Staiger, an economist at Dartmouth; and Robert Gordon, a policy analyst at the Center for American Progress—have investigated whether it helps to have a teacher who has earned a teaching certification or a master’s degree. Both are expensive, time-consuming credentials that almost every district expects teachers to acquire; neither makes a difference in the classroom. Test scores, graduate degrees, and certifications—as much as they appear related to teaching prowess—turn out to be about as useful in predicting success as having a quarterback throw footballs into a bunch of garbage cans.

That graduate degrees have no effect on teaching ability seems to call into question an earlier post stating that a thorough knowledge of subject matter was one characteristic of outstanding college teachers. But not necessarily. We would need to see what sorts of graduate degrees are being considered: whether there is a difference between a master's degree in education and one in the subject matter, and also how well one did in the graduate-level subject matter courses.

From another perspective, I'm reminded of my first year teaching English in Istanbul to students admitted into Marmara University, an English-medium institution. Before they took courses in their majors, they had to take an intense, six-hour-a-day course for eight months to learn English. Truly learning a language in that time isn't really possible, but that's what the students had to do. Anyway, I had just finished my master's in Teaching English as a Second Language (ESL) (plus I was certified in science and biology at the secondary level), in which I was introduced to a variety of theoretical courses, including a few that covered methods of teaching ESL. But no practice. I found myself flying by the seat of my pants, using very little of my graduate education. Apparently, education separated from contextualized practice is of little help, and soon forgotten.

Actually, it makes quite a bit of sense. Doctors have four years of medical education and then at least three years of intense internship under the supervision of experienced doctors, and specialists considerably more. Would anyone really want to undergo an operation by a doctor who knew the book procedures backwards and forwards but had no experience in surgery? Engineers, after graduating, go into a workplace surrounded by more practiced engineers and learn through a combination of doing, observing, collaborating, and being supervised. And so on for other disciplines. But teachers, whose education includes only a semester or two of internship, go into the classroom alone, generally not observing other teachers or team teaching, and receiving limited supervision.

Learning follows a power-law relationship:
Anderson and Schunn in their article "The implications of the ACT-R learning theory: No magic bullets" (pdf) state that there are three learning processes governed by power laws:

1. Power Law of Learning. As a particular skill is practiced there is a gradual and systematic improvement in performance which corresponds to a power law. ...

2. Power Law of Forgetting. As time passes performance degrades, also according to a power function. ...

3. Multiplicative Effect of Practice and Retention. Most important, the Base-Level Equation implies a relationship between the combined variables of amount of practice and duration over which the information must be maintained. ...

This implies performance continuously improves with practice ... and continuously degrades with retention interval .... Most significantly the two factors multiply which means that increasing practice is a way to preserve the knowledge from the ravages of time.
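The Base-Level Equation the quote refers to can be sketched as follows. This is an illustrative implementation of the standard ACT-R base-level activation formula, B = ln(Σ t_j^-d), where each t_j is the time since a practice event and d is the decay rate; the practice schedules, variable names, and time units are my own invention, not from the article, and d = 0.5 is simply the conventional default:

```python
import math

def base_level_activation(practice_times, now, d=0.5):
    """ACT-R base-level activation: B = ln(sum of t_j ** -d),
    where t_j is the time elapsed since practice event j and
    d is the decay rate (0.5 is the conventional default)."""
    return math.log(sum((now - t) ** -d for t in practice_times))

# Power Law of Learning: more practice events raise activation.
few = [0, 10, 20]                       # 3 practice events (arbitrary units)
many = [0, 5, 10, 15, 20, 25, 30, 35]   # 8 practice events
assert base_level_activation(many, now=40) > base_level_activation(few, now=40)

# Power Law of Forgetting: a longer retention interval lowers activation.
assert base_level_activation(few, now=200) < base_level_activation(few, now=40)
```

The multiplicative effect falls out of the same formula: each added practice contributes a term that itself decays over time, so extra practice directly offsets the losses from a longer retention interval.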

Naturally, learning and practice need to be on target, as Albert Ip comments:

My daughter's swimming coach puts it very well: "Practice makes your stroke permanent. If you practise bad technique, you just become a more efficient bad swimmer with the bad stroke. It is even more difficult to unlearn the bad strokes."

With that caveat in mind, it's obvious that doctors and engineers follow up their book education with considerable practice in the presence of others, observing others, and receiving feedback from supervisors who see their work on a frequent basis. Other factors being equal, their environment supports learning, practice, and retention. Teachers, on the other hand, generally work alone in an environment that doesn't support collaboration, frequent feedback, or observation of others. Even if their education courses were terrific, the Power Law of Forgetting ensures that the content of all but the most recent ones is likely to be forgotten. It certainly was in my case in Turkey. And what if they don't forget, are they implementing it correctly? Or practicing "bad technique"? Without targeted feedback, they may simply become "more efficient bad" teachers.

As opposed to credentials, the most important element in good teaching, according to this article, was feedback:

Of all the teacher elements analyzed by the Virginia group, feedback—a direct, personal response by a teacher to a specific statement by a student—seems to be most closely linked to academic success. ... [Not simply] "Yes-no feedback ... which provides almost no information to the kid in terms of learning."

In quite a few ways, the necessity of feedback, especially immediate feedback, makes sense (although see Harold Jarche's post noting the importance of the when and how of feedback). It's necessary for flow to take place, and it's an important part of developing procedural knowledge (according to ACT-R Theory). However, the ability to give appropriate and immediate feedback in the classroom cannot be measured before one begins to teach—thus, the problem in ascertaining who will be "good" teachers on the basis of credentials. Perhaps what is needed is ongoing professional development that focuses on giving feedback. As Downes comments,

there seems to be nothing that prevents us from either teaching these strategies to new teachers, or evaluating them in teachers put up for tenure.

Perhaps instead of taking two years of education courses, students might replace them with

  • one more year of subject matter courses,
  • a one-year internship in a work environment appropriate to their major, and
  • an intensive summer course right before teaching.

Once teaching, they would receive

  • a year of close mentoring with respect to feedback and other elements in that course, thus contextualizing their education and not letting it be forgotten,
  • professional development that includes ongoing feedback and collaboration throughout the school year, and
  • professional internships in their discipline, either during the summer or perhaps a semester-long internship every four or so years.

Of course, I'm just speculating. But the fact that alternative route teachers can outperform experienced traditional route teachers, especially in math and the sciences, indicates that, at the least, we need to understand

  • why alternative route teachers who undergo these particular training programs are outperforming experienced teachers in some fields and
  • how traditional teacher training can be improved.

Somewhat related posts:
Just-in-time Learning
Engagement and Flow
Learning with Examples



iTunes University continues to grow. According to Apple's website, it has

over 75,000 educational audio and video files from top universities, museums and public media organizations from around the world.

Its latest addition is Edutopia: What Works in Public Education sponsored by the George Lucas Educational Foundation with podcasts ranging from Technology Integration to Assessment to Project Learning and more.

It also has a variety of language-learning podcasts, a few of which are Greek, Hebrew, Chinese, Japanese, ... , and, of course, English.

And it has various podcasts on writing.

Related post:
The Web: The Future of Learning



The 42nd Annual TESOL Conference (2008) is coming up soon, April 2-5, in New York City. Thursday afternoon, I'll be presenting along with three others on assessing writing. If you're coming to the conference and are interested in assessing writing, here's a breakdown of what we'll be talking about.

Self-assessment
I'll be looking at how to help students in higher education learn to evaluate their writing, reflect on their writing, and take appropriate measures to improve their writing by

  • embedding assessment in the course objectives,
  • providing transparency in evaluative criteria, and
  • considering both product and process.

Basically, having students use the instructor's criteria for assessment gets them thinking in those terms, helps them see course expectations more clearly, and hopefully gives them an understanding of assessment they can take with them after leaving our classrooms.

Multi-trait rubrics
John Liang will review a multi-trait rubric that assesses basic academic writing skills of incoming international graduate students in an MA TESOL program. Based on previous years’ assessment results, the rubric focuses on select component skills of academic writing (ability to comprehend the prompt, development of the argument, organization, grammar skills) instead of overall academic writing proficiency.

Techniques of assessment
Tim Grove provides a survey of techniques used to assess writing, including methods that minimize grading time, while remaining valid and reliable. He will examine rubrics, general comment sheets, error counting, error classification, personalized grading plans, Grade Negotiation, and even Rapaport’s “Triage Theory of Grading.” 

Online and holistic assessment
Tim Collins will review strengths and weaknesses of online and holistic assessment of writing, now frequently used on high-stakes assessments, and provide ideas on how instructors can prepare learners for success on these assessments.

In all of these, we make certain assumptions. Assessment

  • should reflect objectives,
  • be transparent to students,
  • be fair and effective,
  • provide feedback to students and teachers, and
  • enable learners to self-assess and take responsibility for their learning.



Yesterday, I attended the Spilman Symposium on Issues in Teaching Writing at Virginia Military Institute. This year there were three keynote speakers: Leila Christenbury, Edward White, and John Schilb. I'll summarize their talks one a week.

The first speaker was Leila Christenbury, Professor of English Education at Virginia Commonwealth University. Her topic was "Conflict and contradictions: the perspective of high school teachers on college level writing."

She said that although elementary and middle schools had changed in the U.S., high school "remains one of the most unchanged structures and institutions in American society" with unchanged curricula. The canon of literature remains stable: Romeo and Juliet, Huckleberry Finn, The Most Dangerous Game, a strong resistance to digital literacy ... The content of the high school literature curriculum is very traditional.

Although attempts at change have been made, most of them "have foundered or never achieved a foothold."

Despite these failures, high schools are moving closer to a college model (with debate on this move) in three different ways:

  • AP courses
  • Dual enrollment, in which high school courses receive college credit at the same time.
  • Shortening the four traditional high school years into three, eliminating the 12th grade.

Interestingly, NCTE was founded by teachers in 1911 due to the unfair influence of colleges, especially Harvard, on high school curricula.

Compared to the teaching of literature, writing has been an exception. The notion of writing as a process, as opposed to only product, has entered the classroom due to the influence of the National Writing Project. In summer institutes, teachers write, learning that there is a disconnect between what they do as writers and what they tell their students. Christenbury added, however, that one problem with the writing process in high school is that it has become fossilized into a lockstep, hierarchical, immutable process instead of being recursive and fluid.

In her own work, she has found that high school teachers believe incorrectly that college writing

  1. centers on the research paper
  2. doesn't allow personal pronouns
  3. concentrates on usage errors (a handful of usage errors will fail a paper)

On #1, at Kean, there is a second-year course that focuses on the research paper, and a senior capstone course that I believe includes the research paper. It's not likely that other courses focus on a research paper, although they may include one.

On #2, I haven't surveyed the professors here, but it's been my impression that allowing personal pronouns is more of an English Department phenomenon, perhaps crossing over to similar disciplines like communications, but it would be unusual for the sciences and business to allow personal pronouns. Not that it doesn't happen. Consider Watson and Crick's seminal paper (pdf) on the structure of DNA.

Christenbury added that writing today, however, is more difficult for high school teachers because of

  1. prompt-driven writing samples (for 8th and 11th graders in VA)
  2. the 2003 College Board report, which called writing at the high school level the neglected 'R.' The report recommended that the amount of time devoted to writing double and that teachers step up their game. It also called for assessing writing via a 25-minute SAT writing sample.

Such writing tests are high stakes, single prompt, and short time framed, making it difficult to use a full writing process. So high school teachers face contradictions: preparing students for college-level writing and also for these standardized tests.

She also conducted research with 23 high school English teachers in the Richmond, Virginia area. These teachers were almost all honors English teachers: 12 had 15 or more years of teaching experience and 4 had 7-10 years; 17 had master's degrees and one a JD; a quarter were members of NCTE or other professional groups; half had completed National Writing Project summer institutes; and 1 was Board-certified.

Here are some of their thoughts on college-level writing: It

  • involves more technical writing
  • should move students to think more about their writing and work their way out of the box
  • is more intentional and exhibits clear prose
  • shows insight and synthesis
  • has no chance for revision
  • has more chances for help and services
  • develops ideas and elaboration
  • etc.

As Christenbury noted, high school teachers have an inaccurate perception of college writing, somewhat inflating what is actually done at the university level and sometimes getting it simply wrong (e.g., believing there is no chance for revision).

With respect to high school writing, they listed the following characteristics:

  • correctness
  • used for end of course tests
  • retelling facts
  • summarizes
  • shows organization
  • exhibits surface-level correctness
  • based on personal experience, not fact

Obviously, as Christenbury stated, what high school teachers can do is negatively affected by timed graded essays. (George Hillocks' book, The Testing Trap, is thorough in showing how standardized essay tests have deteriorated writing instruction in U.S. public schools.) Their capacity is also eroded by the number of students they have. In the discussion that followed her talk, one high school teacher said he had 100 students, and another, a former high school teacher, said she had had 147 at one time. It's easy to imagine that the combination of (1) many students, (2) teaching literature in addition to writing, and (3) needing to focus on state and national timed essays doesn't leave much time for providing the feedback that helps students develop their writing.

This is a problem that won't fade away. Technology via connecting students online can help, but it's not a panacea. That is, having students write online and interact with classmates and others online can provide the feedback and critiquing that leads to better writing. However, students also have a problem of time: Developing writing takes time, and writing is not the only item they must focus on. For writing to develop, it should be part and parcel of the majority of their courses--not only in high school but also in college. Easier wished for than done.



Here are the links to my series of posts on Turnitin, plus a list of plagiarism resources and two others related to reasoning:

Related readings on Turnitin, plagiarism, and intellectual property:

Update:

  • "Before Models Can Turn Around, Knockoffs Fly"

A debate is raging in the American fashion industry over such designs. Copying, which has always existed in fashion, has become so pervasive in the Internet era it is now the No. 1 priority of the Council of Fashion Designers of America, which is lobbying Congress to extend copyright protection to clothing.



I just listened to an interesting session at Computers and Writing 2007 on the role of feedback and assessment in first-year composition. Fred Kemp, Ron Baltasor, Christy Desmet, and Mike Palmquist talked about how they used online learning environments as sites for assessing learning and teaching.

Fred Kemp talked about Texas Tech University's ICON system in which

  • class time is cut in half,
  • assignments are doubled or tripled,
  • all relevant interactions are online,
  • students meet in a classroom once a week to support those interactions, and
  • grading and commentary are anonymous with two readers on drafts.

This particular system helps to make the composition program an adaptive, feedback-driven system that gains knowledge over time and is not dependent on rotating faculty and program directors. The data collection built into the system has shown that some assignments generate better grades than others, thus indicating where to make changes in the program. For instance, pulling back from intensive peer reviews (12-13 a semester) coincided with a decrease in students' GPA, suggesting that their writing had worsened. Next year, they're reinstating the peer reviews, and if the GPA increases, that will be strong evidence for the effect of intensive peer reviews on learning to write.

Mike Palmquist talked about Colorado State University's Writing Studio, a combination instructional writing environment and online course management system. As with ICON, the system collects data on how people are using the site by tracking their activity as they log in, which can guide the program on which areas need to be strengthened and which do not. One question to be answered is, "How does technology shape the teaching and learning in writing courses?"

Ron Baltasor and Christy Desmet talked about the University of Georgia's emma system that embeds meta-data via markup in documents that are uploaded to the server. They have three ongoing projects that look at errors, revision, and citations. One finding from the citation project was that good library instruction works best in conjunction with instructor prompts for citations, but that library instruction alone showed no improvement.

Although the three universities have different approaches, they all show the value of electronic systems that can provide feedback to programs for improving instruction and composition programs.



Shari Wilson ("Ignorance of the Ignorant", Inside Higher Ed) writes about students' incompetence in judging their performance level:

My undergraduate students can’t accurately predict their academic performance or skill levels. Earlier in the semester, a writing assignment on study styles revealed that 14 percent of my undergraduate English composition students considered themselves “overachievers.” Not one of those students was receiving an A in my course by midterm. Fifty percent were receiving a C, another third was receiving B’s and the remainder had earned failing grades by midterm. One student wrote, “overachievers like myself began a long time ago.” She received a 70 percent on her first paper and a low C at midterm.

...

Dozens of colleagues have told me that their undergraduates simply do not have the tools to criticize and evaluate their own work—much less predict how well they will do on assignments. What’s behind this great drop in ability to assess performance?

What are the causes? According to Wilson, they are many, including low high school standards, helicopter parenting, multi-tasking with email and the internet while studying, and so on. Note that higher ed assumes that (1) the purpose of public schools is to prepare students for college, (2) none of this is higher ed's fault, and (3) the students today aren't as good as those yesterday.

Such simple simplifying seems less than satisfying in understanding a phenomenon impacted by a variety of influences. One influence not mentioned is, I believe, the greater expectations of professors and universities over time. Biology courses, for instance, continually increase the amount of information to be learned in the same amount of time. (Just compare textbooks between today and 30 years ago for the same course.) At the University of Texas at Austin, the biology department finally woke up (in the 90s?) and changed a 3-credit microbiology course to three 2-credit courses, doubling the amount of time needed for the "same" material. The 2-credit course I took was still jam packed with information.

Being embedded in the system, professors are often unaware that they are requiring more than was required of them as undergraduates, because the changes accumulate slowly over the years. It's also likely that they have reconstructed their memories of their undergraduate days in line with their own academic-cultural expectations, in a manner similar to Bartlett's experiment in which British citizens recounted the Native American story "The War of the Ghosts":

Bartlett's readers (typically unconsciously) made the story more orderly and coherent within their own cultural framework.

I don't doubt that there are differences in student populations. My students are surprised that I expect 2-3 hours of outside study for every one hour of class time. Wilson doesn't simply bemoan the situation, however. She lays out ways to improve our instruction:

As an instructor of undergraduate core classes, however, I realize that my responsibility does not stop at content. I cannot simply list assessment as a course objective and then feign ignorance when my students show me again and again that they cannot predict their own performance. Strategies — not only for instruction, but also for exercises and assessment — are integral in setting my students on the right path for the remainder of their college careers. To accomplish this, I realize that I will need to work much, much harder to help my undergraduates understand assignments and expectations, rubrics and assessments, in-class grades and the prediction of success.

Some is already in place. Like many English composition instructors, I do instill a peer-editing component to my writing courses — not only to help students view writing as a process — but to give them some tools and much-needed experience in evaluating student work. I provide instruction in how to apply rubrics to student work and often use past student work as “models.” Some students are glad for the transparency of my courses; with a detailed 16-week course outline given out at the first class, they can start relating course objectives to specific assignments throughout the semester. Lessons scaffold one on another; assessment follows thorough instruction. Still, there is much to be done. It’s clear that I need to develop more tools to help my students learn to assess their own work and predict academic performance more accurately.

Along with the interaction of peer editing, having much of their work online can aid students in seeing, comparing, and contrasting their own work with others'. In the past, I had my students use Blogger.com. This semester, I moved to Bloglines. Posting and reading posts in one place makes it easier for them to become aware of how well they are doing. In addition, they now have access to all comments made on others' posts, unlike with Blogger, so the amount of reading interaction has increased compared to previous semesters. One key to accurate self-assessment is being exposed to what one's peers are doing, an exposure facilitated by blogging.



As Andrew Goldenkranz, Principal of Pacific Collegiate School in Santa Cruz, California, says in an interview,

NCLB has been damaging in practice, even though I think it was not a bad idea in principle.

Goldenkranz should know: this year he's losing Jefferds Huyck, a teacher with a doctorate in classics from Harvard and 22 years of teaching experience, 16 of whose students won honors in a nationwide Latin exam (Freedman). Why? Because he doesn't have a teaching certificate.

Any idea without flexibility, like the NCLB in this case, can create more problems than it solves. And teaching without flexibility can stifle learning and ability, too. I remember while in the 7th grade, my math teacher forced me to show how I solved my long division problems. I had simply been doing the operations in my mind and writing down the answers. Not believing I could do divisions involving 3-figure divisors, she had me demonstrate. Although I did demonstrate several problems for her, she decided that it was more important to follow her rules. Within a few years, that ability had evaporated. So, what rules do we enforce that we could be more flexible about?



Jay Mathews (Washington Post) writes that "Confidence in math doesn't always equal success." Reporting on a study from the Brookings Institution, he writes,

countries such as the United States that embrace self-esteem, joy and real-world relevance in learning mathematics are lagging behind others that do not promote all that self-regard.

Mathews includes pro and con perspectives on this report. Of course, confidence detached from reality doesn't bode well for success in one's life. Some time ago, I remember reading about a study showing that, at least initially, competent people are less confident than incompetent folks that their ability to do something is better than the average in the room.

In foreign language (and other) education circles, we do our best to make the classroom a safe haven for students and try to relate classroom learning to their own lives. It's possible that some work harder at making everyone feel good than at learning. Even so, it's hard to see why having fun and making things relevant would reduce learning. The only factor I can think of is that countries that focus on lots of drills will do better on a test that reflects that type of learning. Those scores say little about whether students can employ those skills outside the classroom. As Mathews quotes educational psychologist Gerald Bracey as saying,

the report overlooked countervailing trends in Japan, Singapore and other countries that do better than the United States on eighth-grade math tests. Officials in those countries say their education systems are not yielding graduates who have the same level of creativity as American graduates. Some Asian nations have begun to copy aspects of U.S. education, including the emphasis on letting students search for answers rather than memorize them.

Still, it is important for our students to have an accurate sense of how well they are doing and how they can improve their abilities in various areas. Self-assessment and peer assessment, along with seeing their peers work, can help in this regard. For a portfolio system that includes these aspects, check out The Learning Record.



David Warlick at 2 Cents Worth (via Will Richardson, who comments on David's post) provides a starting point for assessing blog posts: two sets of five questions, one for the blogger and one for the reader. The blogger questions are:

- What did you read in order to write this blog entry?

- What do you think is important about your blog entry?

- What are both sides of your issue?

- What do you want your readers to know, believe, or do?

- What else do you need to say?

With just a little rephrasing, the reader questions become:

- What did the blogger read before writing?

- What was important about the blog entry?

- What were both sides of the issue?

- What do you know, believe, or want to do after reading the blog?

- What else needs to be said?

I like these questions because they provide feedback that helps students consider, as David says, "broader aspects of the issues being written about." And I especially like the one about reading. Too often, students expect to write only from their own experience, without reading, without understanding others' perspectives, without weaving those perspectives into their writing. However, I would change that question to:

- What are the different sides in this issue?

This rephrasing moves students from an "either-or," "us-them" mentality to a more nuanced picture fitting the complex reality of life.



I just came across two sites giving good advice on how to use (and not use) blogs in the classroom. James Farmer has two posts, one on how to use blogs and another on how not to use them. And Doug of Borderland comments on Farmer's posts.

On how not to use blogs in education, Farmer's main points (my summary of his summary of his paper "Blogs @ Anywhere: High fidelity online communication") are:

  • Don't use

    • blogs as "discussion boards, listservs or learning management systems"
    • group blogs
    • blogs for something they're not made for
  • And don't forget RSS

On how to use blogs in education, the main points are to use:

  • blogs "as key, task driven, elements of your course" (that is, provide structure and purpose)
  • assessment that promotes, or at least allows, personal pursuits and expression
  • blogs for what they are good for
  • blogging tools that work (Farmer covers 9 major multi-user blogging tools here.)

On not using group blogs or blogs as discussion boards, etc., at the university level, Barbara Ganley has a different perspective. See her BlogTalk paper, "Blogging as a Dynamic, Transformative Medium in an American Liberal Arts Classroom", in which she discusses her use of blogs in the classroom, including a class blog that ties together students' individual blogs, communication, and class management.

Doug supports Farmer's main points with his own experience, although noting that more centralized management systems are appropriate for younger learners. Along these lines, he notes the need for more conversation on using blogs in elementary schools, giving several examples, one of which is more teacher oversight at the lower levels:

Mainly, younger kids have a very different notion about private vs. public information. I know this is an issue for all students, but younger kids have a harder time recognizing personal boundaries. A kindergartener, for instance, would be far more likely to tell her classmates that her mother is in jail than would a 5th grader, for instance.

It makes sense to use technology for what it does well and also to take into consideration the age and background of the students. Not paying attention to this point may result in little impact on students' involvement or learning, as Farmer, based on his reading of others' use of blogs in education, asserts in his paper:

While the resulting feedback indicated a degree of satisfaction and no objection to the use of blogs, there was little to indicate any significant shift in student perceptions and activity in the learning environments. While it is beyond the scope of this examination to argue hard and fast rules, this could be attributed, along with other factors such as the nature of assessment, to the use of blogs as collaborative areas without the use of aggregation.

There are quite a few comments on Farmer's pages, indicating that the environment affects the implementation of Farmer's guidelines. With respect to foreign language learners, in particular, we need to be careful. Still, let me emphasize Farmer's point on keeping RSS in students' minds. As he says,

Ignore RSS at your peril: Probably the biggest mistake that adopters tend to make is to ignore RSS or just throw it a casting glance. The problem is that these people aren’t bloggers and just don’t understand. Without RSS blogs would pretty much just be extensions of geocities pages. Your learners are NEVER going to surf each others sites everyday and the majority of them won’t even go to that funky web-based aggregator you set-up.

RSS, or news, feeds are like subscribing to a newspaper or magazine: the content comes to you instead of you going to the corner store to buy a copy. Why use news feeds? Mainly (1) to save time and (2) to be exposed to a variety of opinions. On saving time: you and your students can subscribe to all of the class blogs and other blogs of interest so that instead of clicking through 10, 20, or more different sites, all new posts are aggregated at one's own site (and perhaps again at a single class site). On the second reason: you and your students can create search feeds for news groups and news (via Google News or Yahoo News) and for websites and blogs, keeping a current flow of information on topics related to class studies, projects, or personal interests. Participating in knowledge networks is crucial for students to develop an awareness of audience, competing values, and diverse perspectives, which, in turn, is essential for learning to write thoughtful and complex responses to and essays on an issue.
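Mechanically, a feed is just an XML document that an aggregator fetches on a schedule and parses for new items. As a minimal sketch of that parsing step (the sample feed and example.edu URLs below are made up for illustration), Python's standard library is enough to pull titles and links out of an RSS 2.0 document:

```python
import xml.etree.ElementTree as ET

def latest_posts(rss_xml, limit=5):
    """Return (title, link) pairs for the newest items in an RSS 2.0 feed.

    rss_xml is the raw XML of the feed; a real aggregator would fetch it
    periodically, e.g. with urllib.request.urlopen(feed_url).read().
    """
    root = ET.fromstring(rss_xml)
    posts = []
    # RSS 2.0 wraps each post in an <item> element under <channel>.
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        posts.append((title, link))
    return posts[:limit]

# A tiny hypothetical feed standing in for one student's class blog:
SAMPLE = """<rss version="2.0"><channel>
  <title>Class Blog</title>
  <item><title>Draft 1 posted</title><link>http://example.edu/p1</link></item>
  <item><title>Peer review notes</title><link>http://example.edu/p2</link></item>
</channel></rss>"""

for title, link in latest_posts(SAMPLE):
    print(title, "-", link)
```

An aggregator like Bloglines simply runs this kind of loop over every subscribed feed and merges the results into one reading list, which is why subscribers never need to visit each site individually.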

For more info on news feeds, see my brief introduction here. For an introduction to possibilities in higher education, go here, and for different RSS platforms, read "RSS readers: best of breed picks". And, again, read Farmer's article. As Farmer notes,

The development of knowledge through learning to self-publish and comment on postings that adhere to the protocols and norms of behaviour in the chosen communication network is expected to enhance the learners’ reflective, meta-cognitive and written skills as well as management of their learning.

In a nutshell, the combination of blog writing and news feeds helps connect students to one another and to others outside the classroom, creating networks of learning that promote reading, writing, and critical thinking.



Last September, Inside Higher Ed had an article on Kentucky outsourcing grading in its community college ("Outsourced Grading"). Now Lynn Thompson reports on its occurrence at high schools in Seattle ("School districts turn to paid readers for grading student essays", Seattle Times):

In the Northshore School District, some English teachers don't spend much time reading student papers.

In the Bellevue School District, some don't even grade the papers.

Both districts now rely on paid readers to evaluate and in some cases grade student essays in English classes; Seattle's Garfield High School is piloting such a program this year. The use of readers greatly reduces teacher workload and gives students more writing practice, but the trend raises questions about teachers' roles in inspiring and guiding students' work.

Although feedback can guide students' work, it's not clear how giving grades inspires or guides their work, unless one is considering them as reality checks. The real question, as noted by Stephen Miller, president of the Bellevue Education Association, is:

"All English teachers would agree that students become better writers by writing more. But is writing many essays more important than personal feedback from your teacher? We don't know the answer," he said.

But even this question assumes teacher feedback to be more personal than that of an outside reader. My questions would be, Is the feedback from the teacher significantly better than the outside reader's? Is feedback from the teacher on 1-2 essays more effective than an outside reader's feedback on 7 essays?

Much of the response against outsourcing reading and grading seems to be some sort of out-of-touch-with-reality smokescreen. All agree that high school teachers simply do not have the time to read, comment on, and grade more than one or two essays a semester when teaching five 30-student classes a day. Yet, Carol Jago, co-director of the California Reading and Literature Project, asserts,

What's lost is how teachers get to know their students through their writing. And students no longer know the audience they're writing for.

Most compositionists argue that one problem is that students always write for the teacher rather than a real audience. So, it's not clear how moving away from an undesired audience, the teacher, to an unknown one is much worse. As far as getting to know students through their writing, is it really possible through the apparent limit in high school of one or two essays? More importantly, how do teachers keep informed about their students' writing?

According to Lance Balla, a curriculum and technology coach for the Bellevue schools,

the district built into the program several checks to keep teachers informed about their students' work. The teachers develop a scoring guide for each assignment and read three out of every 30 essays. Readers and teachers consult after each set of papers is graded, and teachers are expected to use the readers' comments to look for common problems and if necessary, adjust their teaching.

I'm not sure how well this works, but I do like the idea of adjusting teaching according to outside feedback. When teachers are the only ones commenting, there is no potential dissonance to help move teachers to reconsider their approach to writing instruction. In addition, the extra time from not grading can perhaps be applied toward those students who need the most help.

From the Inside Higher Ed article, Douglas Hesse, board chair of the Conference on College Composition and Communication and professor of English at Illinois State University, argues against outsourcing grading, saying that

grading was not a function that should in any way be removed from the faculty members. The process of reading a paper and evaluating it, Hesse said, is crucial not only for assigning a grade, but for thinking about how to work with a given student, for evaluating whether certain assignments are achieving their goals, for revising lecture plans, and more.

Hesse's points make sense to me at the college level, although I imagine that not all professors take the extra time to work with students and re-evaluate their pedagogy. For those who don't, it just might be a waste of money to pay other readers and graders. For those who do, reading and grading would seem to be good channels of feedback. I'm not sure that we should simply assume, however, that this feedback should be considered sacrosanct. I've listened to experienced composition instructors who suggest ways of limiting the time for grading and commenting on papers to 15 minutes. I wonder how effective 15 minutes of feedback can be for the student, or for the instructor. Would doubling the amount of time significantly improve the effectiveness of the feedback? Perhaps not. Perhaps 15 minutes is enough to set students moving in the general direction of better writing.

I suppose I have more questions than answers on outsourcing grading. But as my previous posting on "Learning takes place in an ecology" implies, grading takes place in an ecology, and what was appropriate at one time may require re-inventing to remain relevant to students, teachers, educational institutions, and the communities in which they are embedded.



On Tuesday at the NJTESOL-NJBE Spring Conference, I presented on different pedagogical strategies for helping English language learners improve the grammar in their writing.

After I brought up the importance of hedging in academic writing, one participant stated that in high school, they taught students to take a position and argue for it strongly rather than allow for any uncertainty or for the possibility that other positions have some validity. I imagine that state testing requirements lead naturally to this style of writing. However, it creates problems for students when they enter the university. Although I'm not against testing or accountability, such a situation shows that standardized testing strongly influences pedagogy, and that its influence is not always desirable. As I mentioned in another post, "Let us make education in our image, says business",

present methods to measure accountability end up dumbing down instruction and damaging student learning, as shown clearly in George Hillocks' The Testing Trap: How State Writing Assessments Control Learning, and that disturbs me.



In the New York Times, Sam Dillon reports on the ponderings of a higher education commission in "Panel Considers Revamping College Aid and Accrediting". (To read the commission's reports, go here.) This is the same panel that has also considered introducing standardized testing into higher education. The panel is calling for more accountability in higher education and in the process is attempting to remake education in the image of business:

Charles Miller, a business executive who is the commission's chairman, wrote in a memorandum recently to the 18 other members that he saw a developing consensus over the need for more accountability in higher education.

"What is clearly lacking is a nationwide system for comparative performance purposes, using standard formats," Mr. Miller wrote, adding that student learning was a main component that should be measured.

Accountability is important, but the question is how to achieve it. Business doesn't have "a nationwide system for comparative performance." Of course, business has the pass/fail, or success/go-out-of-business model. Education doesn't have that "survival" accountability, although we're moving in that direction as state funding becomes less and less. Even so, would "accountability measures" be cost-effective? Just for a comparison, many businesses now are complaining about the Sarbanes-Oxley Act, which implements better internal controls over financial reporting, as being too expensive. Jill D'Aquila in her article "Tallying the costs of the Sarbanes-Oxley Act" writes:

The survey also reveals that total costs of first-year compliance with section 404 could exceed $4.6 million for each of the largest U.S. companies (companies with over $5 billion in revenues). Medium-sized and smaller companies will also incur significant additional costs to comply with section 404, the survey finding an average projected cost of almost $2 million. Interestingly, the projected costs are higher than originally anticipated based on an FEI survey conducted the previous year.

... the number of senior executives describing SOA compliance as costly had nearly doubled since its enactment, from 32% to 60%.

Miller also said,

he hoped to build consensus among the panel's 19 members as they work to issue a final report in August. But he expressed impatience with some academics who, he said, seemed resistant to change and oblivious that they could be overwhelmed by increasing costs and other challenges.

"Those who are squawking the loudest are those who have a private place to play and a lot of money, much of which comes from the federal government," Mr. Miller said. "What we hear from the academy is, 'We're the best in the world, give us more money and let us alone.' "

I've heard this complaint from others interested in improving education: Educators are stubborn about change, and they want no outside interference. Actually, business can often be the same. Even so, society has a stake in learning outcomes, and universities cannot be impervious to societal influence.

Elsewhere in the article:

And the commission appears to be fulfilling that mission. In its public meetings, panelists from Wall Street and elsewhere in the business world have criticized academia as failing to meet the educational needs of working adults, stem a slide in the literacy of college graduates and rein in rising costs.

Hmm. So, literacy problems are higher education's fault. I've seen some reports in the media that students are less literate than in years past. Even assuming that's correct, is it academia's fault? Would that mean that these students entered the university at an appropriate level of literacy and then lost it within four years? That seems to discount all other societal players in the literacy game. In improving literacy, we need to take an improve-the-system perspective rather than a blame-one-player perspective.

As far as rising costs, that's true of many institutions in our society. Look at medical costs and CEO salaries.

Miller, in the earlier article on standardized testing for higher ed, said:

he would like the commission to agree on the skills college students ought to be learning — like writing, critical thinking and problem solving — and to express that view forcefully. "What happens with reform," he said, "is that it rarely happens overnight, and it rarely happens with a mandate."

It's hard to disagree with wanting students to be able to write, think critically, and solve problems. How can we measure those skills in a meaningful way?

And from Nicholas Donofrio:

Another business leader on the commission, Nicholas Donofrio, an executive vice president at IBM, said he was not a strong supporter of proposals that would increase the government's regulatory role.

"But the government has some role to play because it funds the aid programs, so it has some hooks into them," Mr. Donofrio said. "We want these people in academia to get real about the problems and the issues."

There are a considerable number of accusations here. I'm not sure how justified they are. I'm also not certain that the answers to "the problems and the issues" are clear, nor that the corporate world has any answers. "For airlines, bankruptcy becomes business as usual." Consider also Ford's and GM's slide toward bankruptcy. In fact, "US company bankruptcies may surge this year." And, of course, there are always the CEOs who are paid astronomical sums. For instance, Lee Raymond, the Exxon chief who retired this past year, averaged $144,573 a day over a period of 13 years and received $400 million in his last year (Greg Robb, MarketWatch.com). I suppose if universities had the option of going bankrupt, they would be able to "get real about the problems and the issues."

Education is important, and we should strive to improve it. I have no useful suggestions for doing so, just a few thoughts. It's not clear to me why a university should be run like a business, and it's not clear to me why business "experts" believe they have insights into improving university education. Do we hear much about educators advising the corporate world on business problems? It's also not clear to me why (assuming that they are) education "experts" are resistant to outside advice. Why not evaluate the advice rather than its source? I'm not even sure why this post is in my blog. I suppose it's here because I believe that although accountability is important, present methods to measure accountability end up dumbing down instruction and damaging student learning, as shown clearly in George Hillocks' The Testing Trap: How State Writing Assessments Control Learning, and that disturbs me. And so does the image of education as a business. I prefer images of learning and civic responsibility.



John Liang, Timothy Grove, Sydney Rice, and I presented papers at TESOL 2006 on the theme of "Moving Toward Self-Assessment in L2 Writing."

John Liang began with an "Overview of Self-Assessment in the Second Language Classroom." His overview handout here (.doc) also has a good bibliography on self-assessment.

Next, I talked about using "Course-Embedded Assessment" (.doc) to help students learn to assess their writing. Generally speaking, course-embedded assessment refers to program- or institution-wide assessment embedded in general education courses in order to focus the curriculum on student learning. In my classes, I've incorporated the program rubric for assessing L2 writing into all aspects of my first-year composition courses--from modeling to guiding my own feedback, students' feedback on classmates' work, and their self-evaluation--so that it becomes part of their mental framework for looking at writing rather than remaining fragmented information, forgotten as soon as the semester ends.

Sydney's paper looked at "Focused Self-Assessment" (.doc) presenting three basic steps for students to become self-editors:

1. Provide input and examples of both effective and ineffective language use.
2. Involve students in peer review and peer editing, as well as self-editing.
3. Provide students with the key for productive self-editing.

Her approach uses "methodical and uncomplicated" rubrics, which make expectations clear and give students the tools for editing and revising their writing. Here are her other handouts (all are .doc): Summary, Overhead figures.

Timothy Grove discussed "Showcase Portfolios" (.doc) for helping students become better self-assessors. When students have to select and present their best work, they begin to learn how to evaluate their work.

John Liang ended the colloquium talking on "Toward a Three-Step Pedagogy for Fostering Self-Assessment in a Second Language Writing Classroom" (.doc). The three steps are:

Stage 1: Extensive teacher modeling
Stage 2: Teacher assessment with guided and independent peer assessment
Stage 3: Peer assessment leading to guided and independent self-assessment

One point John mentioned that occurred in all of our talks was the need for rubrics or something that would give structure to the students as they began to learn to assess their learning.



TESOL has a new interest section: Second Language Writing (SLW-IS). Actually, it was accepted back in July 2005, but it takes time to become active. Still, SLW-IS is growing strong with more than 200 members. Here are excerpts from a message written before the TESOL 2006 conference from its first and now immediate-past chair, Christina Ortmeier-Hooper:

The new SLWIS provides a forum for researchers and educators to discuss and exchange information in the area of second language writing. Specifically, our goals are

• to increase awareness of the significance of writing in teaching ESL/EFL

• to encourage and support the teaching of writing to ESOL students at all levels

• to provide a forum to discuss issues of writing assessment and the placement of second language writers

• to disseminate and promote research on second language writing

The hope is that SLWIS will facilitate communication about writing across teaching levels and settings. Recent research on the scope of second language writing scholarship suggests that most of the field’s nationally (within the United States) and internationally circulated scholarship is produced by scholars in postsecondary education at research-intensive institutions. Other contexts for writing (pre-K through 12, 2-year colleges, community programs, international K-12 schools, etc.) often have much larger populations of ELL/EFL writers, but scholars, particularly teacher-researchers, in these settings do not often receive support for researching and writing.

In light of that, the new SLWIS provides us with the opportunity to initiate more research and scholarship in these underrepresented contexts by supporting new collaborations and partnerships across levels and by providing a forum for discussing shared experiences. Indeed, the SLWIS will hopefully bring teachers, teacher-researchers, and second language writing specialists together, from across nations, across institutions, and across grade levels, to discuss the unique needs and concerns of ESL/EFL writers. Along with the Symposium on Second Language Writing and the Conference on College Composition and Communication (CCCC) Committee on Second Language Writing, the SLWIS at TESOL hopes to broaden the scope of L2 writing research and to help teachers and administrators further their understanding of second language writers.



I've been reading up on course-embedded assessment, which I mentioned in an earlier post, and am wondering about the implications for my own classes. As a member of the ESL Program, I use the program's rubric for assessing my students and also to help focus them on areas in which they need improvement. Apparently, however, the rubric's criteria are somewhat elusive for my students. Actually, there are two elements out of the ten that I have to think about, too. Because I want my students to better understand the criteria of good writing, I've been considering ways to incorporate the rubric beyond simply using it for their final grade.

One way I have recently incorporated the rubric is to use it as the basis for my feedback on rough drafts. Another is to have students use it to guide their reviews of classmates' papers. A third way I'm considering is having students hand in, along with each new essay, a paragraph on which aspects of the rubric they worked on to improve that essay over the previous one.

I've been working with these criteria, consciously and unconsciously, for ten-plus years; for students to grasp them, using them 3-4 times a semester, once per paper, is not enough. They need to spend time with them, to reflect on them, and to use them throughout the course on a variety of assignments.

For assessment to be formative, it should be embedded pervasively throughout the course so that the students continually receive feedback and so that they internalize its criteria. Such course-embedded assessment seems common sense to me.



I'm reading an article by Marinara, Vajravelu, and Young on assessing learning in a general education program. With respect to the composition aspect, they include its mission statement:

First-year composition introduces students to the skills necessary for critical literacy. Students will be expected to practice and revise their writing in contexts that mirror tasks they will perform throughout their academic and professional lives.

The mission statement took two months of discussing, arguing, and revising to craft, with one point centering on whether the word "literacy" should be in the statement. The authors don't go into why that point got discussed, but I'm curious, too. Literacy is related to composition, as one needs to critique the texts that one uses in one's writing, and indeed to critique one's own writing. However, when crafting a two-sentence mission statement, one might think that the focus would be on writing itself. Although the statement mentions that students will "practice and revise their writing," it doesn't mention introducing students to the skills necessary for composing.

I wonder if the term "literacy" is required due to the list of writing characteristics they found "crucial in the teaching of writing":

  • Students will demonstrate an understanding of process-invention, drafting, revision
  • Students will demonstrate an understanding of audience and context
  • Students will demonstrate critical thinking about their chosen topic
  • Students will demonstrate knowledge of the conventions of academic writing, including an awareness of sentence structure, mechanics, and spelling
  • Students will demonstrate an understanding of the research process and documentation styles
  • Students will demonstrate an understanding of diversity and social justice

Critical literacy and "an understanding of diversity and social justice" go hand-in-hand. As Ira Shor, a professor at the College of Staten Island, writes:

We are what we say and do. The way we speak and are spoken to help shape us into the people we become. Through words and other actions, we build ourselves in a world that is building us. That world addresses us to produce the different identities we carry forward in life: men are addressed differently than are women, people of color differently than whites, elite students differently than those from working families. Yet, though language is fateful in teaching us what kind of people to become and what kind of society to make, discourse is not destiny. We can redefine ourselves and remake society, if we choose, through alternative rhetoric and dissident projects. This is where critical literacy begins, for questioning power relations, discourses, and identities in a world not yet finished, just, or humane.

In other words, such a mission statement is necessary if composition should be an arena for social and political change. Karen Welch (Social Issues in First-Year College Writing, Academic Exchange Quarterly) writes on the debate concerning the nature of First Year Composition. Welch cites Maxine Hairston as opposed to this re-design of first-year composition:

I see a new model emerging for freshman writing programs…that disturbs me greatly. It’s a model that puts dogma before diversity, politics before craft, ideology before critical thinking, and the social goals of the teacher before the educational needs of the student. It’s a regressive model that undermines the progress we’ve made in teaching writing, one that threatens to silence student voices and jeopardize the process-oriented, low-risk, student-centered classroom we’ve worked so hard to establish as the norm. It’s a model that doesn’t take freshman English seriously in its own right but conceives of it as a tool, something to be used. The new model envisions required writing courses as vehicles for social reform rather than as student-centered workshops designed to build students’ confidence and competence as writers. It is a vision that echoes that old patronizing rationalization we’ve heard so many times before: students don’t have anything to write about so we have to give them topics. Those topics used to be literary; now they’re political. (180)

Some would say that the problem remains of how one can write about any topic without critiquing deeply the language on that topic, which implies the sociocultural elements of the topic, thus justifying introducing their own social agendas into the classroom. Perfect neutrality is not possible, but to the extent we can approach it, perhaps we should ask, How can we help students in composition courses to write more thoughtfully (i.e., critically) without injecting our own biases into the process?



Course-embedded assessment: What's it all about? one might wonder, thinking that all assessment is somehow embedded in course content. But that is only one aspect of it. Course-embedded assessment also refers to program- or institution-wide assessment that is embedded in all courses in order to focus attention on student learning. Donald Farmer, an architect of course-embedded assessment at King's College in Pennsylvania, writes:

Although many factors contribute to successful student learning, there are two factors that appear to be vital links connecting specific levels of achievement with anticipated learning outcomes. One is to transform students from being passive to being active learners and the other is to make assessment of learning an integral part of the teaching-learning equation. Assessment can play a critical role in developing students as learners if assessment is understood to be formative as well as summative. Assessment best serves as a strategy for improving student learning when it becomes an integral part of the teaching-learning equation by providing continual feedback on academic performance to students. This can be achieved most effectively by designing an assessment model in course work and intended to be both diagnostic and supportive of the development of students as learners. Assessment encourages faculty to focus on the actual learning taking place for students, and to recognize that teaching and assessment are strategies, not goals. (p. 199)

In other words, once an institution forms goals for student learning and develops criteria to measure how well student learning outcomes meet those goals, then colleges, departments, and instructors can develop curricula and activities to help students become active learners, and use assessment to provide feedback both to students and to the institution on how well students are meeting those goals.

Because outcomes and assessment are now discipline- and institution-oriented, curricula can be designed to focus on the development of skills across years and disciplines. For example, when students graduate, what sort of critical thinking skills does an institution want them to have? Then, how should freshman-level courses begin developing critical thinking skills, sophomore-level courses further that development, and so on? An integrated curriculum can help students better internalize critical thinking by (1) ensuring that it's a goal of all courses and (2) overcoming the compartmentalization and fragmentation of knowledge that occurs when skills are not transferred across years and courses. And flexibility is built in because although the goals are institution-wide, the curricula to attain those goals are determined by individual instructors and departments.



The Times Online reports that a study shows that children are becoming less intelligent.

Far from getting cleverer, our 11-year-olds are, in fact, less “intelligent” than their counterparts of 30 years ago. Or so say a team who are among Britain’s most respected education researchers.

After studying 25,000 children across both state and private schools Philip Adey, a professor of education at King’s College London confidently declares: “The intelligence of 11-year-olds has fallen by three years’ worth in the past two decades.”

Apparently, this dumbing down is due to over-testing and under-challenging youngsters:

“By stressing the basics — reading and writing — and testing like crazy you reduce the level of cognitive stimulation. Children have the facts but they are not thinking very well,” says Adey. “And they are not getting hands-on physical experience of the way materials behave.”

I wonder if those in favor of NCLB are reading this report. Even if yes, it wouldn't matter, as research has shown that people disregard facts that contradict their positions.



Lee S. Shulman, President of the Carnegie Foundation for the Advancement of Teaching and professor emeritus at Stanford University (via The Education Wonks), states,

Teacher education does not exist in the United States. There is so much variation among all programs in visions of good teaching, standards for admission, rigor of subject matter preparation, what is taught and what is learned, character of supervised clinical experience, and quality of evaluation that compared to any other academic profession, the sense of chaos is inescapable. The claim that there are "traditional programs" that can be contrasted with "alternative routes" is a myth.

We have only alternative routes into teaching. There may well be ways in which the teaching candidates of Teach for America or the New York City Fellows program meet more rigorous professional standards than those graduating from some "traditional" academic programs.

Compared to any other learned profession such as law, engineering, medicine, nursing or the clergy, where curricula, standards and assessments are far more standardized across the nation, teacher education is nothing but multiple pathways. It should not surprise us that critics respond to the apparent cacophony of pathways and conclude that it doesn't matter how teachers are prepared.

I am convinced that teacher education will only survive as a serious form of university-based professional education if it ceases to celebrate its idiosyncratic "let a thousand flowers bloom" approach to professional preparation. There should be no need to reinvent teacher education every time a school initiates a new program. Like our sibling professions, we must rapidly converge on a small set of "signature pedagogies" that characterize all teacher education. These approaches must combine very deep preparation in the content areas teachers are responsible to teach (and tough assessments to ensure that deep knowledge of content has been achieved), systematic preparation in the practice of teaching using powerful technological tools and a growing body of multimedia cases of teaching and learning, seriously supervised clinical practice that does not depend on the vagaries of student teaching assignments, and far more emphasis on rigorous assessments of teaching that will lead to almost universal attainment of board certification by career teachers.

The teacher education profession must come to this consensus; only then can accreditation enforce it. Commitment to social justice is insufficient; love is not enough. If we do not converge on a common approach to educating teachers, the professional preparation of teachers will soon become like the professional education of actors. There are superb MFA programs in universities, but few believe they are necessary for a successful acting career.

Shulman's announcement was brief, so there was no room to develop his assertions, but on the surface he makes quite a few claims and assumptions that are illogical.

1. Variation is conflated with chaos, and thus variation leads to a less than desirable level of quality in programs.

2. Standardization of curricula across the nation is equivalent to quality.

3. We must be like our sibling professions.

4. The initiation of new programs is equivalent to reinventing teacher education.

My brief responses are:

1. There is no evidence that variation diminishes quality of education. However, if diversity is good for learning, then one would think that variation of programs would be good for education. Of course, both should be supported by research.

2. One can standardize bad quality. Of course, Shulman is not thinking of that. I imagine that standards for content knowledge can be established, but how does one establish standards for creating rapport with students, for motivating students, etc.? Shulman says "love is not enough." I agree, but it is essential. Too much of a focus on rigorous standards (and what's rigorous, something made more difficult?) will cause love to fade into the background, and with it the quality of teachers. As it is now, outside of a few educators, love is not a part of teacher education at all. Shulman's mentioning it is a red herring.

3. The claim that we should be like others is an appeal to the status of the other professions. It does not consider whether education might (or might not) require other ways of achieving quality, nor whether the sibling professions' methods are appropriate to education. It's simply assumed. Not to mention that the media constantly report on how, at least in business, college does not prepare students for the real world of work. We might argue that education colleges do not prepare students for real teaching in real schools, but that does not mean that we should be like other professions.

4. Nothing is invented de novo; all new knowledge builds on and integrates what came before. Still, it's not altogether odd that Shulman decries new programs. One favorite education bandwagon is "multiple intelligences," a theory that has no research supporting it.

Many, like Shulman, question the quality of teacher education programs. Although it's hard to imagine anyone denying the need for content mastery and good student-teaching experiences and supervision, I'm not sure that equating quality with conformity to particular standards will achieve it. In any particular ecology, there are usually a variety of species. Should this concept apply to education programs and schools?

The notion of "converg[ing] on a small set of 'signature pedagogies' that characterize all teacher education" is one that has potential. If all species evolved from the four building blocks of DNA and all social interaction is governed by four relational models (Fiske), then there may just be a few crucial pedagogies that when combined in various ways allow for effective teaching in different contexts. But what would they be?



Complementing the previous entry, Hui Cao, an ESL graduate teaching assistant, finds grading native speakers' papers difficult, frustrating, and rewarding.

"Grading was the toughest job. You had to read 40 papers with the average length of seven--- for twice. You had to write pages of comments on each one and be ready for their arguments. The close reading of their shitty first drafts for days made me sick. It usually took me an entire weekend to finish that. I hided myself under my desk and cried after it was done. When I was able to cry--- believe it or not, that would be my best time. ...

If I was asked whether I was qualified to teach native speakers English composition with my sometimes awkward written English, my answer would be I don't know but certainly I could contribute much to their writing. Writing, especially academic writing which I teach is more a kind of training of people's mind, making them think more logically, rationally, clearly and concisely with the least fallacies. Since mind and language are two separate things, articulating thoughts through language is a kind of art. For academic writing, the art has to be shaped to satisfy public's taste. The strength of rhetorical strategies in the States is so powerful everywhere that they can massage people's life easily. Plus most of the guys do not really know how to think and write. Their over-fluent oral English and simplified reasoning are everywhere in their papers."



According to NetworkWorld, "University of Missouri-Columbia sociology professor Ed Brent apparently got sick of doing his job, so he's outsourced paper-grading to a computer." Assessment of papers is time-consuming, but it makes you wonder whether the professor's job itself will be outsourced to a computer someday.

Question: With the availability of information growing on the Internet, will universities become extinct? What sort of entities might take their place? Or how might universities evolve, if possible, to continue to be players in the knowledge game? Will the Internet facilitate the emergence of knowledge as a commodity? Or as a resource to be pursued?