March

Home Schooling Debate
Stephen Downes wrote a brief note on his opposition to home schooling and has received quite a bit of flak about it, both in the comments to his post and elsewhere. I asked,

Can you expand on your position and provide some evidence for your claims?

He then made a 16-minute video On Home Schooling to detail his position and make it clearer, but although his position is clear, he doesn't seem to have any evidence for his opinions.

In a later note, he wrote of those writing elsewhere that the post by Dana Hanley was "the most constructive," and it is fairly thorough. Stephen plans to follow up with a more detailed response later, so let's see what evidence he has then.

Using Videos
On another note, his video made it clear to me that when using tools, we need to consider what they have to offer, how they can add to our message, and what we lose by using them. Videos can do things that mere talking cannot. Just consider Michael Wesch's video, Web 2.0 ... The Machine is Us/ing Us. It would be impossible to compress the meaning of this video into a print-only text (unless perhaps it were book-length). A print-only text could only write about the meaning, while this video shows the meaning even as it displays text about it.

In contrast to Wesch's video, Stephen's video added nothing to the meaning that could not have been accomplished in a text-only medium. In fact, it accomplished less for three reasons:

  1. With print, I can easily cast my eyes up and down (scrolling if necessary) to clarify and confirm the meaning, while with a video I have to stop it and replay it if I miss or don't understand something.
  2. With video, I need to take notes to be able to see the whole picture while reviewing and reflecting on it instead of being limited to a sequential input of ideas.
  3. Videos require more time for listening than print for reading.

All three reasons involve time. This time requirement of viewing and understanding videos means that if they are to be used, they need to offer something that cannot be obtained in print only, something that is worth the extra time investment, such as using talking videos or podcasts with language learners who need the extra aural practice.



The 42nd Annual TESOL Conference (2008) is coming up soon, April 2-5, in New York City. Thursday afternoon, I'll be presenting along with three others on assessing writing. If you're coming to the conference and interested in assessing writing, here's a breakdown of what we'll be talking about.

Self-assessment
I'll be looking at how to help students in higher education learn to evaluate their writing, reflect on their writing, and take appropriate measures to improve their writing by

  • embedding assessment in the course objectives,
  • providing transparency in evaluative criteria, and
  • considering both product and process.

Basically, having students use the instructor's criteria for assessment gets them thinking in those terms, helps them see course expectations more clearly, and, one hopes, gives them an understanding of assessment they can take with them after leaving our classrooms.

Multi-trait rubrics
John Liang will review a multi-trait rubric that assesses basic academic writing skills of incoming international graduate students in an MA TESOL program. Based on previous years’ assessment results, the rubric focuses on select component skills of academic writing (ability to comprehend the prompt, development of the argument, organization, grammar skills) instead of overall academic writing proficiency.

Techniques of assessment
Tim Grove provides a survey of techniques used to assess writing, including methods that minimize grading time while remaining valid and reliable. He will examine rubrics, general comment sheets, error counting, error classification, personalized grading plans, grade negotiation, and even Rapaport's "Triage Theory of Grading."

Online and holistic assessment
Tim Collins will review strengths and weaknesses of online and holistic assessment of writing, now frequently used on high-stakes assessments, and provide ideas on how instructors can prepare learners for success on these assessments.

In all of these, we make certain assumptions. Assessment should

  • reflect objectives,
  • be transparent to students,
  • be fair and effective,
  • provide feedback to students and teachers, and
  • enable learners to self-assess and take responsibility for their learning.



We rely on research to support us in improving our pedagogy, but what if we can't trust research due to misconduct, bias, or simply being wrong?

Misconduct
A little less than two years ago, I posted on Philip Langlais, vice provost for graduate studies and research at Old Dominion University ("Ethics for the Next Generation," The Chronicle of Higher Education), who talked about academic misconduct:

Troubling reports about the ethics and professional conduct of university presidents, faculty members in fields as diverse as history and the sciences, and biomedical researchers have been sharing space in news columns recently with accounts of the greedy misdeeds of business and political leaders. The scrutiny has begun to reveal such gross misconduct as plagiarism and the falsification and fabrication of data in the hallowed halls of academe and research laboratories. Indeed, the Department of Health and Human Services reported in July that allegations of misconduct by scientific researchers in the United States hit an all-time high in 2004.

Bias
In addition to misconduct, bias can skew the findings of research, too. In medical research, money favors positive results, according to the AMA's Council on Scientific Affairs (Psychiatric News):

No one will be surprised to learn one of the conclusions in a report on scientific publication bias by the AMA's Council on Scientific Affairs: money matters in research.

The report, issued at last month's House of Delegates meeting in Chicago, states that "studies with positive findings are more likely to be published than studies with negative or null results, and an association exists between pharmaceutical industry sponsorship of clinical research and publication of results favoring the sponsor's products."

Bias can also occur through researchers' beliefs. Brian Switek reported on the bias in research on monogamy in gibbons:

Part of the reason we're recognizing this now is because of narrow-sighted research design, but also the desire that we may have for nature to vindicate our own social opinions and values, especially when it comes to primates.

Gibbons were formerly thought to be monogamous, but apparently they may not always be.

Simply Wrong
How Science is Rewriting the Book on Genes points out that much of our knowledge of genetics has been overturned by recent findings.

More importantly, John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece (Kurt Kleiner, New Scientist), claims:

Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true.

John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece, says that small sample sizes, poor study design, researcher bias, and selective reporting and other problems combine to make most research findings false. But even large, well-designed studies are not always right, meaning that scientists and the public have to be wary of reported findings.

Now you'd expect that medical research, with its potential for litigation, would have a better chance of being correct than a coin toss. And if such a "concrete" field has problems being correct, I imagine that less concrete fields like education, sociology, and English fare even worse.

In the same article, Solomon Snyder, senior editor at the Proceedings of the National Academy of Sciences, and professor of neuroscience at Johns Hopkins Medical School, states,

most working scientists understand the limitations of published research.

"When I read the literature, I'm not reading it to find proof like a textbook. I'm reading to get ideas. So even if something is wrong with the paper, if they have the kernel of a novel idea, that's something to think about," he says.

Hmm, research is something to think about, but not something to trust. I suppose that it should be expected that researchers tend to see what they want, just like politicians (see Emotion overrules reason), experts who predict politics (see Experts predict no better than non-experts), and wine tasters.

On wine tasters having problems, Jonah Lehrer posted on the subjectivity of wine, writing:

In 2001, Frederic Brochet, of the University of Bordeaux, conducted two separate and very mischievous experiments. In the first test, Brochet invited 57 wine experts and asked them to give their impressions of what looked like two glasses of red and white wine. The wines were actually the same white wine, one of which had been tinted red with food coloring. But that didn't stop the experts from describing the "red" wine in language typically used to describe red wines. One expert praised its "jamminess," while another enjoyed its "crushed red fruit." Not a single one noticed it was actually a white wine.

The second test Brochet conducted was even more damning. He took a middling Bordeaux and served it in two different bottles. One bottle was a fancy grand-cru. The other bottle was an ordinary vin du table. Despite the fact that they were actually being served the exact same wine, the experts gave the differently labeled bottles nearly opposite ratings. The grand cru was "agreeable, woody, complex, balanced and rounded," while the vin du table was "weak, short, light, flat and faulty". Forty experts said the wine with the fancy label was worth drinking, while only 12 said the cheap wine was.

I'm not quite sure where to take this, but it does make me wonder about how to trust any research, including mine, to guide my pedagogy. Academics and their supporters, as Ludwik Fleck would say, create a "harmony of illusions." Just look at the phonics vs. whole language reading wars. Perhaps we're just wine tasters in disguise.

Update 1:
For a new spin on bias suppressing negative results, the New York Times has an editorial on "Virginia Commonwealth's Secret Deal" with the tobacco company Philip Morris. The university

has signed a contract to do research for Philip Morris that gives the company the final say over what results, if any, can be published.

The contract also stipulates that the university cannot respond to any news media inquiries about the deal and must promptly notify Philip Morris of any such inquiries.

Update 2:
Robert Hughes reports on medical research fraud:

This week the editors of the Journal of the American Medical Assn. published an editorial criticizing the influence of the pharmaceutical and medical devices industries on research.

In an article this month, Catherine D. DeAngelis and Phil B. Fontanarosa write:

The profession of medicine, in every aspect—clinical, education, and research—has been inundated with profound influence from the pharmaceutical and medical device industries. This has occurred because physicians have allowed it to happen, and it is time to stop.
Two articles in this issue of JAMA provide a glimpse of one company's apparent misrepresentation of research data and its manipulation of clinical research articles and clinical reviews; such information and articles influence the education and clinical practice of physicians and other health professionals.

This editorial and the specific research studies reported in the April 16, 2008 issue of JAMA suggest that funders, scientists, and perhaps scientific journal editors have worked together to report favorable scientific findings that distort the real scientific evidence for the effectiveness of drugs and other medical devices.

OMB Watch reports on how the White House interfered with the EPA's smog rule, and the Union of Concerned Scientists found in a survey that hundreds of EPA scientists reported political interference over the last five years. In particular,

– 889 scientists (60 percent) said they had personally experienced at least one instance of political interference in their work over the last five years.

– 394 scientists (31 percent) personally experienced frequent or occasional "statements by EPA officials that misrepresent scientists' findings."

– 285 scientists (22 percent) said they frequently or occasionally personally experienced "selective or incomplete use of data to justify a specific regulatory outcome."

– 224 scientists (17 percent) said they had been "directed to inappropriately exclude or alter technical information from an EPA scientific document."

– Of the 969 agency veterans with more than 10 years of EPA experience, 409 scientists (43 percent) said interference has occurred more often in the past five years than in the previous five-year period. Only 43 scientists (4 percent) said interference occurred less often.

– Hundreds of scientists reported being unable to openly express concerns about the EPA's work without fear of retaliation; 492 (31 percent) felt they could not speak candidly within the agency and 382 (24 percent) felt they could not do so outside the agency.

Update 3:
Researchers Fail to Reveal Full Drug Pay:

A world-renowned Harvard child psychiatrist whose work has helped fuel an explosion in the use of powerful antipsychotic medicines in children earned at least $1.6 million in consulting fees from drug makers from 2000 to 2007 but for years did not report much of this income to university officials, according to information given Congressional investigators.

By failing to report income, the psychiatrist, Dr. Joseph Biederman, and a colleague in the psychiatry department at Harvard Medical School, Dr. Timothy E. Wilens, may have violated federal and university research rules designed to police potential conflicts of interest, according to Senator Charles E. Grassley, Republican of Iowa. Some of their research is financed by government grants.

Like Dr. Biederman, Dr. Wilens belatedly reported earning at least $1.6 million from 2000 to 2007, and another Harvard colleague, Dr. Thomas Spencer, reported earning at least $1 million after being pressed by Mr. Grassley’s investigators. But even these amended disclosures may understate the researchers’ outside income because some entries contradict payment information from drug makers, Mr. Grassley found.

In short, a conflict of interest existed: These researchers were receiving federal grant money to do research on drugs for kids, but did not report that they were at the same time receiving consulting fees from pharmaceutical companies.