Assessment

The 42nd Annual TESOL Conference (2008) is coming up soon, April 2-5, in New York City. Thursday afternoon, I'll be presenting along with three others on assessing writing. If you're coming to the conference and are interested in assessing writing, here's a breakdown of what we'll be talking about.

Self-assessment
I'll be looking at how to help students in higher education learn to evaluate their writing, reflect on their writing, and take appropriate measures to improve their writing by

  • embedding assessment in the course objectives,
  • providing transparency in evaluative criteria, and
  • considering both product and process.

Basically, having students use the instructor's criteria for assessment gets them thinking in those terms, helps them see course expectations more clearly, and, I hope, gives them an understanding of assessment they can take with them after leaving our classrooms.

Multi-trait rubrics
John Liang will review a multi-trait rubric that assesses basic academic writing skills of incoming international graduate students in an MA TESOL program. Based on previous years’ assessment results, the rubric focuses on select component skills of academic writing (ability to comprehend the prompt, development of the argument, organization, grammar skills) instead of overall academic writing proficiency.

Techniques of assessment
Tim Grove provides a survey of techniques used to assess writing, including methods that minimize grading time, while remaining valid and reliable. He will examine rubrics, general comment sheets, error counting, error classification, personalized grading plans, Grade Negotiation, and even Rapaport’s “Triage Theory of Grading.”

Online and holistic assessment
Tim Collins will review strengths and weaknesses of online and holistic assessment of writing, now frequently used on high-stakes assessments, and provide ideas on how instructors can prepare learners for success on these assessments.

In all of these, we make certain assumptions. Assessment

  • should reflect objectives,
  • be transparent to students,
  • be fair and effective,
  • provide feedback to students and teachers, and
  • enable learners to self-assess and take responsibility for their learning.



I just listened to an interesting session at Computers and Writing 2007 on the role of feedback and assessment in first-year composition. Fred Kemp, Ron Balthazor, Christy Desmet, and Mike Palmquist talked about how they used online learning environments as sites for assessing learning and teaching.

Fred Kemp talked about Texas Tech University's ICON system in which

  • class time is cut in half,
  • assignments are doubled or tripled,
  • all relevant interactions are online,
  • students meet in a classroom once a week to support those interactions, and
  • grading and commentary are anonymous with two readers on drafts.

This particular system helps make the composition program an adaptive feedback system, one that gains knowledge over time and does not depend on rotating faculty and program directors. The data collection built into the system has shown that some assignments generate better grades than others, indicating where to make changes in the program. For instance, pulling back from intensive peer reviews (12-13 a semester) was followed by a decrease in students' GPA, suggesting that their writing had worsened. Next year, they're reinstating the peer reviews, and if the GPA increases, there will be strong evidence for an effect of intensive peer reviews on learning to write.
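Since the claim here rests on tracking an association between peer-review intensity and GPA across cohorts, here's a minimal sketch in Python of that kind of check. This is not ICON's actual code, and the cohort numbers are invented for illustration.

```python
# A minimal sketch, not ICON's actual analysis: correlate peer-review
# intensity with mean course GPA across semesters. Data is invented.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical cohorts: (peer reviews per semester, mean course GPA)
cohorts = [(13, 3.10), (12, 3.00), (6, 2.70), (5, 2.60), (12, 3.05)]
reviews = [r for r, _ in cohorts]
gpas = [g for _, g in cohorts]

print(f"r = {pearson_r(reviews, gpas):.2f}")  # positive here by construction
```

Of course, a positive correlation from observational cohort data is still only a correlation; reinstating the peer reviews is what turns this into something closer to an experiment.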

Mike Palmquist talked about Colorado State University's Writing Studio, a combination of an instructional writing environment and an online course management system. As in ICON, the system collects data on how people are using the site by tracking their activity once they log in, which can guide the program on which areas need to be strengthened, or vice versa. One question to be answered is, "How does technology shape the teaching and learning in writing courses?"
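Out of curiosity, here's a minimal sketch of the kind of activity tracking described: logging which areas of a site users visit and aggregating the counts. The table and area names are my inventions, not Writing Studio's.

```python
# A minimal sketch, assuming a simple relational log of site activity;
# not Writing Studio's actual schema. Names are invented for illustration.

import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activity (user_id TEXT, area TEXT, visited_at TEXT)")

def log_visit(user_id, area):
    """Record one visit to an area of the site."""
    conn.execute("INSERT INTO activity VALUES (?, ?, ?)",
                 (user_id, area, datetime.now(timezone.utc).isoformat()))

for user, area in [("s1", "peer_review"), ("s2", "drafts"), ("s1", "drafts"),
                   ("s3", "drafts"), ("s2", "peer_review"), ("s1", "tutorials")]:
    log_visit(user, area)

# Rarely visited areas may need strengthening -- or pruning.
for area, hits in conn.execute(
        "SELECT area, COUNT(*) FROM activity GROUP BY area ORDER BY COUNT(*) DESC"):
    print(f"{area}: {hits}")
```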

Ron Balthazor and Christy Desmet talked about the University of Georgia's emma system, which embeds metadata via markup in documents that are uploaded to the server. They have three ongoing projects that look at errors, revision, and citations. One finding from the citation project was that good library instruction works best in conjunction with instructor prompts for citations; library instruction alone showed no improvement.
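To make "embeds metadata via markup" concrete, here's a minimal sketch of wrapping an uploaded essay in XML carrying metadata that a later analysis could query. The element names are invented; this is not emma's actual format.

```python
# A minimal sketch, not emma's actual markup: wrap an uploaded essay in XML
# carrying metadata that later error/revision/citation analyses could query.

import xml.etree.ElementTree as ET

def wrap_with_metadata(text, student_id, draft_number, citations):
    """Return the essay text wrapped in an XML envelope with metadata."""
    doc = ET.Element("document")
    meta = ET.SubElement(doc, "metadata")
    ET.SubElement(meta, "student").text = student_id
    ET.SubElement(meta, "draft").text = str(draft_number)
    for c in citations:
        ET.SubElement(meta, "citation").text = c
    ET.SubElement(doc, "body").text = text
    return ET.tostring(doc, encoding="unicode")

print(wrap_with_metadata("Essay text here...", "s42", 2, ["Shor 1999"]))
```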

Although the three universities have different approaches, they all show the value of electronic systems that can provide feedback to programs for improving instruction and composition programs.



As Andrew Goldenkranz, Principal of Pacific Collegiate School in Santa Cruz, California, says in an interview,

NCLB has been damaging in practice, even though I think it was not a bad idea in principle.

Goldenkranz should know: this year he's losing Jefferds Huyck, a teacher with a Harvard doctorate in classics, 22 years of teaching experience, and 16 students who won honors in a nationwide Latin exam (Freedman). Why? Because he doesn't have a teaching certificate.

Any idea applied without flexibility, like NCLB in this case, can create more problems than it solves. And teaching without flexibility can stifle learning and ability, too. I remember that in the 7th grade, my math teacher forced me to show how I solved my long division problems. I had simply been doing the operations in my head and writing down the answers. Not believing I could do division with three-digit divisors, she had me demonstrate. Although I demonstrated several problems for her, she decided it was more important to follow her rules. Within a few years, that ability had evaporated. So, what rules do we enforce that we could be more flexible about?



Jay Mathews (Washington Post) writes that "Confidence in math doesn't always equal success." Reporting on a study from the Brookings Institution, he writes,

countries such as the United States that embrace self-esteem, joy and real-world relevance in learning mathematics are lagging behind others that do not promote all that self-regard.

Mathews includes pro and con perspectives on this report. Of course, confidence based on a lack of reality doesn't bode well for success in one's life. Some time ago, I remember reading about a study showing that, at least initially, competent people usually have less confidence than incompetent folks that their ability to do something is above the average in the room.

In foreign language (and other) education circles, we do our best to make the classroom a safe haven for students and try to relate classroom learning to their own lives. It's possible that some work harder at making everyone feel good than at learning. Even so, it's hard to see why having fun and making things relevant would reduce learning. The only factor I can think of is that countries that focus on lots of drills will do better on a test that reflects that type of learning. Those scores say little about whether students can employ those skills outside the classroom. As Mathews cites Gerald Bracey, an educational psychologist, as saying,

the report overlooked countervailing trends in Japan, Singapore and other countries that do better than the United States on eighth-grade math tests. Officials in those countries say their education systems are not yielding graduates who have the same level of creativity as American graduates. Some Asian nations have begun to copy aspects of U.S. education, including the emphasis on letting students search for answers rather than memorize them.

Still, it is important for our students to have an accurate sense of how well they are doing and how they can improve their abilities in various areas. Self-assessment and peer assessment, along with seeing their peers' work, can help in this regard. For a portfolio system that includes these aspects, check out The Learning Record.



David Warlick at 2 Cents Worth (via Will Richardson, who comments on David's post) provides a starting point for assessing blog posts with two sets of five questions, one for the blogger and one for the reader. The blogger questions are:

- What did you read in order to write this blog entry?

- What do you think is important about your blog entry?

- What are both sides of your issue?

- What do you want your readers to know, believe, or do?

- What else do you need to say?

With just a little rephrasing, the reader questions become:

- What did the blogger read before writing?

- What was important about the blog entry?

- What were both sides of the issue?

- What do you know, believe, or want to do after reading the blog?

- What else needs to be said?

I like these questions because they provide feedback that helps students consider, as David says, "broader aspects of the issues being written about." And I especially like the one about reading. Too often, students expect to write only from their own experience, without reading, without understanding others' perspectives, without weaving those perspectives into their writing. However, I would change the question about both sides of an issue to:

- What are the different sides in this issue?

This rephrasing moves students from an "either-or," "us-them" mentality to a more nuanced picture fitting the complex reality of life.



Last September, Inside Higher Ed had an article on Kentucky outsourcing grading in its community colleges ("Outsourced Grading"). Now Lynn Thompson reports on the practice in Seattle-area high schools ("School districts turn to paid readers for grading student essays," Seattle Times):

In the Northshore School District, some English teachers don't spend much time reading student papers.

In the Bellevue School District, some don't even grade the papers.

Both districts now rely on paid readers to evaluate and in some cases grade student essays in English classes; Seattle's Garfield High School is piloting such a program this year. The use of readers greatly reduces teacher workload and gives students more writing practice, but the trend raises questions about teachers' roles in inspiring and guiding students' work.

Although feedback can guide students' work, it's not clear how giving grades inspires or guides their work, unless one is considering them as reality checks. The real question, as noted by Stephen Miller, president of the Bellevue Education Association, is:

"All English teachers would agree that students become better writers by writing more. But is writing many essays more important than personal feedback from your teacher? We don't know the answer," he said.

But even this question assumes that teacher feedback is more personal than an outside reader's. My questions would be: Is feedback from the teacher significantly better than the outside reader's? Is feedback from the teacher on 1-2 essays more effective than an outside reader's feedback on 7 essays?

Much of the response against outsourcing reading and grading seems to be some sort of out-of-touch-with-reality smokescreen. All agree that high school teachers simply do not have the time to read, comment on, and grade more than one or two essays a semester when teaching five 30-student classes a day. Yet Carol Jago, co-director of the California Reading and Literature Project, asserts,

What's lost is how teachers get to know their students through their writing. And students no longer know the audience they're writing for.

Most compositionists argue that one problem is students always writing for the teacher rather than for a real audience. So it's not clear how moving from an undesired audience, the teacher, to an unknown one is much worse. As for getting to know students through their writing, is that really possible given the apparent high school limit of one or two essays? More importantly, how can teachers keep informed about their students' writing?

According to Lance Balla, a curriculum and technology coach for the Bellevue schools,

the district built into the program several checks to keep teachers informed about their students' work. The teachers develop a scoring guide for each assignment and read three out of every 30 essays. Readers and teachers consult after each set of papers is graded, and teachers are expected to use the readers' comments to look for common problems and if necessary, adjust their teaching.

I'm not sure how well this works, but I do like the idea of adjusting teaching according to outside feedback. When teachers are the only ones commenting, there is no potential dissonance to help move teachers to reconsider their approach to writing instruction. In addition, the extra time from not grading can perhaps be applied toward those students who need the most help.

From the Inside Higher Ed article, Douglas Hesse, board chair of the Conference on College Composition and Communication and professor of English at Illinois State University, argues against outsourcing grading, saying that

grading was not a function that should in any way be removed from the faculty members. The process of reading a paper and evaluating it, Hesse said, is crucial not only for assigning a grade, but for thinking about how to work with a given student, for evaluating whether certain assignments are achieving their goals, for revising lecture plans, and more.

Hesse's points make sense to me at the college level, although I imagine that not all professors take the extra time to work with students and re-evaluate their pedagogy. For those who don't, it just might be a waste of money to pay other readers and graders. For those who do, reading and grading would seem to be good channels of feedback. I'm not sure we should simply assume, however, that this feedback is sacrosanct. I've listened to experienced composition instructors who suggest ways of limiting the time for grading and commenting on a paper to 15 minutes. I wonder how effective 15 minutes of feedback can be for the student, or for the instructor. Would doubling the time significantly improve the effectiveness of the feedback? Perhaps not. Perhaps 15 minutes is enough to set students moving in the general direction of better writing.

I suppose I have more questions than answers on outsourcing grading. But as my previous posting on "Learning takes place in an ecology" implies, grading takes place in an ecology, and what was appropriate at one time may require re-inventing to remain relevant to students, teachers, educational institutions, and the communities in which they are embedded.



In the New York Times, Sam Dillon reports on the ponderings of a higher education commission in "Panel Considers Revamping College Aid and Accrediting." (To read the commission's reports, go here.) This is the same panel that has also considered introducing standardized testing into higher education. The panel is calling for more accountability in higher education and, in the process, is attempting to remake education in a business image:

Charles Miller, a business executive who is the commission's chairman, wrote in a memorandum recently to the 18 other members that he saw a developing consensus over the need for more accountability in higher education.

"What is clearly lacking is a nationwide system for comparative performance purposes, using standard formats," Mr. Miller wrote, adding that student learning was a main component that should be measured.

Accountability is important, but the question is how to achieve it. Business doesn't have "a nationwide system for comparative performance." Of course, business has the pass/fail, or succeed/go-out-of-business, model. Education doesn't have that "survival" accountability, although we're moving in that direction as state funding shrinks. Even so, would "accountability measures" be cost-effective? Just for comparison, many businesses now complain that the Sarbanes-Oxley Act, which mandates stronger internal controls over financial reporting, is too expensive. Jill D'Aquila, in her article "Tallying the costs of the Sarbanes-Oxley Act," writes:

The survey also reveals that total costs of first-year compliance with section 404 could exceed $4.6 million for each of the largest U.S. companies (companies with over $5 billion in revenues). Medium-sized and smaller companies will also incur significant additional costs to comply with section 404, the survey finding an average projected cost of almost $2 million. Interestingly, the projected costs are higher than originally anticipated based on an FEI survey conducted the previous year.

... the number of senior executives describing SOA compliance as costly had nearly doubled since its enactment, from 32% to 60%.

Miller also said,

he hoped to build consensus among the panel's 19 members as they work to issue a final report in August. But he expressed impatience with some academics who, he said, seemed resistant to change and oblivious that they could be overwhelmed by increasing costs and other challenges.

"Those who are squawking the loudest are those who have a private place to play and a lot of money, much of which comes from the federal government," Mr. Miller said. "What we hear from the academy is, 'We're the best in the world, give us more money and let us alone.' "

I've heard this complaint from others interested in improving education: Educators are stubborn about change, and they want no outside interference. Actually, business can often be the same. Even so, society has a stake in learning outcomes, and universities cannot be impervious to societal influence.

Elsewhere in the article:

And the commission appears to be fulfilling that mission. In its public meetings, panelists from Wall Street and elsewhere in the business world have criticized academia as failing to meet the educational needs of working adults, stem a slide in the literacy of college graduates and rein in rising costs.

Hmm. So literacy problems are higher education's fault. I've seen some reports in the media that students are less literate than in years past. Even assuming that's correct, is it academia's fault? That would mean these students entered the university at an appropriate level of literacy and lost it within four years. That seems to discount all the other societal players in the literacy game. In improving literacy, we need to take an improve-the-system perspective rather than a blame-one-player perspective.

As far as rising costs, that's true of many institutions in our society. Look at medical costs and CEO salaries.

Miller, in the earlier article on standardized testing for higher ed, said:

he would like the commission to agree on the skills college students ought to be learning — like writing, critical thinking and problem solving — and to express that view forcefully. "What happens with reform," he said, "is that it rarely happens overnight, and it rarely happens with a mandate."

It's hard to disagree with wanting students to be able to write, think critically, and solve problems. How can we measure those skills in a meaningful way?

And from Nicholas Donofrio:

Another business leader on the commission, Nicholas Donofrio, an executive vice president at IBM, said he was not a strong supporter of proposals that would increase the government's regulatory role.

"But the government has some role to play because it funds the aid programs, so it has some hooks into them," Mr. Donofrio said. "We want these people in academia to get real about the problems and the issues."

There are a considerable number of accusations here. I'm not sure how justified they are. I'm also not certain that the answers to "the problems and the issues" are clear, nor that the corporate world has any answers. "For airlines, bankruptcy becomes business as usual." Consider also Ford's and GM's slide toward bankruptcy. In fact, "US company bankruptcies may surge this year." And, of course, there are always the CEOs who are paid astronomical sums. For instance, Lee Raymond, the Exxon chief who retired this past year, averaged $144,573 a day over a period of 13 years and received $400 million in his last year (Greg Robb, MarketWatch.com). I suppose if universities had the option of going bankrupt, they would be able to "get real about the problems and the issues."
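A quick back-of-the-envelope check of my own, not from the article, shows what that daily average implies over the whole period, a total distinct from the $400 million final-year package:

```python
# My arithmetic, not MarketWatch's: the reported daily average over 13 years.
total = 144_573 * 365 * 13
print(f"${total:,}")  # roughly $686 million
```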

Education is important, and we should strive to improve it. I have no useful suggestions for doing so, just a few thoughts. It's not clear to me why a university should be run like a business, and it's not clear to me why business "experts" believe they have insights into improving university education. Do we hear much about educators advising the corporate world on business problems? It's also not clear to me why education "experts" (assuming they are resistant) resist outside advice. Why not evaluate the advice rather than its source? I'm not even sure why this post is in my blog. I suppose it's here because I believe that, although accountability is important, present methods of measuring accountability end up dumbing down instruction and damaging student learning, as shown clearly in George Hillocks' The Testing Trap: How State Writing Assessments Control Learning, and that disturbs me. And so does the image of education as a business. I prefer images of learning and civic responsibility.



John Liang, Timothy Grove, Sydney Rice, and I presented papers at TESOL 2006 on the theme of "Moving Toward Self-Assessment in L2 Writing."

John Liang began with an "Overview of Self-Assessment in the Second Language Classroom." His overview handout here (.doc) also has a good bibliography on self-assessment.

Next, I talked about using "Course-Embedded Assessment" (.doc) to help students learn to assess their writing. Generally speaking, course-embedded assessment refers to program- or institution-wide assessment embedded in general education courses in order to focus the curriculum on student learning. In my classes, I've incorporated the program rubric for assessing L2 writing into all aspects of my first-year composition courses--modeling with it, using it to guide my feedback, and having students use it to guide both their feedback to others and their own self-evaluation--so that it becomes part of their mental framework for looking at writing rather than remaining fragmented information, forgotten as soon as the semester ends.

Sydney's paper looked at "Focused Self-Assessment" (.doc), presenting three basic steps for helping students become self-editors:

1. Provide input and examples of both effective and ineffective language use.
2. Involve students in peer review and peer editing, as well as self-editing.
3. Provide students with the key for productive self-editing.

Her approach uses "methodical and uncomplicated" rubrics, which make expectations clear and give students the tools for editing and revising their writing. Here are her other handouts (all are .doc): Summary, Overhead figures.

Timothy Grove discussed "Showcase Portfolios" (.doc) for helping students become better self-assessors. When students have to select and present their best work, they begin to learn how to evaluate their work.

John Liang ended the colloquium talking on "Toward a Three-Step Pedagogy for Fostering Self-Assessment in a Second Language Writing Classroom" (.doc). The three steps are:

Stage 1: Extensive teacher modeling
Stage 2: Teacher assessment with guided and independent peer assessment
Stage 3: Peer assessment leading to guided and independent self-assessment

One point John made, which ran through all of our talks, was students' need for rubrics, or something similar, to give them structure as they begin to learn to assess their own learning.



I've been reading up on course-embedded assessment, which I mentioned in an earlier post, and am wondering about the implications for my own classes. As a member of the ESL Program, I use the program's rubric to assess my students and also to help focus them on areas needing improvement. Apparently, however, the rubric's criteria are somewhat elusive for my students. Actually, there are two elements out of the ten that I have to think about, too. Because I want my students to better understand the criteria of good writing, I've been considering ways to incorporate the rubric beyond simply using it for their final grade.

One way I have recently incorporated the rubric is as the basis for my feedback on rough drafts. Another is to have students use it to guide their reviews of classmates' papers. A third way I'm considering is having students hand in, along with each new essay, a paragraph on which aspects of the rubric they worked on to improve that essay with respect to the previous one.

I've been working with these criteria, consciously and unconsciously, for ten-plus years. For students to grasp them, using them three or four times a semester, once per paper, is not enough. They need to spend time with them, reflect on them, and use them throughout the course on a variety of assignments.

For assessment to be formative, it should be embedded pervasively throughout the course so that the students continually receive feedback and so that they internalize its criteria. Such course-embedded assessment seems common sense to me.



I'm reading an article by Marinara, Vajravelu, and Young on assessing learning in a general education program. With respect to the composition aspect, they include its mission statement:

First-year composition introduces students to the skills necessary for critical literacy. Students will be expected to practice and revise their writing in contexts that mirror tasks they will perform throughout their academic and professional lives.

The mission statement took two months of discussing, arguing, and revising to craft, with one point of contention being whether the word "literacy" should be in the statement. The authors don't go into why that point was discussed, but I'm curious, too. Literacy is related to composition, as one needs to critique the texts one uses in one's writing and, in fact, to critique one's own writing. However, when crafting a two-sentence mission statement, one might think the focus would be on writing itself. Although the statement mentions that students will "practice and revise their writing," it doesn't mention introducing students to the skills necessary for composing.

I wonder if the term "literacy" is required due to the list of writing characteristics they found "crucial in the teaching of writing":

  • Students will demonstrate an understanding of process: invention, drafting, revision
  • Students will demonstrate an understanding of audience and context
  • Students will demonstrate critical thinking about their chosen topic
  • Students will demonstrate knowledge of the conventions of academic writing, including an awareness of sentence structure, mechanics, and spelling
  • Students will demonstrate an understanding of the research process and documentation styles
  • Students will demonstrate an understanding of diversity and social justice

Critical literacy and "an understanding of diversity and social justice" go hand-in-hand. As Ira Shor, a professor at the College of Staten Island, writes:

We are what we say and do. The way we speak and are spoken to help shape us into the people we become. Through words and other actions, we build ourselves in a world that is building us. That world addresses us to produce the different identities we carry forward in life: men are addressed differently than are women, people of color differently than whites, elite students differently than those from working families. Yet, though language is fateful in teaching us what kind of people to become and what kind of society to make, discourse is not destiny. We can redefine ourselves and remake society, if we choose, through alternative rhetoric and dissident projects. This is where critical literacy begins, for questioning power relations, discourses, and identities in a world not yet finished, just, or humane.

In other words, such a mission statement is necessary if composition is to be an arena for social and political change. Karen Welch ("Social Issues in First-Year College Writing," Academic Exchange Quarterly) writes on the debate concerning the nature of First-Year Composition. Welch cites Maxine Hairston as opposed to this re-design of first-year composition:

I see a new model emerging for freshman writing programs…that disturbs me greatly. It’s a model that puts dogma before diversity, politics before craft, ideology before critical thinking, and the social goals of the teacher before the educational needs of the student. It’s a regressive model that undermines the progress we’ve made in teaching writing, one that threatens to silence student voices and jeopardize the process-oriented, low-risk, student-centered classroom we’ve worked so hard to establish as the norm. It’s a model that doesn’t take freshman English seriously in its own right but conceives of it as a tool, something to be used. The new model envisions required writing courses as vehicles for social reform rather than as student-centered workshops designed to build students’ confidence and competence as writers. It is a vision that echoes that old patronizing rationalization we’ve heard so many times before: students don’t have anything to write about so we have to give them topics. Those topics used to be literary; now they’re political. (180)

Some would say the problem remains of how one can write about any topic without deeply critiquing the language surrounding that topic, which brings in the topic's sociocultural elements, thus justifying the introduction of social agendas into the classroom. Perfect neutrality is not possible, but to the extent we can approach it, perhaps we should ask: How can we help students in composition courses to write more thoughtfully (i.e., critically) without injecting our own biases into the process?



"Course-embedded assessment: what's it all about?" one might wonder, thinking that all assessment is somehow embedded in course content. But that is only one aspect of it. Course-embedded assessment also refers to program- or institution-wide assessment that is embedded in all courses in order to focus attention on student learning. Donald Farmer, an architect of course-embedded assessment at King's College in Pennsylvania, writes:

Although many factors contribute to successful student learning, there are two factors that appear to be vital links connecting specific levels of achievement with anticipated learning outcomes. One is to transform students from being passive to being active learners and the other is to make assessment of learning an integral part of the teaching-learning equation. Assessment can play a critical role in developing students as learners if assessment is understood to be formative as well as summative. Assessment best serves as a strategy for improving student learning when it becomes an integral part of the teaching-learning equation by providing continual feedback on academic performance to students. This can be achieved most effectively by designing an assessment model in course work and intended to be both diagnostic and supportive of the development of students as learners. Assessment encourages faculty to focus on the actual learning taking place for students, and to recognize that teaching and assessment are strategies, not goals. (p. 199)

In other words, once an institution forms goals for student learning and develops criteria to measure how well student learning outcomes meet those goals, then colleges, departments, and instructors can develop curricula and activities to help students become active learners and can use assessment to provide feedback both to students and to the institution on how well students are meeting those goals.

Because outcomes and assessment are now discipline- and institution-oriented, curricula can be designed to focus on the development of skills across years and disciplines. For example, what sort of critical thinking skills does an institution want its students to have when they graduate? Then, how should freshman-level courses begin developing critical thinking skills, sophomore-level courses further that development, and so on? An integrated curriculum can help students better internalize critical thinking by (1) ensuring that it's a goal of all courses and (2) overcoming the compartmentalization and fragmentation of knowledge that occur when skills are not transferred across years and courses. And flexibility is built in: although the goals are institution-wide, the curricula to reach those goals are determined by individual instructors and departments.



The Times Online reports that a study shows that children are becoming less intelligent.

Far from getting cleverer, our 11-year-olds are, in fact, less “intelligent” than their counterparts of 30 years ago. Or so say a team who are among Britain’s most respected education researchers.

After studying 25,000 children across both state and private schools Philip Adey, a professor of education at King’s College London confidently declares: “The intelligence of 11-year-olds has fallen by three years’ worth in the past two decades.”

Apparently, this dumbing down is due to over-testing and under-challenging youngsters:

“By stressing the basics — reading and writing — and testing like crazy you reduce the level of cognitive stimulation. Children have the facts but they are not thinking very well,” says Adey. “And they are not getting hands-on physical experience of the way materials behave.”

I wonder if those in favor of NCLB are reading this report. Even if they are, it might not matter, as research has shown that people disregard facts that contradict their positions.



According to NetworkWorld, "University of Missouri-Columbia sociology professor Ed Brent apparently got sick of doing his job, so he's outsourced paper-grading to a computer." Assessing papers is time-consuming, but it makes you wonder whether the professor's job will be outsourced to a computer someday.
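For the curious, here's a naive sketch of what machine scoring can look like; it bears no relation to the professor's actual system, and the rubric concepts and keywords are invented for illustration. Crude keyword matching like this also hints at why many of us are skeptical.

```python
# A naive sketch of machine scoring, not the professor's actual system:
# check an essay against expected rubric concepts and report what's missing.

RUBRIC_CONCEPTS = {
    "thesis": ["argue", "claim", "thesis"],
    "evidence": ["study", "data", "example"],
    "counterargument": ["however", "on the other hand", "critics"],
}

def score_essay(text):
    """Return a coverage score in [0, 1] and the list of missing concepts."""
    lowered = text.lower()
    hits = {concept: any(k in lowered for k in keywords)
            for concept, keywords in RUBRIC_CONCEPTS.items()}
    score = sum(hits.values()) / len(hits)
    missing = [c for c, present in hits.items() if not present]
    return score, missing

score, missing = score_essay("I argue that the data from one study shows...")
print(f"score: {score:.0%}, missing: {missing}")
```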

Question: With the availability of information growing on the Internet, will universities become extinct? What sort of entities might take their place? Or how might universities evolve, if possible, to continue to be players in the knowledge game? Will the Internet facilitate the emergence of knowledge as a commodity? Or as a resource to be pursued?