(L2) Writing Assessment: Tools and Issues
Hyland’s Second
Language Writing and Casanave’s Controversies
in Second Language Writing both address assessing L2 writing; however, it
is Casanave’s text that most strongly asserts that L2 writing assessment can be
inherently problematic. As Ken
Hyland states in the introduction to Chapter 8, his chapter primarily
addresses “practical issues that teachers face when making decisions about
evaluating written work” (212). Casanave’s chapter, on the other hand, concerns
assessing L2 writing accurately and fairly. While both texts involve practical
and theoretical issues, Hyland’s is more practical, while Casanave’s proves
more challenging (indeed, I had to read it twice to comprehend these
important challenges to mainstream/traditional writing assessment strategies,
especially regarding accuracy and ethics).
Hyland’s chapter in SLW introduces and defines “formative” and
“summative” writing assessments, but Hyland fails to connect those two terms
with the reasons he lists for evaluating learners as well as the writing
assessment tools he introduces in the chapter. And, while he mentions them in a bit more detail in the genre text, he doesn't explicitly delineate formative and summative types of assessment during his discussion there, either. (He does come close, however, when listing the key principles of genre-based assessment: "diagnostic testing," to me, would be classified as a formative assessment) (167).
Like Casanave’s chapter, Hyland’s chapters on assessment define and explain the terms “reliability” and “validity.” These two
terms always confused me as an undergraduate minoring in Psychology and
majoring in Education (I had exposure to them in two fields!);
however, I think Hyland wins for thoroughness of definition this week. I found
myself writing a note next to “reliability” (p. 215, SLW), “HOW CONSISTENTLY writing
is assessed,” and another next to “validity” (p. 217, SLW), “the ACCURACY of the
assessment.” While Hyland and Casanave both discuss how to increase
reliability by making assessments “rater-proof” (improving inter-rater
reliability), both come to the conclusion that indirect assessments (multiple-choice
exams, cloze tests, fill-in-the-blank exams) are rarely better writing
assessment tools than direct ones. Hyland states that “It is widely agreed …
that direct measures are actually no less ‘objective’ than indirect ones for
the reason that test design itself is not an exact science” (217 SLW). Casanave
takes a more in-depth look at the issues with indirect assessment and posits
that such “quantification” of writing needs to be abandoned in favor of more
accurate assessment tools. She states, “What matters most in fair writing
assessment are the very local and diverse practices that supporters of
traditional ‘objective’ assessment wish to downplay or to erase through
mathematical juggling” (123). In other words, such a scientific mode of
assessment will not work for so subjective a field as writing. Casanave
even introduces Elbow’s concept of “liking” a work, a term that
could not be any more subjective (128).
Hyland – whose chapters, again, are more practical this week
than Casanave’s – spends much time discussing how to reliably and accurately
score writing. In the SLW text, he discusses the positives and negatives of holistic scoring,
analytic scoring, and trait-based scoring, and he gives examples of what
scoring sheets for these modes would look like. He also discusses portfolio
assessments in depth, their strengths and weaknesses, and which scoring method
“fits” best with portfolios. The use of portfolios is in part a response to
timed-writing tests, which are not only hard to score but don’t offer much
validity when assessing how well a student can convey his or her ideas. A
portfolio offers numerous examples, the products of which were developed
(written, informally or formally evaluated, and revised) over time (lending
itself to a process-writing model). To transfer to a genre-based writing model, the
portfolio could simply include products exemplifying several different genres,
demonstrating mastery of more than just one.
Casanave’s writing project, however, is now more intriguing
to me than a portfolio. She made these
projects meaningful to her L2 students by asking them to focus on what
they needed to learn or know how to do, and then having them research and write on
that. The genres used varied, as Casanave stated, depending on the desired
outcome of the project.
Casanave also addresses the issue of authenticity in students’
writing (real-world writing versus writing in the classroom for the teacher) as
well as the conflict between the teacher’s roles as guide to the student writer and
evaluator of student writing. Casanave states that “Within their own
classrooms, however, writing teachers are inevitably caught in the dilemma of
being both supportive nonjudgmental readers and critical evaluators who
ultimately must assign a grade to student performance. These roles conflict.
Students know they will receive grades, even if individual papers are not
marked, and so tend to write for grades rather than to develop their writing”
(136). The writing project idea that Casanave offers in this chapter not only
addresses the need for authenticity (by allowing students to develop their
writing around what they need to learn or know) but also diminishes the need
for constant teacher evaluation of students’ writing for grades. As
Casanave asserts, writing projects allow assessment and instruction to be
joined while also reducing the focus on grades and scores.
Hyland's genre chapter -- which I read last -- is what I was missing in terms of assessment. Hyland's last paragraph in the chapter simply states his point, that "teaching writing using genre approaches emphasizes the importance of making known what is to be learned and assessed" (192). Earlier in the chapter, Hyland stresses that a genre-based approach to writing (and, consequently, to assessment) helps ensure that teachers are thoughtful about what needs to be taught; students are aware of what will be assessed; and that teachers will have greater success with assessment validity.
I personally liked Hyland's genre text's description of systemic functional writing assessments. Perhaps it is because I'd LIKE to be able to take a more objective approach to assessment than the subject of writing easily allows (as we've discussed, writing assessment is problematic in that it lies so far outside objective limits). As Hyland puts it, "...this model offers a clear picture of what counts as evidence of fulfilling basic genre requirements" (172).
One thing I noticed with these chapters is how often I forgot I was reading about assessing L2 writing. Writing assessment here seems a bit blurred, with assessment tools, strategies, and issues applying to writing by both L1 and L2 writers. One area Casanave delineates as specifically L2-related is the ethics of assessment. Both authors discuss how timed-writing assessments are problematic for L2 learners, but I would posit that these timed-writing assessments are almost as problematic for L1 learners as well. At any rate, this blurring of applicability between L1 and L2 learners just further demonstrates how much SLW teachers, TESOL instructors, and L1 compositionists need to work together in discovering/defining theory and then applying that theory to pedagogical practices.