Assessment Narrative - University of Kentucky

This assessment model is part of the WPA Assessment Gallery and Resources and is intended to demonstrate how the principles articulated in the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities are reflected in different assessments. Together, the White Paper and assessment models illustrate that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.

Assessment Narrative

Institution: University of Kentucky
Type of Writing Program: First-Year Composition (required)
Contact Information:
Dr. Connie Kendall (former WPA)
(513) 556-1427

Deborah Kirkman (Assistant Director)
(859) 257-1115

Dr. Darci Thoune (Associate Director)
darci.thoune@uky.edu
(859) 257-6995

Assessment Background and Research Question

In the fall of 2004, the University of Kentucky’s University Writing Requirement was revised significantly from a two-course first-year composition sequence (ENG 101 and ENG 102) to a single four-credit-hour first-year writing course (ENG 104) linked to an upper-level graduation writing requirement. Since the new course, ENG 104, entailed dramatic changes relative to its “fit” within a newly conceived, two-tier structure and an explicitly inquiry-based curriculum, a comprehensive review of the course was undertaken in fall 2006. The timing was fortuitous: the English department was embarking on a self-study in preparation for external review; the University of Kentucky (UK) president had focused the attention of the campus on assessment; and interest in undergraduate writing instruction was high. The WPAs at UK took advantage of this confluence of circumstances to create a new and much more comprehensive assessment than had previously been undertaken. This heightened interest in assessment also allowed us to secure funding more easily.

The most important question guiding the assessment was: to what extent are pedagogical practices in ENG 104 encouraging and enabling students to achieve the expected learning outcomes for the course? These outcomes reflected the new emphasis on critical inquiry and experientially based research and writing, a shift from the course’s former, narrower focus on argument and exposition. Course outcomes centered on students’ developing the abilities needed to frame and write projects of a substantial intellectual character, including comprehending, interpreting, and responding to written texts; developing complex questions and problems of public concern for research; and finding and incorporating pertinent academic scholarship and other sources, including personal experience, in their writing. From this broad goal, the following outcomes were defined. Students will:

  • Develop perspectives that take into account various forms of evidence and points of view
  • Engage in a range of writing activities to explore and express their experiences and perspectives
  • Research subject matter thoroughly and put readings into service of a stance or argument
  • Formulate a writing project coherently and organize it effectively
  • Collaborate with class members to investigate, share findings, and advance multiple viewpoints
  • Observe conventions of Standard Written English in paragraphs and sentences
  • Edit, proofread, and revise effectively
  • Develop a fluent prose style appropriate to the purposes for writing

Because UK’s writing program is very large (serving over 4,000 students annually and employing a cadre of roughly 100 writing instructors [adjuncts, TAs, and lecturers]), we also wanted to use this assessment to learn how consistently the effective writing strategies and critical thinking skills included in the outcomes were being incorporated into the course design and pedagogical practices of instructors. Additionally, of course, we wanted to learn about the level at which first-year students were employing these strategies and skills in their writing following their experiences in the course.

Assessment Methods

To assess these outcomes, we outlined three focus areas for programmatic review. Assessment instruments were designed to gather information from different perspectives on these areas.

Focus area I (course design and pedagogy) examined the extent to which instructor assignments fostered the course goals of developing students’ critical thinking capacities and effective writing skills; it employed a 3-point rubric to gauge the explicitness or embeddedness of the learning outcomes in a 10-page research-based essay assigned across all sections of the first-year writing courses.

Focus area II surveyed student and instructor perceptions of the scope and quality of writing instruction in meeting course objectives, as well as the extent to which instruction fostered the development of cognitive skills and affective dispositions relative to critical thinking capacities.

Focus area III aimed at creating a scoring rubric that would help us determine the extent to which first-year student writing demonstrated effective writing strategies and critical thinking skills through the direct assessment of the 10-page research-based essay.

The process for determining criteria for this assessment was a crucial part of the project and reflects UK’s commitment to locally designed and developed writing assessment. Toward this end, the 10-member assessment committee engaged in a yearlong series of conversations about what we valued in student writing, what was essential for a good assignment, and which of the various approaches for the direct assessment of student writing seemed most applicable to our situation.

We began by constructing the two surveys (focus area II). We had a student survey already in place and thus felt this was a good place to start, tweaking the language and revising the questions to more specifically address the new curricular goals. For example, the old survey had students responding to the statement, “I am a better reader after having taken this course,” whereas the revised survey had students responding to the statement, “I improved as a critical reader after having taken this course.” We changed the language in the statement, in part, to determine whether instructors were using terms such as critical thinking and critical reading in their classrooms. We also included a variety of new statements in the survey, such as “Reading responses, journals, and in-class and/or collaborative writing activities helped me to explore and develop ideas” and “I feel confident using a wide variety of methods for obtaining research (online databases, fieldwork, surveys, interviews, the library, etc.).”

We then created an entirely new instructor survey (no such survey existed previously) that addressed these same questions, but with adjusted focus, to elicit responses from a teacherly perspective. We used a 5-point Likert scale for both surveys. Additionally, each survey included space for narrative responses. Given that our assessment committee largely consisted of teaching assistants pursuing English literature degrees (UK does not offer graduate study in composition/rhetoric) with little formal training in composition theory/pedagogy (only one required course completed at the start of their programs) but with rich and varied teaching experience in the composition classroom, our decision to begin with the creation of the two surveys helped us build community and lay the groundwork for the more difficult work that lay ahead—devising workable rubrics to assess instructor assignments and actual student writing. We distributed the surveys to 1,620 first-year students and 50 writing instructors at the end of the fall semester. With the help of UK’s Office of Assessment, we were able to report preliminary results to our writing instructors at the all-staff meeting held in January 2007. We repeated only the student survey at the end of the spring semester, reaching another 1,260 first-year students, bringing the total number of student surveys collected to 2,880.

To develop criteria for the direct assessment of student writing (focus area III), our committee reconvened at the start of the spring semester and engaged in a series of structured conversations about our program’s “rhetorical values” for first-year writing. By far, these conversations proved to be our most contentious and ultimately the most productive for the creation of our scoring rubric—an analytical (as opposed to holistic) rubric that took into account dimensions of critical thinking skills and effective writing strategies by designating five specific traits that could be scored according to varying levels of student mastery. Disenchanted with the rubrics available to us from outside sources, even with modification, the committee sought another approach to forming a rubric that would be more responsive to local needs and dynamics. We ultimately took our lead from Bob Broad’s What We Really Value and his notion of “dynamic criteria mapping” as the process by which we would identify the values that matter most to our UK first-year writing community, and thus would help us define the criteria for the scoring rubric.

The three focus areas in this assessment were not understood as hierarchical in nature. That is, we viewed each component as necessarily in conversation with the others, and we sought to design assessment tools that would help us triangulate our data. It was important to us to demonstrate to the audiences who would receive our final reports that understanding the status of first-year writing at UK meant more than directly assessing writing so as to draw quick conclusions about how well (or how poorly) that writing met university-approved standards. Instead, we were interested in showing the many nuances that evaluating student writing entails, in including student and instructor perceptions of the course itself, and in more clearly articulating what we meant by the terms identified in the learning outcomes.

Developed through an extensive process of structured discussion and revision, the scoring rubric settled on five primary traits of effective writing:

  • Ethos (engaging with issues, demonstrating an awareness of intended audience, using distinctive voice and tone)
  • Structure (arrangement that complements the writer’s purpose, logical structure, awareness of rhetorical moves)
  • Analysis (taking a stance, considering multiple perspectives, avoiding easy conclusions)
  • Evidence (selecting and incorporating appropriate sources, presenting evidence in a balanced way, using sources to advance ideas)
  • Conventions (use of appropriate citation methods and conventions of Standard Written English and attention to sentence-level concerns appropriate to the project’s purpose)

Each of the five traits was scored on a 4-point scale describing the level of mastery: scant development, minimal development, moderate development, or substantial development.
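For illustration only, the short sketch below shows one way the five-trait, 4-point scoring scheme could be represented and tallied in code. The trait names and mastery labels come from the narrative above; the sample essays, scores, and helper names are hypothetical and are not drawn from the UK assessment data.

    # Hypothetical sketch (not from the UK assessment): representing the
    # five-trait, 4-point scoring scheme and averaging scores across essays.
    from statistics import mean

    TRAITS = ["ethos", "structure", "analysis", "evidence", "conventions"]
    MASTERY = {1: "scant development", 2: "minimal development",
               3: "moderate development", 4: "substantial development"}

    # Each essay gets one score (1-4) per trait; these two essays are invented.
    essay_scores = {
        "essay_A": {"ethos": 3, "structure": 2, "analysis": 2, "evidence": 3, "conventions": 4},
        "essay_B": {"ethos": 2, "structure": 3, "analysis": 2, "evidence": 2, "conventions": 4},
    }

    # Report a mean score per trait across all scored essays.
    for trait in TRAITS:
        avg = mean(scores[trait] for scores in essay_scores.values())
        print(f"{trait:12s} mean = {avg:.2f} ({MASTERY[round(avg)]})")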

The director of UK’s Office of Assessment assisted the writing program directors in drawing a credible sample of first-year writing by using a simple random sampling method across all sections of ENG 102 and ENG 104. Before grading the 10-page research-based essay, individual instructors were directed to make clean copies of the randomly selected student essays and deliver these to the writing program office. Office staff then removed all identifying information relative to students, instructors, and sections, and then made copies available for the direct assessment. In total, we collected approximately 250 student essays. In preparation for the scoring sessions, the Assessment Coordinating Committee (the three directors and the writing program intern) read approximately 50 essays and, from their reading, decided on six anchor essays to facilitate “norming” (or what we called “articulation”) conversations prior to the scoring of “live” essays. Another 15 essays were used periodically to help the group recalibrate during the live scoring sessions. None of the essays that were used to help us articulate the criteria and standards was included in the final results. Three of these essays were used to check inter-rater reliability during the scoring session in a “blind” fashion (i.e., the raters were unaware that inter-rater reliability was being checked).
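As a purely illustrative aside, the sketch below shows, under assumed data structures, how a simple random sample of essays might be drawn across sections and how agreement between two raters could be checked on the “blind” reliability essays. The function names, section labels, scores, and the exact-agreement measure are assumptions made for the example, not the committee’s actual procedure.

    # Illustrative sketch only: simple random sampling of essays across sections
    # and an exact-agreement check between two raters on blind reliability essays.
    import random

    def sample_essays(essays_by_section, n):
        """Draw a simple random sample of n essay IDs from the pooled sections."""
        pool = [eid for essays in essays_by_section.values() for eid in essays]
        return random.sample(pool, n)

    def exact_agreement(rater_a, rater_b):
        """Proportion of trait scores on which two raters gave identical ratings."""
        pairs = list(zip(rater_a, rater_b))
        return sum(a == b for a, b in pairs) / len(pairs)

    # Hypothetical essay IDs grouped by (anonymized) section.
    essays_by_section = {
        "section_01": ["e001", "e002", "e003"],
        "section_02": ["e004", "e005", "e006"],
    }
    print(sample_essays(essays_by_section, n=4))

    # Hypothetical 4-point trait scores from two raters on the same blind essay set.
    rater_a = [3, 2, 4, 3, 2]
    rater_b = [3, 2, 3, 3, 2]
    print(f"exact agreement: {exact_agreement(rater_a, rater_b):.0%}")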

We designed a second rubric to assess instructor assignments (focus area I) last, after the direct assessment of student writing. Following the design of the essay scoring rubric, the instructor assignment rubric used the same five criteria. However, instead of using a Likert scale, we adopted, with the approval of the Office of Assessment, a 3-point scale indicating whether each criterion was explicit, implicit, or absent in the assignment.

Assessment Principles

During the programmatic review of first-year writing at UK, the following assessment principles were generally viewed as the most important for our needs and purposes:

  1. The assessment process should build community among our large cadre of writing teachers—a group consisting mainly of teaching assistants and contingent faculty members, who often feel excluded from the life and governance of the English department community at large.
  2. Assessment should be both descriptive, helping instructors to understand what is going on in the courses, and informative, providing data to simultaneously help us improve our instruction and help our students achieve the course goals.
  3. Assessing student writing must always take into account the rhetorical values that writing teachers bring to the process, and must be locally designed and implemented. In addition, reports about the status of first-year students’ writing should not be limited to data collected through direct assessments, but should also include a range of data (e.g., student and instructor perceptions, the efficacy of writing assignments and course design) that can enable a WPA to bring much-needed context to the reporting of scores.
  4. Assessments should be low stakes for instructors and students. We attempted to minimize the perception of risk to both students and teachers in several ways: Student and instructor surveys were universally distributed but were nevertheless defined as voluntary. Because the surveys were anonymous, respondents could in fact opt out without repercussion. The surveys were also brief, requiring only 5 to 10 minutes to complete, and were given at the end of the semester and aligned with the usual university-approved protocol for course/instructor evaluations.

All identifying information was removed from all documents collected (e.g., surveys, writing assignments, essays), and all reported data were aggregated. Each component of the assessment was programmatic in nature and not tied to either course grades or instructor evaluation.

The assessment process, as it was designed and as it unfolded, was made transparent to all participants. Communicating early and often with students and teachers about the goals, uses, and meaning of the assessment not only helped to alleviate potential concerns but also helped us reinforce the idea of assessment as an ongoing and organic process at UK, responsive to local needs and altogether necessary for the health and maintenance of the first-year writing program.

Assessment Results and Follow-Up Activities

With the assistance of UK’s Office of Assessment, we are in the process of a full analysis of the data collected. Initial findings from the direct assessment (focus area III) suggest that students completing first-year writing at UK are well versed in the conventions of Standard Written English and general documentation practices. However, students appear to be less able to engage in sophisticated analysis, to establish a strong sense of ethos, to use supporting evidence effectively, and to evince an awareness of multiple perspectives on a given topic, all elements associated with good writing in our UK context. These factors speak to the need for the writing program to foster the development of our students’ critical thinking skills. A revision to the ENG 104 curriculum supports this development by promoting academic inquiry and the discovery of knowledge through experiential, collaborative learning.

Preliminary data from the student surveys suggest the following:

Program Strengths

  1. The mean score for overall preparedness and fairness of writing program instructors was 3 on a 4-point scale.
  2. The mean score for perceived opportunity for considering issues/questions of public significance from multiple perspectives was 3 on a 4-point scale.
  3. The mean score for teaching of writing as a process was 3 on a 4-point scale.
  4. The mean score for student perceptions of writing program instructors as caring, committed, respectful, and approachable individuals was 3.5 on a 4-point scale.

Opportunities for Improvement

  1. Students did not perceive their critical reading capacities as generally improved after having taken this course.
  2. Students wanted instructors to offer a wider variety of writing opportunities (e.g., more practice with new tasks without grades attached).
  3. Students did not perceive much time spent on issues of standard usage and grammar.

In General

  1. Relative to student perceptions of the course curriculum, there is general correspondence between ENG 102 and ENG 104, which suggests there is a discernible measure of parity across first-year writing courses.
  2. Students generally agree that first-year writing courses are either “more” or “much more” challenging than other 100-level or introductory courses.
  3. Students generally agree on the overall value of the course (3 on a 4-point scale) and the overall quality of teaching (3 on a 4-point scale).

Assessment Follow-Up Activities

Thus far, these findings have influenced both our new instructor orientation and our mandatory all-staff meeting this year. We have used our initial findings to help train new instructors, to create professional development sessions, and to begin gradually shifting our instructors toward creating assignments that are driven by inquiry, focus on issues of public intellectual significance, draw on multiple perspectives, and use a variety of evidence and research methods.

In addition, the assessment plan we devised has been the subject of a university-wide conversation as UK revises its general education course requirements, a process that involves reviewing ongoing assessment practices relative to the stated learning outcomes among and across a variety of disciplines.

Assessment Resources and Transferability

The writing program received $16,000 from the UK provost’s office to conduct the review and assessment. This money, along with writing program funds (roughly $2,500), was used to compensate 15 raters during the scoring sessions, to bring an expert consultant to campus to conduct a half-day workshop on writing assessment, and to pay for the costs incurred during our weeklong scoring sessions (e.g., printing costs, food/beverage costs, etc.).

As mentioned briefly earlier, our Assessment Coordinating Committee spent many hours in conversation and consultation—meeting roughly every two weeks throughout the year for the purposes of identifying (and revising) our community’s rhetorical values for student writing, designing our assessment tools, selecting raters from a pool of applicants, and generally coming to terms with our goals and protocols for the direct assessment of student writing (that is, our “live” scoring sessions held in May 2007). This was, in all ways, a yearlong project, fully dependent on the generosity and spirit of goodwill to be found among our writing teachers at UK. As is usually the case, financial compensation never fully “covers” the amount of time and energy spent on a project of this size. A final report will be submitted to the Department of English, the Office of Assessment, and the Office of the Associate Provost for Undergraduate Education.

The associate provost’s office has already indicated that our assessment should be viewed as a viable model for thinking about assessment across the university landscape. On a more personal level, we believe that our process of creating a rubric was both amazing and revealing. As teachers of college writers, we learned that the articulation of values about student writing—fraught as that is with contestation and emotion and deeply held beliefs about what makes a “good” piece of writing—is in fact the very foundation for what we do (and hope to do) in the composition classroom.

Reference

Broad, Bob. What We Really Value: Beyond Rubrics in Teaching and Assessing Writing. Logan: Utah State UP, 2003.