Assessing digital projects can be one of the more challenging components of bringing digital literacy and digital creativity into the classroom. The challenge only grows if students are allowed to pursue their own unique outputs (video, podcast, webtext, etc.) within the framework of the assignment. As such, when thinking about the kinds of assessment practices I might employ, I begin with a few simple orienting questions:
- What am I trying to assess? Product? Process? Learning?
- What types of artifacts/evidence are needed to assess this work?
- Does the project belong to a genre? If so, does it already have established conventions and expectations?
- Should I apply a general heuristic for all projects or generate output-specific criteria? Also, should I generate the heuristic on my own or in consultation with students?
There is no one-size-fits-all model for digital project assessment. Not only does each discipline carry its own expectations and implications, but each project addresses its own unique audience and each medium invites its own expectations. What might be a good fit for Project A just doesn't work for Project B. As such, each assignment (for each student/group) will likely necessitate its own set of assessment practices.
But I've outlined four approaches below to serve as starting places for assessment. They vary in depth and application, not to mention what they prioritize in assessment, but all have worked for me over the years.
SEC Approach
Perhaps the lowest-hanging fruit on the assessment tree is the Self-Evaluation Criteria approach. This approach works well when dealing with a lot of uncertainties: trying an assignment for the first time in a class, completing projects where the output doesn't have well-established conventions, having students create things for which there are few (good) examples, etc.
This is one of my favorite assessment approaches because it offers a lot of flexibility and places the responsibility for establishing assessment criteria on the students. The basic practice is to invite students to generate their own criteria:
1. Invite them to determine (individually or collectively) the values upon which they want to be evaluated.
2. Require that they provide a detailed explanation of those criteria and how they see them operating in practice.
3. Require the students to evaluate themselves using those criteria.
4. Once steps 1-3 are complete, the instructor uses the students' criteria to offer their own evaluation (using the student rationales and self-evaluations as a guide to how the criteria apply).
Genre Approach
If the assignment output fits into an established genre (e.g., an interview-based podcast), then this approach can work well not only for assessment but also for helping students understand practices related to specific disciplines. The basic approach is to study examples of the genre as a class and do in-class research to determine the genre's key features. I often extend this into a relatively formal genre analysis:
- What are the key features of this genre? How do they relate to the author's purpose? How do they operate in relation to the target audience?
- What elements are common (or even required)? i.e., what makes this genre a genre? Style? Tone?
- What components are included and/or rhetorical strategies are used in "good" examples of the genre?
Using what we've learned through research and discussion, we collectively generate the evaluative criteria to be used in assessing projects.
Kuhn+2 Model
The Kuhn+2 model offers rigor with great flexibility. It provides a holistic set of criteria for responding to digital projects and anchors that assessment conceptually and rhetorically. The model itself comes from the work of Virginia Kuhn (2008), "The Components of Scholarly Multimedia." In 2010, Kuhn, with DJ Johnson and David Lopez, extended the model in specific relation to the Institute for Multimedia Literacy programs at the University of Southern California (see "Speaking with Students: Profiles in Digital Pedagogy"). Cheryl Ball (2012) then refined it further in "Assessing Scholarly Multimedia."
The Kuhn+2 model offers a heuristic ecology focused on six key areas; guiding evaluative questions for each category appear below.
Conceptual Core
- What is the project's controlling idea? Is it apparent in the work?
- Is the project productively aligned with one or more multimedia genres? (If so, what are they? How do you know?)
- Does the project effectively engage with the primary issue of the subject area into which it is intervening?
Research Component
- Does the project display evidence of substantive research and thoughtful engagement with the subject matter?
- Does it use a variety of credible (and appropriate) sources and cite them appropriately?
- Does the project deploy more than one approach to the issue?
Form & Content
- Do the project's structural/formal elements serve the conceptual core?
- Do the project's design decisions appear deliberate and controlled? Are they defensible?
- Is the project's efficacy unencumbered by technical problems?
Creative Realization
- Does the project approach the subject in a creative or innovative manner?
- Does the project use media and design principles effectively?
- Does the project achieve significant goals that could not be realized on paper?
Audience
- Is the target audience for the project apparent in the work?
- Does the project work at the appropriate levels (of language, design, function, etc.) for its target audience?
- Has the project been created with an attentiveness to the experience it offers its targeted audience?
Timeliness
- Is the project timely in its engagement/focus?
- If not, does the project attempt to demonstrate why it is relevant to contemporary matters/concerns?
Example Application: Scrolling Digital Essay Rubric
As part of the ENG-W171: Projects in Digital Literacy and Composition course at Indiana University, the co-instructors generated a point-based rubric for the first Scrolling Digital Essay assignment. This rubric blended the focal elements of the Kuhn+2 model with the traditional assessment elements of IU's first-year composition course, ENG-W131.
Learning Record Method
The Learning Record Method (LRM) comes from the work of Margaret Syverson and asks students to engage critically in self-reflection as part of the project work.
Students collect evidence throughout the semester and/or the process of completing the project: they create work logs, curate email exchanges, produce and archive self-reflections, and the like. At the end of the semester/project, they use that evidence (gathered in their learning record) to make the case for the grade they feel they deserve.
The assessed grade, then, is based partially on what was created, but focuses more overtly on (a) the students' ability to showcase their learning and demonstrate its value and (b) what they've learned, not necessarily what they've produced.
For a fuller exploration of the LRM, visit the Learning Record website.