Methodology/Tools

Learning how to use a new tool can be frustrating at first because it feels awkward in your hands, has more power than you’re accustomed to, or has more bells and whistles than your last tool. But once you have it figured out, you discover that you can’t live without it. Work tools are similar. Having the right tool helps us do our job better, and sometimes we have to learn how to use new tools as they come along. And once we have a full toolbox, we need to determine which tool is right for which job.

So, where do you begin to choose the right assessment tool for a project? As with any assessment project, you start with what you are trying to achieve or measure. It could be learning, satisfaction, attitude change, values, habits of mind, behavior change, frequency of use, and so on. As you can imagine, some tools are better suited to each of these than others. In Assessing Student Learning: A Common Sense Guide (2009), Suskie describes the following assessment tools:

Interviews and focus groups, while time-consuming, can help with these aspects of the assessment process (p. 195):

  • “Planning an assessment by identifying goals, issues, and questions that your assessment efforts should address.
  • Corroborating the results of quantitative assessments….
  • Understanding the results of quantitative assessments.”

Observation (direct measure of behavior)

  • This is used by supervisors when providing annual or semi-annual feedback to employees.
  • Watching how people interact with a space can tell you more about their behavior than asking them about it. Essentially, you are observing their behavior directly and reporting on it.

Rating scales (used for a variety of measurements including satisfaction, attitude, disposition, opinion)

  • Likert scales on a survey, which can include scales such as “Excellent to Poor, Frequently to Never, Very Positive to Very Negative, Very Comfortable to Very Uncomfortable, Strongly Approve to Strongly Disapprove, and Much Better to Much Worse” (p. 196). I recommend using a 4-point scale, which prevents people from taking the neutral way out. Including an NA option may be a good idea, though, depending on the item being asked.
  • Ecosystem rating scales, which pair two questions for every item and can help you get more clarity about a topic. A sample pair is “How satisfied are you with X?” and “How important is X to you?”, with X held constant (threw a little algebra at you there!).

Reflection (best for measuring behavior, values, attitudes, habits of mind)

  • Minute-papers during a program, meeting, class, training, etc.
  • Short self-reflection questions during a program, meeting, class, training, etc.
  • Before-and-after reflection (for example, before a beginning leadership training program, ask participants “What is leadership?” Then, at the end of the program, ask the same question and compare responses to see whether participants can answer it more effectively after the training.)
  • Journals with very clear learning goals: “Students should understand the journal’s purpose and what they will learn by creating it. Effective journals also require clear instructions or prompts. Students should understand exactly what they should write in the journal and how often. Finally, effective journals require useful feedback from faculty and staff on how well students are achieving the journal’s goals” (p. 194). (Suskie also suggests an alternative to faculty/staff feedback: have students provide feedback to one another.)

Rubrics (can be used to measure knowledge acquisition, proficiency, behavior, development over time)

  • A rubric provides specific descriptions of beginner, intermediate, and advanced proficiency in particular competencies. You can use it to rate a student’s proficiency, performance, behavior, etc. You can also have students use it to rate themselves.

Surveys (can be used for assessing attitudes, values, dispositions, habits of mind, knowledge acquisition [if written as a test], or needs)

  • Use an intercept survey to catch people as they enter or leave a space and ask them a few questions about their experience.
  • To measure knowledge acquisition, questions could take the form of fill-in-the-blank, open-ended, multiple-choice, ranking, or matching items. These are all familiar ways to measure knowledge, but they have historically been limited to the classroom. I recommend trying a few on your next survey and seeing what you learn. If one of your learning outcomes is that students will learn something new, then why not ask them to show you that they learned it?
    • Here’s an example:

      Match each office with its corresponding task:
      A) Public Service Center     1) Resume writing help
      B) Career Services           2) Join a student organization
      C) Recreational Services     3) Into the Streets
      D) Dean of Students          4) Take the swim test

References

  • Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco: John Wiley & Sons, Inc.