Frequently Asked Questions

What happened to measuring satisfaction?

It is still important to measure satisfaction, and for some of you, absolutely critical to your work. At the same time, here is what Bresciani, Zelna, and Anderson (2004) have to say about measuring satisfaction:

The assessment of student satisfaction, needs, and service utilization is very important. It has great purpose, particularly for constituents who place a heavy emphasis on students’ approval ratings. However, findings from this type of assessment do not necessarily help you understand your program’s contributions to the greater work of the university. In other words, the assessment findings do not tell you how your program contributes to student development and learning, and the findings seldom help you make decisions for continuous improvement of your programs. (p. 19)

How do we know when a program is successful?

This has come up quite a bit in conversations recently. Someone will tell me that a particular program was successful, and I will ask, “How do you know?” or “What does that mean?” Unfortunately, my questions seem to burst their bubble a bit, which is not my intent; I really want to know how they are defining and measuring success. Often a program is deemed successful if a lot of people attended. But is that always a good measure of success? Sometimes it is. For example, if a lot of students went to the Slope Day breakfast and picked up a snack, the event would be successful because the goal was to feed as many students as possible that day. On the other hand, if a lot of students attended a leadership workshop, would attendance be your best measure of success? Here are some questions to ask yourself about your program (Bresciani, 2002; Bresciani, Zelna, & Anderson, 2004, p. 9):

  • What are we trying to do and why?
  • What is my program supposed to accomplish?
  • How well are we doing it?
  • How do we know?

These four questions will help you define what you mean by success, because you already know what it looks like! You might not have put words to it before, and that is the challenge. Once you can answer these questions, think about how that will help you improve student learning, advocate for more resources, and articulate your work to others.

Why is this assessment stuff so hard?

Starting anything new is difficult, especially when you already have a packed schedule and haven’t budgeted time for assessment into your work. In addition, I hear from many of you that you aren’t confident that you have the skills to do assessment. Those are three biggies right there: new, time-consuming, and intimidating. YIKES! But I bet you can answer the questions listed in the success section above, can’t you? You know your work and you know what you are trying to achieve to help students learn and develop. You even “know it when you see it” in terms of that learning and development. And it is much more complex and nuanced than measures of satisfaction (not that there’s anything wrong with that!). Pam Shefman, Director of Planning and Assessment at the University of Houston, says that assessment is a tool, not a task. Right now, many of us are looking at it as a task; eventually it will become a tool to help us do our work better and make an even deeper impression on the students/clients/colleagues we serve.

“…engaging in systematic, meaningful assessment means that, similar to establishing other good practices, you have to begin to develop habits – habits of assessment.” (Bresciani, Zelna, & Anderson, 2004, p. 21)

How do we create and use assessments that make it possible for all voices to be included in data-informed decision-making?

Sonia DeLuca Fernández, Assistant Vice Provost for Educational Equity at Penn State, calls this a critical assessment approach, linking it to academic areas including critical theory, which Wikipedia defines as “a school of thought that stresses the reflective assessments and critique of society and culture by applying knowledge from the social sciences and the humanities.”

We start by treating people with respect. There are many ways to show respect during an assessment project:

  • Invite students from a variety of backgrounds to help you design your project. This will engage them in the project and help them feel a sense of ownership in the work. It will also help you design a project that is culturally relevant to students.
  • Test the tool you have developed on a small group of people to see if it is consistent with the values you want to communicate and if students feel comfortable participating in it.
  • Include a representative sample of participants in your project (if you are using a sample).
  • Seek additional ways to incorporate diverse perspectives into your work. Some ideas include:
      • Invite students to a focus group
      • Interview students one-on-one
      • Over-sample students from under-represented populations so that enough of their responses come back to represent their perspectives (see the sketch after this list)
      • Ask someone whom the students trust to reach out to those students and invite them to participate (personal invitations go a long way!)
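
For those of you who work with rosters or survey data programmatically, here is a minimal sketch of what over-sampling an invitation list could look like. It assumes pandas, and the roster, column names, group labels, and proportions are all invented for illustration; they are not drawn from any real data.

```python
# A minimal, hypothetical sketch of over-sampling when building an invitation list.
# The roster, column names, and group sizes below are made up for illustration.
import pandas as pd

roster = pd.DataFrame({
    "netid": [f"student{i}" for i in range(1000)],
    "group": ["majority"] * 950 + ["under_represented"] * 50,
})

# Invite 10% of the larger group, but every student in the smaller group,
# so their voices are not drowned out by sheer numbers.
majority_invites = roster[roster["group"] == "majority"].sample(frac=0.10, random_state=1)
minority_invites = roster[roster["group"] == "under_represented"]

invitation_list = pd.concat([majority_invites, minority_invites])
print(invitation_list["group"].value_counts())  # roughly 95 majority, 50 under_represented
```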

Once you have collected results, you will want to be sure you are analyzing and reporting on them in a way that incorporates the voices of all of the participants. For example, if certain populations/groups of students respond differently from the overall average, be explicit in describing their perspective in your analysis. Often, survey/assessment results are reported using overall average responses for all questions. While it is important to understand the overall Cornell student experience (for example), students of color may often report differently, but because they are a minority of respondents, their perspectives are effectively erased when only the overall average response is reported.

Here is an example of how reporting only overall results can gloss over the experiences of people of color:

  • Statistics about women leaders. Overall, women are gaining in leadership roles in the United States. However, when we look at those numbers more closely, we can see that women of color are still struggling to make comparable gains. If we only report the overall average results for all women, then the experiences of women of color are not represented, and assumptions are made that may not be accurate for all women.
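
To make the arithmetic behind this concrete, here is a minimal sketch, assuming pandas and entirely made-up ratings and group labels, of how an overall average can look fine while hiding a smaller group’s very different experience:

```python
# Hypothetical survey data: 90 respondents from group A, 10 from group B (1-5 scale).
import pandas as pd

responses = pd.DataFrame({
    "group":  ["A"] * 90 + ["B"] * 10,
    "rating": [4.5] * 90 + [2.0] * 10,
})

print(responses["rating"].mean())                   # overall average: 4.25 -- looks fine
print(responses.groupby("group")["rating"].mean())  # A: 4.5, B: 2.0 -- a very different story
```

Reporting only the 4.25 would erase group B entirely; the disaggregated view is what lets you describe each group’s experience explicitly.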

How can assessment help us improve services and support for ALL Cornell students? By understanding the student experience as it is described by students from ALL backgrounds. Here are some tips:

  • Ask good questions. Remember: even if your intentions are good, your impact is what is felt and remembered, so check your work to confirm that your project will have a positive impact on participants!
  • Be invitational.
  • Work with affinity groups to design assessments and invite participation.
  • Be honest and transparent in your reporting.
  • Don’t assume you know someone’s experience…check it out!

Where can I learn more about assessment?

Check out these websites for more information about assessment:

  • Oregon State student affairs assessment site
  • NYU student affairs research and assessment site
  • Duke University student affairs assessment site
  • Student Affairs Assessment Leaders
  • NASPA Assessment, Evaluation, and Research Knowledge Community
  • ACPA Commission for Assessment and Evaluation
  • Association for the Study of Higher Education (ASHE)
  • National Institute for Learning Outcomes Assessment (NILOA)


References

  • Bresciani, M. J., Zelna, C. L., & Anderson, J. A. (2004). Assessing student learning and development: A handbook for practitioners. Washington, D.C.: National Association of Student Personnel Administrators.