Assessment Glossary

Adapted from:

  • Timm, D., Davis Barham, J., McKinney, K., & Knerr, A. (2013). Assessment in practice: A companion guide to the ASK Standards. Washington, DC: American College Personnel Association.
  • Gavin Henning
  • Auburn University


Accreditation – a quality control process in which institutions or programs voluntarily engage in a rigorous review for the purpose of demonstrating compliance with a set of standards established by the accrediting organization.
Administrative Review – an internal, divisional review of an administrative unit to assess customer satisfaction, identify opportunities to improve service effectiveness and efficiency, and support continuous improvement. A review generally takes 6-12 months and should be connected to the strategic planning process.
Aggregate – report of the results at a summary versus individual case level.
Alignment – the process of ensuring or strategically developing programs/services that reflect the stated goals of the department, division, and institution.
Analysis – the process by which collected data are transformed into information that can be shared and used.
Anonymity – the condition in which there is no way to identify a participant in the assessment process.
Assessment – actions taken to gather, analyze, and interpret information and evidence that supports the effectiveness of institutions, departments, divisions, or agencies.
Assessment Cycle – the cycle refers to the full sequence of assessment activities including identifying outcomes, determining methods, planning assessment, gathering evidence, analyzing and interpreting evidence, sharing results and implementing change.
Assessment Plan – the assessment plan is the intentionally developed sequence of activities that ensures coherence from program planning through implementation.
Baseline Assessment Tool – Campus Labs' product that provides the technology, resources, and consultation required to create an integrated, coordinated, and comprehensive assessment approach. The purpose of Baseline is to connect and translate assessment data in order to improve programs and services both inside and outside the classroom.
Baseline Data – information that serves as a basis for comparison in assessing a program's impact or effectiveness.
Campus Labs – a platform that can be used to collect information from students to assess the impact of programs and services. Campus Labs serves over 650 institutions, combining data collection, reporting, organization, and campus-wide integration. SCL has the following Campus Labs products: Compliance Assist and Baseline.
CAS – Council for the Advancement of Standards in Higher Education.
CAS Standards – standards which help professionals create high-quality programs and services.
Closing the Loop – the process of utilizing data for improvement or modification of a program, service, or department.
Coding – the process of translating raw data into meaningful categories for the purpose of data analysis. Coding qualitative data may also involve identifying recurring themes and ideas.
Compliance Assist – Campus Labs' fully integrated, comprehensive online solution for managing institutional research, planning, and accreditation needs. SCL uses this product for documenting institutional effectiveness and for strategic planning.
Confidentiality – ensuring that a participant's identity is difficult to pinpoint because information is gathered or reported in a way that prevents someone from putting the various data and demographic information together to identify a specific participant.
Confirmability – the extent to which the results of the assessment project make sense and reflect the data gathered rather than the assessor's biases.
Correlation – a relationship between statistical variables or phenomena that tend to vary, be associated, or occur together in a way not based on chance alone; commonly expressed with a correlation coefficient that ranges from -1.0 to 1.0.
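
For illustration only, a minimal Python sketch of computing a correlation coefficient for hypothetical paired scores (the variable names and values are made up):

    # Illustrative sketch: Pearson correlation for two sets of paired scores.
    from statistics import correlation  # requires Python 3.10 or later

    hours_attended = [1, 2, 3, 4, 5]   # hypothetical program attendance
    satisfaction   = [2, 3, 3, 5, 5]   # hypothetical satisfaction ratings

    r = correlation(hours_attended, satisfaction)
    print(round(r, 2))  # values near 1.0 or -1.0 indicate a strong correlation
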
Credibility – the extent to which the information provided is valid and believable to the larger audience.
Crosstab – a table that displays the number and/or percentage of responses for each combination of answers to two questions.
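
As a brief sketch (assuming the pandas library and made-up survey responses), a crosstab of answers to two questions might be built like this:

    # Illustrative sketch: crosstab of two hypothetical survey questions.
    import pandas as pd

    responses = pd.DataFrame({
        "class_year": ["First-year", "Senior", "Senior", "First-year", "Senior"],
        "attended":   ["Yes", "No", "Yes", "Yes", "No"],
    })

    # Counts for each combination of class year and attendance.
    print(pd.crosstab(responses["class_year"], responses["attended"]))

    # The same table expressed as row percentages.
    print(pd.crosstab(responses["class_year"], responses["attended"], normalize="index"))
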
Curriculum Map – a chart that shows where and how program outcomes are addressed in the curriculum, to ensure completeness and avoid excessive overlap.
Dashboard – a summary view of key performance indicators relevant to a particular objective.
Data – information gathered for the purpose of research, assessment, or evaluation.
Dependability – the extent to which decisions made throughout an assessment project are appropriate and consistent.
Descriptive Statistics – statistics that summarize the main features of a collection of quantitative data, including measures of central tendency (mean, median, mode), measures of variation (standard deviation, variance), and measures of relative position (quartiles, percentiles).
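
For illustration, a short Python sketch (made-up scores) computing these descriptive statistics with the standard library:

    # Illustrative sketch: descriptive statistics for a small set of scores.
    import statistics

    scores = [3, 4, 4, 5, 2, 4, 5]

    print(statistics.mean(scores))      # central tendency: mean
    print(statistics.median(scores))    # central tendency: median
    print(statistics.mode(scores))      # central tendency: mode
    print(statistics.stdev(scores))     # variation: sample standard deviation
    print(statistics.variance(scores))  # variation: sample variance
    print(statistics.quantiles(scores, n=4))  # relative position: quartile cut points
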
Developmental Outcomes – detailed statements, derived from program goals and grounded in professional theory, epistemology, and research, that specifically describe what the student should know and be able to do as a result of the program/service; often discussed in conjunction with learning outcomes, as in learning and development outcomes.
Direct Measures or Evidence – tangible, visible, self-explanatory, and compelling evidence of exactly what students have and have not learned. Direct measures include both objective exams and performance measures such as demonstrations, internships, and portfolios that are evaluated by individuals other than the instructor.
Effect Size – how practical significance is expressed. It is a way of quantifying the size of the difference between two groups. This statistic is calculated and expressed differently depending on the type of analysis.
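
As one common example (Cohen's d, not the only way to express effect size), a small Python sketch comparing two hypothetical groups with made-up scores:

    # Illustrative sketch: Cohen's d for the difference between two hypothetical groups.
    from math import sqrt
    from statistics import mean, stdev

    participants     = [4.2, 4.5, 3.9, 4.8, 4.1]  # e.g., students who attended a program
    non_participants = [3.6, 3.9, 3.4, 4.0, 3.7]  # e.g., students who did not

    # Pooled standard deviation; equal group sizes are assumed for simplicity.
    pooled_sd = sqrt((stdev(participants) ** 2 + stdev(non_participants) ** 2) / 2)
    d = (mean(participants) - mean(non_participants)) / pooled_sd
    print(round(d, 2))  # conventionally, about 0.2 = small, 0.5 = medium, 0.8 = large
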
Ethics – distinguishing right from wrong and choosing appropriate actions over inappropriate ones. It involves abiding by established professional standards and following principles of ethics (respect autonomy, do no harm, benefit others, be just, and be faithful).
Evaluation – the analysis and use of information collected in the assessment process.
Focus Group – a group discussion intentionally designed to generate in-depth discussion around a specific topic. These groups are typically led by trained moderators using questions developed prior to the session. The intent of focus groups is to examine feelings, perceptions, attitudes, and ideas.
Formative – assessment designed to provide useful information during the conduct of a program, process, or learning experience so that changes can be made as the program/experience proceeds.
Generalizable – applicable to a larger population.
Goal – the end result. A goal makes an element of the mission statement more tangible, but it is still broad enough that there may be a number of steps or ways to achieve it.
Indirect Evidence – evidence consisting of proxy signs that students are probably learning; it is less clear and less convincing than direct evidence.
Institutional Review Board (IRB) – the group responsible for reviewing and certifying studies involving human subjects. The IRB provides the policies and guidelines that protect human subjects. Review by an IRB is typically required when findings will be shared beyond the campus community.
Learning Outcomes – statements of what students will be able to do, know, or believe as a result of participating in a learning activity which could be a class, a project, an educational program, or an individual interaction.
Mapping – refers to identifying linkages between mission and goals at each level.
Mean – the arithmetic average, obtained by summing the values and dividing by the number of observations.
Measure or Assessment Measure – an assessment measure is a data source or tool used to indicate outcome attainment. While it is desirable to use multiple assessment measures over different points of time, each outcome must have at least one assessment measure.
Measures – instruments, devices, or methods that provide data on the quantity or quality of the independent or dependent variables.
Median – the middle value in a rank-ordered set of observations.
Method – the approach taken for data collection – qualitative, quantitative, or mixed design.
Methodology – the epistemological approach to how data will be gathered.
Mission – a statement that clarifies the purpose of an organization. A mission statement can be at the institutional, divisional, departmental, or programmatic level.
Mixed Method – combination of qualitative and quantitative methodologies in an assessment project.
Mode – the most frequently observed value in a set of observations.
Objective – the intended effect of a service or intervention, more specific than a goal.
Operational Outcomes – statements of what an office should accomplish.
Outcome – a statement of the intended result of an intervention or intentional experience; it describes what students should know, understand, or be able to do because of their involvement in the experience.
Population – an entire group of individuals who share the same characteristics (Creswell, 2009, p. 646). Often written as "N".
Practical Significance – indicates whether the difference is large enough to be of value in a practical sense.
Pre-post test/assessment – administering the same assessment before and after a program, service, training, etc.
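
A minimal Python sketch (hypothetical paired scores) of summarizing change from a pre-post assessment:

    # Illustrative sketch: average change between paired pre- and post-program scores.
    from statistics import mean

    pre  = [2, 3, 3, 4, 2]  # each position is the same student's score before the program
    post = [4, 4, 3, 5, 3]  # and after the program

    changes = [after - before for before, after in zip(pre, post)]
    print(mean(changes))  # average gain per student
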
Program Evaluation – any process or activities designed to determine whether a program has achieved its stated objectives and intended outcomes; evaluation implies a judgment of merit and effectiveness.
Program Outcomes – statements of what a program should accomplish.
Program Review – a term generally used to describe an institutionally mandated process of systematically studying units to determine effectiveness, contribution to institutional mission and goals, and fiscal viability, often for the purpose of resource allocation and strategic planning or decision-making. At Cornell, Program Review includes an internal SWOT analysis and review and an external review in which two to three national experts visit campus to examine the program and make observations and recommendations. The entire review process takes about 18 months. The review cycle is 5-8 years.
Qualitative – analysis used to tell a story or demonstrate key themes; it relies on detailed descriptions of people, events, situations, interactions, and observed behaviors.
Quantitative – data collection that assigns numbers to objects, events, or observations according to some rule. Generally analyzed using descriptive and inferential statistics.
Random Sampling – a process used in research to draw a sample of a population strictly by chance, yielding no discernible pattern beyond chance.
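
For illustration, a short Python sketch (hypothetical student IDs) of drawing a simple random sample by chance alone:

    # Illustrative sketch: a simple random sample of 25 students drawn without replacement.
    import random

    population = [f"student_{n}" for n in range(1, 501)]  # population size N = 500
    sample = random.sample(population, k=25)              # sample size n = 25
    print(sample[:5])  # first five randomly selected IDs
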
Reliability – consistency of a set of measurements; the extent to which they measure the same thing over repeated administrations.
Representative Sample – a sample in which the participants closely match the characteristics of the population, so that all segments of the population are represented in the sample.
Research – involves the collection of information for the purpose of gaining knowledge, developing theory, or testing concepts and constructs.
Rigor – the qualities that make a study strong; the degree of trustworthiness.
Rubric – an established set of criteria by which information is being measured, categorized, or evaluated.
Sampling – The manner in which participants are selected. There are various types – probability, which allows you to make inferences about a population, and non-probability, which does not allow you to make inferences to a larger population.
Self-Study – an internal assessment used to evaluate programs including quality and effectiveness in reference to established criteria.
Statistic – a numerical quantity (e.g., the mean) calculated from a sample, often used to estimate a population parameter. "Statistics" also refers to a range of techniques and procedures for analyzing, interpreting, and displaying data and making decisions based on data.
Strategic Plan – the purpose of a strategic plan is to align an organization with its environment. The planning process should involve all levels of the organization and can take a year or more to complete. Generally, strategic plans are three to five years in duration. Components of a strategic plan include:

  • Mission of the organization
  • Vision
  • Values
  • Goals: high-level concepts that relate to the mission
  • Priorities: set specific direction for goals
  • Outcomes or objectives: a set of measurable, realistic outcomes that collectively support goal attainment
  • Evidence toward outcomes/objectives: description of how the outcomes/objectives will be achieved, including resource requirements

Summative – assessment designed to provide useful information at the culmination of a program, process, or student’s learning experience.
Survey – method of collecting information from people about their characteristics, behaviors, attitudes, or perceptions. Most often surveys are questionnaires or structured interviews with a specific set of questions.
Transferability – the extent to which the results can be transferred to, or are applicable in, other settings.
Triangulation – when two (or more) different methodologies or sources of data are used to interpret or explain a phenomenon.
Trustworthy – built on credibility, transferability, and dependability of a study. Provides evidence that the assessor developed an assessment that was credible, dependable, and could be repeated with similar results.
Validity – the extent to which an instrument measures what it is supposed to measure; includes construct, criterion, and content validity.


References

  • Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. Los Angeles: Sage.
  • Free Social Science Dictionary. (2008). Socialsciencedictionary.com.
  • Dictionary.com
  • Glossary A-Z. Education.com.
  • Glossary. Institutional Review Board. Colorado College.
  • Glossary of Key Terms. Writing@CSU. Colorado State University.
  • Glossary of Research Terms. Research Mindedness Virtual Learning Resource. Centre for Human Service Technology. University of Southampton.
  • Jupp, V. (2006). The SAGE dictionary of social and cultural research methods. London: Sage.
  • Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.
  • Schuh, J. H., & Upcraft, M. L. (1996). Assessment in student affairs: A guide for practitioners. San Francisco: Jossey-Bass.
  • Schuh, J. H., Upcraft, M. L., & Associates. (2001). Assessment practice in student affairs: An applications manual. San Francisco: Jossey-Bass.
  • Timm, D., Davis Barham, J., McKinney, K., & Knerr, A. (2013). Assessment in practice: A companion guide to the ASK Standards. Washington, DC: American College Personnel Association.