
Section 3: Assessment in Practice: Administrative & Special Programs Assessment

Introduction & Overview

Assessment of administrative & special programs is the process that examines an area’s success in achieving outcomes and institutional priorities. The results of this examination are incorporated into the area’s institutional 6-year review. The purpose of an institutional review for any area is to answer the following questions:

  • How are we doing from an overarching, operational, college-wide perspective?
  • Are we an efficient and effective organization?

 

Who participates in Administrative Assessment?

This category includes administrative areas or special programs such as the Learning Centers, WDCE, or the MC Library.

 

Organizing & Completing Assessment Activities

Areas at the College differ in how they choose to organize and complete their assessment functions. The organization of assessment activities often depends on several factors: the size of the area, the number of services offered, and the ratio of full-time to part-time faculty or staff. The style of leadership within each area may also be a determining factor.

Below is the most common approach that areas use when completing assessment activities:

  • In many areas, a coordinator or director acts as the lead for assessment activities and will often form workgroups to organize and complete reports for any major assessment requirements. Depending on the leader and the makeup of the area, these workgroups and their leads are determined either by the Dean or Director of the given area or by their designee (Chair, coordinator, etc.).

  • The area lead(s) for assessment will be responsible for the following activities:
    • Submit assessment plans for determining outcome or goal achievement levels.
    • Consult with faculty or staff to formulate a data collection plan.
    • Submit the Year-3 CAR updates form to the Office of Assessment for review, feedback, and approval.
    • Organize and oversee the data collection process.
    • Organize a workgroup or meetings with faculty or staff to discuss and reflect on the data indicating the levels of achievement on specified outcomes or goals.
    • In conjunction with all workgroups, prepare and submit all required reports, to include:
      • Year-3 CAR Recommendations Update
      • Year-6 College Area Review (CAR) Report

  • Regardless of the organizational approach used, the following apply:
    • The area leaders are responsible for providing support and guidance for area faculty or staff (as needed) throughout the assessment process and may play an active role in assessment activities (when applicable for that area). Deans or Directors are responsible for ensuring that any assessment requirements for their areas are completed and submitted by the required due dates.

 

Examining Goals/Outcomes & Alignment with the College Mission

Administrative areas should consistently review their goals and ensure that these goals are aligned with the current MC strategic themes. During the CAR process, areas are required to list their unit’s goals and provide the status of any initiatives taken to support the theme(s), where applicable. Discussion among area members should take place that considers what instruments or methods will be used to assess these initiatives for effectiveness.

Creating Your Assessment Plan

Your Assessment Instrument

Assessment instruments represent the tools that will be used to collect data that reflect a program or area’s performance in achieving related outcomes or goals. These may take many forms and depend on the area services, course material, and the goals or outcomes being measured. Examples include, but are not limited to, the following:

  • Student work samples (where applicable): written exams, oral exams, practical exams/exercises, papers, course assignments, lab exercises, capstone projects, high impact practices, etc.
  • Surveys of students and/or staff
  • A data set from specific work products that reflect the area’s mission or goals.

 

Benchmarking

Benchmarking refers to the process by which a discipline or program sets a standard of measurement (or benchmark) for evaluating or comparing performance during and after the assessment and reassessment process. For instance, what percentage of students is expected to attain proficiency in a given competency, or possibly exceed expectations? Did performance increase for a specific area after a new system or new technologies were implemented? How many students utilized a particular tool, service, or opportunity? (These are just a few examples of performance measurement – approaches may differ depending on the specific outcomes/goals being measured).

Please see the quick tips below regarding benchmarking:

  • Benchmarks should reflect a combination of the high standards associated with your area or program, tempered with a realistic expectation of performance. For instance, what level of satisfaction is acceptable for users of your service or program? What do you realistically expect to see in the results?

  • Setting your initial benchmarks is like forming a hypothesis about the level of performance you expect to see, based on the information you have - your assessment will test that hypothesis and show how that performance actually compares to your expectations.
  • Should every benchmark be set at the same level? Not necessarily. Performance may be expected to vary for different reasons across any administrative area or across multiple outcomes or goals. For instance, if students in an instructional area are known to struggle with writing skills, then the benchmarks set for the levels of that outcome might not be as high as the expected levels for other outcomes. For administrative areas, there may be an outcome or goal that is new, or a service that is still being developed - the performance expectation for this service might not initially be as high as the expected benchmark for other, more established services. In contrast, if you know that students perform very well in a certain area, or that a service or tool typically exceeds expectations, then this specific benchmark might be set at a higher level than the others.
  • Purposely setting benchmarks low in an effort to make the actual results look good only hurts your program/area in the end – don’t try to manipulate the results beforehand. The purpose of assessment is to discover the actual level of performance in order to identify strengths and/or weaknesses that can be acted upon (i.e., improve areas that may need support or innovation and to refortify areas that are really successful).
  • The faculty and staff who make up the program or area being assessed set the benchmarks - a benchmark discussion should be initiated that gathers input from all applicable program/area personnel and reflects both desired and expected levels of performance. The results of this discussion should act as the basis for the final benchmark levels.

It’s also important to remember that benchmarks are not set in stone and can be revised, if necessary, over time (this is usually done as part of the Year 3 CAR recommendation updates or during the CAR process itself). Revisions can also take place by contacting the Office of Assessment.

Please see “Assessment in Practice” below for an example of Administrative Assessment benchmarking.

Assessment in Practice

Benchmarking in Administrative Assessment

An administrative area at the College provides support services to students. Staff members in this area are required to perform assessment for the goals associated with the services that they provide. They are trying to determine the benchmarks that they will use to assess the performance outcomes for their services.

  • The area members create a rubric for each goal that their area is meant to achieve and the specific outcomes that support that goal based on their goals & outcomes map (see “Examining Goals/Outcomes and Alignment with the College Mission” above.)
    • For example, one of the goals of this area is to support student academic success – what services demonstrate possible outcomes for this goal?

  • They create a list of services to be assessed that reflect/fulfill these goals & outcomes.
    • The staff list a tutoring service that is aimed at improving student performance for underperforming students which supports the student success goal above.

  • Three levels of performance measures are created that will be used to assess the services listed:
    • Exceeds expectations
    • Meets expectations
    • Fails to meet expectations

  • They consider the following questions as they discuss the benchmarks for each outcome:
    • What does it mean for the service to “exceed expectations” with respect to the outcome/goal? What does it mean to “fail to meet expectations?” This is the act of determining how to measure these levels.
    • How do we think each service is performing?
    • Now, decide on a percentage range of performance that identifies these criteria.
      • For example, the staff members plan to conduct a survey of underperforming students who have accessed their tutoring service, asking whether the students’ grades have increased as a result of the tutoring. The staff members expect to find that the service “meets expectations” for 75% of the students who use it (the student has reached a passing grade), “exceeds expectations” for 15% (the student has raised their grade above a passing level), and “fails to meet expectations” for 10% (the student has shown no improvement).
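The tally behind a benchmark check like the one above can be sketched in a few lines of code. This is a hypothetical illustration, not an official tool - the category names, response counts, and percentages below are all invented for the example:

```python
from collections import Counter

# Each survey response is coded by the outcome observed for that student.
# Counts are illustrative only (100 responses total).
responses = (
    ["exceeds"] * 18    # raised their grade above a passing level
    + ["meets"] * 70    # reached a passing grade
    + ["fails"] * 12    # showed no improvement
)

# Expected percentages from the benchmark discussion above.
benchmarks = {"exceeds": 15.0, "meets": 75.0, "fails": 10.0}

counts = Counter(responses)
total = len(responses)

for level, expected_pct in benchmarks.items():
    actual_pct = 100 * counts[level] / total
    # "fails" is a ceiling (at most the expected %); the others are floors.
    met = actual_pct <= expected_pct if level == "fails" else actual_pct >= expected_pct
    print(f"{level}: {actual_pct:.0f}% (benchmark {expected_pct:.0f}%) -> {'met' if met else 'not met'}")
```

Note that "fails to meet expectations" is treated as a ceiling rather than a floor, since a result above that benchmark would signal a problem rather than success.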

 

Your Administrative Assessment Plan

After completing the activities above, everyone in your area should have discussed the services that reflect the outcomes for your area goals, created performance measures to assess your services, and created benchmarks to reflect these measures. You are now ready to put these pieces together to finalize your administrative assessment plan.

The worksheet below represents an example of how goals, outcomes, instruments, and benchmarks are reflected in an administrative assessment plan template:

  • Goal – Broad statements of what the office/unit wishes to accomplish.
  • Outcome – Related to your goal: what do you wish to assess? What are the intended results of your goal?
  • Tool or Measure – What instrument(s) are you using to measure your success? (e.g., surveys, interviews, focus groups, completion times, counts, etc.)
  • Benchmark – What is the identified/determined minimum result, target, criterion, or value that will represent success for achieving this outcome?

Example row 1 (Financial Area):
  • Goal: Provide tools for acquiring materials needed for conducting college business
  • Outcome: Increase the number of eligible staff using the P-card
  • Tool or Measure: The number of staff using the P-card currently and the number after the initiative
  • Benchmark: Increase the number by at least five people

Example row 2 (Admissions):
  • Goal: Provide a process for students to apply to the college
  • Outcome: Reduce the number of steps involved in the admissions process for potential students
  • Tool or Measure: The current number of steps and the number of steps after revision
  • Benchmark: Reduce the number of steps by at least four
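As a hypothetical sketch (the structure, field names, and numbers below are assumptions for illustration, not part of any official template), the plan rows above could be captured as simple records so that collected data can later be checked against each benchmark:

```python
from dataclasses import dataclass

@dataclass
class PlanRow:
    goal: str
    outcome: str
    measure: str
    benchmark_change: int  # minimum required change; negative = required reduction

# The two example rows from the worksheet above.
plan = [
    PlanRow(
        goal="Provide tools for acquiring materials needed for conducting college business",
        outcome="Increase the number of eligible staff using the P-card",
        measure="P-card users before and after the initiative",
        benchmark_change=5,    # increase by at least five people
    ),
    PlanRow(
        goal="Provide a process for students to apply to the college",
        outcome="Reduce the number of steps in the admissions process",
        measure="Number of steps before and after revision",
        benchmark_change=-4,   # reduce the number of steps by at least four
    ),
]

def benchmark_met(before: int, after: int, required_change: int) -> bool:
    """A positive required_change is a floor on growth; a negative one
    demands a reduction of at least that magnitude."""
    change = after - before
    if required_change >= 0:
        return change >= required_change
    return change <= required_change
```

For instance, if the number of P-card users went from 40 before the initiative to 47 after it, the change of +7 would satisfy the "+5 or more" benchmark in the first row.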

Next step: After the plan is implemented and data are collected, the data will be analyzed to determine the performance levels for the outcomes. There should be a discussion of these results among area members, which should then prompt the creation of planned actions/next steps to address outcome areas that may need support or improvement.

Once these discussions are completed, this information should be reflected in your area’s CAR report in year 6 by submitting the comprehensive table (the plan above and the two additional columns below) in the assessment section of the report:

  • Results or Findings – What did you find from using your assessment measurement? Did it meet your standards of performance or given benchmarks?
  • Next Steps – What are your planned actions based on the results? Using what you found, what will you do next? Craft recommendation(s) for next steps.

To help facilitate the administrative assessment process, a worksheet template has been created to assist with group discussion, unit assessment, or committee goals for completing the CAR assessment section. A copy of this worksheet has been placed below. If you would like an editable copy of this worksheet template, please access the Assessment Repository in Blackboard (once in the Repository, click on the folder on the main menu to your left titled “Assessment Resources, Forms, Toolkits, & Templates”).

 

Goals and Outcomes

Goals – General statements about what an office or area wishes to accomplish; they align with the key purpose and function of the area.

Outcomes – The intended results of a plan or action. Through examining outcomes, areas can answer the question: what was the impact of the program or service?

Process Outcomes – During administrative assessment, areas can develop process outcomes, which examine the steps or components of a service or program in order to improve the experience for users.

Learning Outcomes – Learning outcomes assessment focuses on specific outcomes based on what a student was expected to learn as a result of participating in a program.

Evidence Gathering & Analysis

Evidence gathering for administrative assessment should involve the collection of data that reflects your assessment methods/instruments discussed above for each goal. The results derived from these data are then analyzed and reported in the College Area Review (CAR) Report in Year 6.

Data Analysis: Making Sense of the Results

For many of us, the idea of combing through data sets can be a little overwhelming, and many ask for guidance on what it means to “analyze the data.” First, it’s important to remember the basic purpose of data collection in assessment activities: to get an idea of how something is performing. As an educational institution, we are interested in learning whether our students, a course, a program, or the College as a whole is achieving certain learning outcomes or goals. You already know the information you’re collecting to gauge performance (see “Your Assessment Instrument” above) and the benchmarks that represent the levels of that performance (see “Benchmarking” above), so now it’s time to look at what the data are telling you about that performance.

As you and your colleagues go through the data, be sure to make detailed notes (percentages, etc.) to aid in the discussion of how your students are performing. This will also help provide information for the assessment section of your CAR report and your discussions for recommendations/planned actions for improvement.

Please see the following steps below as a guide for analyzing your results.

Examining the data

Step 1:
Take a look at the performance results from a broad level: overall, what do the benchmark results look like? How did your program or area perform as a whole? Were the overall benchmarks met in each of the goal categories? By how much? Are there any red flags that stand out with the overall results?

Step 2:
Now begin looking at the performance results within each specific area, outcome, or goal that was measured. Is there a specific area where performance excelled? Is there an area that indicated unsatisfactory results or where improvement/innovation would be beneficial to meeting your program or area’s goals?

Step 3:
Next, compare the results from this performance assessment to the prior performance assessment. What changes are present? Did attainment of an outcome or goal show signs of improvement? Are there any indications of weakness or decline in meeting a particular outcome or goal?

Step 4:
Finally, based on the data summary provided to you, examine the data according to demographic information and any specific information that you might have requested for your data analysis. Try to discover any obvious performance divisions, as well as nuances in the data (meaningful differences that might be subtle, but important). For instance, try to determine if there are any gaps that need to be addressed, and/or where performance may be exceeding expectations.

Final Step:
Compile your notes and share/discuss the results with colleagues. This analysis will provide the basis for any upcoming assessment reports and for the planned actions that your group will create to address performance levels to aid in student success and for meeting institutional goals. Everyone involved in the area should be informed of the data results and should also be involved in any discussions for future planned actions/recommendations.
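Steps 1 through 3 above can be sketched as a small summary script. This is a hypothetical illustration only - the outcome names, percentages, and benchmarks below are invented, and a real analysis would draw on your area's actual data summary:

```python
# Current-cycle and prior-cycle results: percentage of cases meeting each
# outcome, keyed by (invented) outcome name.
current = {"advising wait time": 82.0, "workshop satisfaction": 91.0, "intake accuracy": 68.0}
prior = {"advising wait time": 75.0, "workshop satisfaction": 92.0, "intake accuracy": 71.0}
benchmarks = {"advising wait time": 80.0, "workshop satisfaction": 85.0, "intake accuracy": 75.0}

# Step 1: the broad view -- how many benchmarks were met overall?
met = [name for name, pct in current.items() if pct >= benchmarks[name]]
print(f"{len(met)} of {len(current)} benchmarks met overall")

# Steps 2 & 3: per-outcome detail, plus the change from the prior cycle.
for name, pct in current.items():
    change = pct - prior[name]
    flag = "MET" if pct >= benchmarks[name] else "BELOW benchmark"
    print(f"{name}: {pct:.0f}% ({flag}, {change:+.0f} pts vs. prior cycle)")
```

Notes like these (which benchmarks were met, by how much performance moved between cycles, and where it fell short) feed directly into the discussion of results and the planned actions described in the final step.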

 

Recommendations & Planned Actions for Improvement

Creating Recommendations/Planned Actions for Improvement

Planned actions are actions created by faculty or staff members to address any weaknesses or gaps identified in the attainment of a particular outcome or goal. These actions should be specific, measurable, and clearly defined. They should also be timely, executed within a reasonable time frame (e.g., starting the next semester).

Sharing Results & Tracking Improvement

The Important Concept of “Closing the Loop”

While some may associate the assessment process with data collection and writing reports, these activities represent only half of the process of assessing student learning. One of the most important aspects of assessment is the expectation that results garnered from the process will be interpreted and used to initiate positive, measurable change for improvement. This critical final step of analyzing results and implementing action to improve student learning is known as “closing the loop” of the assessment/reassessment cycle. Closing the loop is reflected in the Year-3 Recommendations Updates and as part of the College Area Review (CAR). These positive, measurable changes are represented in assessment reports as “planned actions” or “recommendations.” These actions are formulated through discussion by faculty, staff, and administrators and are intended to be action plans for immediate implementation. Feedback on proposed actions is then provided by assessment committees and administrative leaders.

The final step in “Closing the Loop” is to review the results (data collected) after these actions have been implemented to determine what impact they have had on student success, thus initiating the assessment cycle (or loop) once again for improvement.

[Graphic: The Important Concept of Closing the Loop – a repeating cycle: Create a Plan → Collect Data → Analyze Results → Take Action (implement changes based on your findings).]

Reporting for Administrative Assessment & Special Programs

The reporting requirements for the assessment of administrative and special programs consist of providing CAR recommendation updates in Year 3 and completing the CAR report in Year 6. CAR recommendation updates are reviewed by the Office of Assessment and the CAR report is reviewed by the College Area Review Committee. Therefore, assessment reporting requirements consist of:

  • Year-3 CAR Recommendation Updates
    • Due Date: October 1st (at the end of the 3rd year in the area or program’s designated cycle)
  • Year-6 College Area Review (CAR) Report
    • Due Date: October 1st (at the end of the 6th year in the area or program’s designated cycle)

*For a copy of the Year-3 CAR Recommendation Update Form, please access the Assessment Repository in Blackboard (once in the Repository, click on the folder on the main menu to your left titled “Assessment Resources, Forms, Toolkits, & Templates”). For more information on the requirements for the CAR report, please see the section of the handbook titled “The College Area Review.”