Section 3: Assessment in Practice: Academic Program Outcomes Assessment
Introduction & Overview
Program outcomes assessment includes the important steps of defining program learning outcomes and then collecting evidence about them. This process leads to improvement or validation of student-focused best practices. Program faculty reflect on the evidence collected, draw conclusions about what the evidence means, develop a plan to improve student performance, and then continue to collect evidence to determine the effect of the changes.
Faculty involved in assessment need to be made fully aware of all aspects of the data-gathering and outcomes assessment reflection plan before the semester starts so that they can plan their course(s) without mid-semester adjustments to accommodate the assessment process.
This is a circular flow graphic with one-way arrows between the sections below:
- Define or update SLOs
- Collect student data
- Reflect & draw conclusions
- Plan for improvement
- Collect student data
- Determine effect of changes (this area feeds back into the 'Define or update SLOs' section)
Who participates in Program Outcomes Assessment?
College programs, whether they are academic or administrative, are required to perform assessment of their program outcomes, goals or initiatives.
- Every academic program that leads to a certificate or degree must report student learning outcomes data and plans for improvement and/or growth to support student success and completion.
- Special programs and academic areas that do not award a degree or certificate are also responsible for submitting a program outcomes assessment as part of both the Year-3 Integrated Report and the College Area Review (CAR).
- Special Programs are programs designed to support student success and learning that further the mission and vision of the college.
A faculty-based workgroup is convened to complete the reflection document. The makeup of this faculty workgroup is determined by the Dean and Chair and can include staff or administrator(s) if deemed necessary.
Student Learning Outcomes (SLOs)
The first step in the assessment process is to identify, clarify, and in some cases create new program outcomes that are specific and measurable. Current program outcomes can be found in the MC Catalog. If clarification or updates are needed, they should be addressed through the curriculum process prior to the CAR, during year 5 of the assessment cycle.
For each measurement (outcome), program faculty will set satisfactory performance criteria. In some cases, faculty will need to establish rubrics that describe acceptable student performance. Program faculty will determine a benchmark (an expected level of outcome achievement) that will be used to verify the level of achievement for a specific outcome.
Organizing & Completing Assessment Activities
Assessment at MC is intended to be a faculty-driven process. Academic and special programs can choose to organize and complete their assessment functions based on several factors: the size of the discipline and the number of courses offered, the ratio of full-time to part-time faculty, and the leadership styles of the program’s Coordinator(s), Chair(s), and Dean.
Below is the most common approach that programs use when completing the yearly reports in the program assessment cycle.
- In many areas it is the program coordinator or the Chair’s designee who will act as the lead for program outcomes assessment activities and will often form a faculty workgroup to gather and analyze data and complete reports for their program. Depending on the Dean and the makeup of the program’s discipline, these workgroups and the discipline leads will be determined by either the Dean or the Dean’s designee (Chair, coordinator, etc.).
- The program leads for outcomes assessment will be responsible for the following activities:
- Prepare curriculum mapping, program assessment plans, and capstone course assignments/rubrics designed to determine student achievement levels, and submit the completed form(s) every 6 years as part of the CAR (or as part of a curriculum action).
- Consult with program faculty to formulate a data collection plan to score and upload assessment results during years 1, 2, 4 & 5 of their respective assessment cycle.
- Submit the completed data collection plan(s) for all programs to the Collegewide Assessment Team for review, feedback, and approval.
- Organize and oversee the data collection process.
- Set up regular meetings every semester to discuss progress and assessment results. Discussion from these meetings will inform the Year-3 Integrated Report and the CAR Report.
- In conjunction with all workgroups, prepare and submit all required reports, to include:
- Annual Data Collection / Assessment updates
- Year-3 Integrated report
- Year-6 Program Outcomes Assessment (incorporated into the CAR report)
- Regardless of the program’s approach to assessment, the following applies:
- The Chairs are responsible for providing support and guidance for faculty members (as needed) throughout the assessment process and may, in conjunction with faculty members, assist with the assessment activities (when applicable).
- The Deans are responsible for ensuring that any assessment requirements for their areas are completed and submitted by the required due dates.
Curriculum Mapping: Choosing Courses that Best Support your Outcomes
By mapping required course outcomes to program outcomes, noting which topics are introduced, reinforced, and applied throughout the curriculum, faculty will begin to see what knowledge, skills and attitudes are currently taught. This information will be the basis for a discussion among faculty as they work to define what they want the program to achieve.
Curriculum map example – list each course by name and number (e.g., BIOL 110) against the program learning outcomes:

| Course | Learning Outcome 1: Level | Learning Outcome 1: Assessment | Learning Outcome 2: Level | Learning Outcome 2: Assessment | Learning Outcome 3: Level | Learning Outcome 3: Assessment | Learning Outcome 4: Level | Learning Outcome 4: Assessment |
|---|---|---|---|---|---|---|---|---|
| BIOL 101 | I | P | I | E | I | PO | I | O |
| BIOL 295 | M | P | R | E | M | PO | R | O |
Choosing appropriate course(s) in the program curriculum (where students are expected to demonstrate their achievement of the most advanced levels of program outcomes) provides a way to measure student performance against program expectations. A method of assessment that can measure acceptable performance for a student majoring in the program will provide evidence that can be collated, reported, and discussed by faculty as part of the Program Outcomes Assessment report (See: Capstone Assignments, next section).
The goal of the MC assessment process is to produce evidence that the faculty and administration will find credible, enlightening, and applicable to decisions that need to be made. By gathering direct evidence of students’ learning, a program can see and react to trends by making adjustments and improvements.
By gathering student outcome achievement data regularly and merging it with demographic data such as gender, age, race, and even success in or completion of prior coursework, the faculty workgroup will gain important insights that can help to identify areas of concern. The next step is to determine and implement strategies for addressing those concerns.
For more information on curriculum mapping, please visit the Assessment Repository resources in Blackboard.
FROM AALHE's 9 Principles of Good Practice for Assessing Student Learning
"The point of assessment is not to gather data and return 'results', it is a process
that starts with the questions of decision-makers (faculty) that involves them in
the gathering and interpreting of data, and that informs and helps guide continuous
improvement."
Graphic shows a series of interlocking gears containing the following three gear titles:
- Faculty Decision Makers
- Collect and Reflect
- Plan for Continuous Improvement
Goals, Student Insight, Data
Valuable data points are available once assessment scoring takes place. By linking student scores to M numbers, numerical data are generated that not only include information on student performance, but also demographic information. This information can help the workgroup consider the implications of possible outside factors on scores and should lead to strategies and plans to improve specific outcomes for specific groups of students.
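As an illustration only – the column names and values below are hypothetical, and MC’s assessment system performs the real linkage – the following sketch shows how scores keyed to M numbers might be joined to demographic records and summarized:

```python
import pandas as pd

# Hypothetical outcome scores, keyed by student M number.
scores = pd.DataFrame({
    "m_number": ["M001", "M002", "M003", "M004"],
    "outcome":  ["LO1",  "LO1",  "LO1",  "LO1"],
    "score":    [88, 72, 95, 81],
})

# Hypothetical demographic extract, keyed by the same M numbers.
demographics = pd.DataFrame({
    "m_number": ["M001", "M002", "M003", "M004"],
    "age_band": ["18-22", "23-29", "18-22", "23-29"],
})

# Join on M number so each score carries its demographic context,
# then compare average performance across groups.
linked = scores.merge(demographics, on="m_number", how="left")
print(linked.groupby(["outcome", "age_band"])["score"].mean())
```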
Special programs and academic areas that do not award a degree or certificate can enter scores and access integrated data for courses, special projects, and/or enrollment trends. These Programs are expected to reflect on the relationship between program goals and student success data, and to make recommendations for improvement during the Program Assessment portion of the CAR.
Student Feedback
The College Area Review (CAR) requires a channel for student feedback to be considered by faculty as part of that review. Some type of student survey or feedback mechanism should be part of your assessment planning. Polling students on attitudes and challenges related to collegewide support and achievement – such as academic planning (SAP), program advising, transfer plans, financial aid, and program improvements – could lead to necessary, and sometimes simple, changes in policy or strategy that help students navigate the college and improve instruction, enrollment, and/or graduation rates.
A survey or feedback channel is also a source for the topics covered in the Year 3 Integrated Report:
- Discuss how your Program Outcomes assessment activities (planned data collection, data results, any new proposed actions, etc.) will relate to EACH of the following goals:
  - Improvement/achievement of your discipline/program goals
  - Achievement of MC’s 2025 goals
- Briefly discuss the actions that your program will take to improve retention and enrollment
Creating Your Program Outcomes Assessment Plan
Your Assessment Instrument
Assessment instruments are the tools that will be used to collect data that reflect student performance. These may take many forms and depend on the discipline, course material, and the competencies or outcomes being measured. Examples include, but are not limited to, the following: written exams, oral exams, practical exams/exercises, papers, course assignments, lab exercises, capstone projects, high impact practices, etc.
Questions to consider when choosing an assessment instrument:
- Does the assignment or exercise truly reflect student performance for the specific outcome you want to collect data for? (Validity)
- How can it be measured? (Scoring Scale? Correct/incorrect answer? Rubric with defined parameters? etc.)
The key to a successful implementation of a Program Outcomes Assessment Plan is clear and timely communication with all participants. This could include faculty (FT & PT) who will be teaching the program’s capstone course(s) and the courses that support it, or staff and administrators in a special program. The Program Assessment Plan determines “what” will be assessed and the benchmarks that indicate the degree of success achieved by students.
Benchmarking
Benchmarking refers to the process by which a discipline or program sets a standard of measurement (or benchmark) for evaluating or comparing student performance during and after the assessment and reassessment process. For instance, what percentage of students is expected to attain expected outcomes, or exceed expectations? What percentage of students may fall below standards of success?
Please see the quick tips below regarding benchmarking:
- Benchmarks should reflect a combination of the high academic standards of your discipline, tempered with a realistic expectation of success among your students. A discipline’s faculty may want 100% of their students to place in the “advanced” performance level for quantitative analysis, but while that benchmark might be attainable for a few programs, it would be unrealistic for most. Setting your initial benchmarks is like forming a hypothesis about student performance based on the information you have - your assessment will test that hypothesis and let you know how your students are actually performing.
- If students in your discipline are known to struggle with writing skills, then the expected benchmarks that your discipline sets across the “advanced” and “proficient” levels of this competency might not be as high as the expected percentages in your other competencies. On the other hand, if your students normally excel at oral presentations, then the oral communication benchmark might be set a little higher. Benchmarks should blend the academic rigor of your courses with expected student performance levels.
- Purposely setting benchmarks low in an effort to make the results look good only hurts your students in the end – don’t try to manipulate the results beforehand. The purpose of assessment is to discover how students are actually performing; an important part of this is finding out where the strengths and weaknesses of that performance may be in order to act on those results to improve some areas to better support student learning, refortify areas that really shine, and let your students show how successful they can be.
- Faculty who are teaching the courses to be assessed know their students the best - a discussion should be initiated around benchmarks that reflect both desired and expected levels of student performance.
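To make the benchmark arithmetic concrete, here is a minimal sketch (the scores, cut score, and benchmark below are made up for illustration) of checking whether the share of students meeting a cut score reaches the benchmark:

```python
# Hypothetical example: the benchmark says at least 75% of students
# should earn 85 or more points on the capstone project.
scores = [92, 78, 85, 88, 70, 95, 83, 90, 86, 91]
cut_score = 85
benchmark = 0.75  # expected proportion meeting the cut score

met = sum(1 for s in scores if s >= cut_score)
proportion = met / len(scores)

print(f"{met}/{len(scores)} students ({proportion:.0%}) met the cut score")
print("Benchmark met" if proportion >= benchmark else "Benchmark not met")
```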
Evidence Gathering
Evidence gathering for academic programs at MC consists of scoring student performance levels on program outcomes. The results derived from these data are reported in both the Year-3 Integrated Reflections Report and the College Area Review (CAR) Report.
Creating Your Data Collection Plan
The process of gathering (or collecting) program outcomes “data” refers to the actual collection of student assignments or projects to be “scored” in accordance with your discipline’s program outcomes assessment plan. This collection of student performance across each competency takes place during years 1-2 and years 4-5 of your discipline’s assessment cycle and follows the data collection plan created by your discipline. Once the data are collected, student performance is “scored” (evaluated) and the results are uploaded to the MC assessment system for analysis (see “Scoring & Uploading Your Collected Data,” next section).
For example, certain disciplines might only collect data on an annual basis and might score only a random sample of the student assessments because of the sheer number of students in their program. Smaller disciplines/programs with fewer students in their key program courses might instead collect data every semester, or every time a capstone course runs, to ensure that enough scoring takes place to reflect upon and to determine possible plans for improvement.
Since faculty are the experts for their specific courses and requirements, it is up to the discipline coordinator and the faculty members teaching in the discipline to determine the plan for collecting data within years 1-2 and 4-5 of their designated assessment schedule. When creating your data collection plan, you should:
- Examine when and how the assessment instrument is assigned (semesters, campuses, modality, etc.) in order to capture a diverse and rich sample of student performance for each program outcome across different settings if possible.
- Ensure that data collection encompasses an appropriate number of course sections and that the number of students per section is suitable for an assessment sample (see sampling below.)
- Take into consideration the faculty members who may be teaching multiple sections of a given course and weigh any potential effects that might be created from additional time requirements/duties regarding scoring responsibilities.
- Guarantee that the appropriate data will be collected, scored, and available for analysis/discussion in order to meet the requirements and deadlines for completing the Year-3 Program Outcomes Assessment Reflection Report and the 6-Year Reflection Report (incorporated into the CAR).
How much data do we need to collect?
*In addition to the information below, please also refer to the data chart in Appendix C for guidance on how much data may be appropriate for your program.
The amount of data to collect depends on the size and number of sections and students completing the program. Some common data collection questions/answers have been provided below for guidance; however, when in doubt, please contact the Office of Assessment for assistance in determining the appropriate amount of data for your discipline.
Do we have to collect/score data during every year of the 2-year collection periods?
Quick Answer:
*No. Although data are required to be collected/scored during each of these 2-year periods, determining when to actually do the scoring during these periods is decided by the individual discipline. The requirement is that the appropriate data are collected and assessed prior to each reporting period (before Year 3 and again before Year 6) when data analysis and the writing of reflections take place.
Flexibility is provided to disciplines within these 2-year periods to determine the most useful schedule to meet this requirement. *However, if your discipline happens to have a small number of course sections and the enrollment numbers within these sections are small, data collection/scoring may be required each year. For example, if your discipline’s upper-level course(s) where the assessment instrument is scored are only offered during one semester each year and they have fewer than 30 students enrolled, you would want to collect/score all students during both years of each 2-year data collection cycle to ensure that you have an adequate amount of student work to assess for your reflection reports in Years 3 & 6.
Do we need to assess every student in every section?
Quick Answer:
It depends on the number of sections and students enrolled in the course(s) for which collection/scoring is taking place. For instance, if your discipline only offers 2 sections of a course with approximately 25 students per section during your planned collection year, then you may need to assess all students within both sections. On the other hand, if 15 sections of the course(s) where the assessment instrument is scored are offered during your collection year with approximately 25 students per section, your discipline could actually collect a “sample” of student work from across all 15 sections (please see “sampling” below.)
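As one possible way to draw such a sample – a sketch only, with hypothetical section rosters and a per-section sample size your discipline would choose in consultation with the Office of Assessment – students can be selected at random from every section so that all sections are represented:

```python
import random

# Hypothetical rosters: 15 sections of 25 students each.
rosters = {f"SECT-{n:02d}": [f"S{n:02d}-{i:02d}" for i in range(1, 26)]
           for n in range(1, 16)}

random.seed(42)  # fixed seed so the draw can be reproduced

# Draw 5 students per section so every section contributes to the sample.
sample = {section: random.sample(students, 5)
          for section, students in rosters.items()}

total = sum(len(s) for s in sample.values())
print(f"Sampled {total} of {sum(len(r) for r in rosters.values())} students")
```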
Can our data collection plan be changed or revised?
Quick Answer:
*Yes (*but they should not be revised during a 2-year collection period without first obtaining approval from the Office of Assessment). Data plans are intended to reflect the most effective way to collect data for the individual discipline and should be updated when necessary. There are 3 ways that data collection plans can be changed: 1) as an intended action that is incorporated into a discipline’s Year-3 Reflection Report or CAR Report; 2) as part of the general education recertification process; or 3) with consultation/approval from the Office of Assessment.
Please see “Assessment in Practice” below for an example of a program outcomes data collection plan.
Practical Example: Program Outcomes Data Collection Plan
Here is an example of a data collection plan for a program with 5 program outcomes. Notice that the plan calls for gathering data from two separate courses (assessment instruments) to determine the levels of success on program outcomes (if your program also has Gen Ed courses, you will need to incorporate data collection for both program and Gen Ed – see Appendix B for an example.)
*The graphic above is an example, not actual data.
*Important Note: Certificate and degree programs are required to provide quick updates on their program outcomes data collection and assessment activities each year by completing the Interim Data Collection Update/Annual Report (see the section on “Reporting and Planning for the Future” for more detail). These quick reports are due by August 1st each year during Years 1-2 and 4-5 within a discipline’s defined 6-year assessment cycle.
*An editable copy of this worksheet can be accessed through the Assessment Repository in Blackboard (Once in the Repository, click on the folder on the main menu to your left titled “Assessment Resources, Forms, Toolkits, & Templates.”)
Your Comprehensive Program Assessment Plan
The Program Assessment Plan form provided can be filled in with the program name, discipline name, contact information and Dean. Each outcome is listed down the left-hand side, exactly as it is published in the catalog. The form provides examples of how to fill out these four columns of information:
- Methods of Assessment: What will the assessment instrument be? What will you use to assess the students?
- Example: An essay paper, a project, an oral presentation, a portfolio.
- Assessment Course: In what class or event will information be gathered to assess this outcome?
- Example: TVRA 260 Video Production Portfolio
- Assessment Scoring: What score or level of performance is considered to be acceptable for a student majoring in the program?
- Example: An acceptable level is for students to earn 85 out of 100 points on the project.
- Benchmark or Expected Level of Outcome Achievement: What percentage of your students do you expect to meet the criteria of success for this outcome?
- Example: We want at least 75% of the students to receive 85 points on the project.
The illustration below represents an example of how outcomes, methods of assessment, scoring and benchmarks are reflected in a program assessment plan.
Assessment Plan
| Student Learning Outcome | *Methods of Assessment | Assessment Course | Assessment Scoring | Benchmark or Expected Level of Outcome Achievement |
|---|---|---|---|---|
| Official program learning outcomes as listed in the Montgomery College catalog | What will the assessment instrument be? What will you use to assess the students? | In what class or event will information be gathered to assess this outcome? | What score or level of performance is considered to be acceptable for a student majoring in the program? | What percentage of your students do you expect to meet the criteria of success for this outcome? |
| Example: Students will be able to apply principles of evidence-based medicine to determine clinical diagnoses | Example: An essay paper | Example: CYBER 101 or 2nd Annual Juried Art Show | Example: An acceptable level is for students to earn 85 out of 100 points on the project | Example: We want at least 75% of the students to receive 85 points on the project |
| Students will be able to successfully record video and audio in studio and on location with various professional cameras, microphones, lights, and recording devices. | Evaluation of video portfolio. | TVRA 236 Radio Station Operation or TVRA 260 Video Production Portfolio | An acceptable level is for students to earn "Proficient" on the "Technical Skills" section of the Digital Media Portfolio Rubric. | We want at least 80% of the students to meet expectations and 10% of the students to exceed expectations on the "Technical Skills" section of the Digital Media Portfolio Rubric. |
Data Scoring and Analysis
Scoring & Uploading Your Collected Data
Before You Score...
Before any scoring takes place, it is strongly recommended that all participants in the scoring process meet to discuss how to interpret and evaluate the data and to learn what is expected of them. There are often differences of opinion regarding what values to assign to a student’s work, and it is imperative that all faculty members are in agreement to ensure consistency in scoring and to produce useful data. It is suggested that at least one piece of student work be scored together and discussed as a group before further scoring takes place to promote inter-rater reliability.
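As a simple, hypothetical illustration of checking consistency after such a norming session (percent agreement is just one basic measure of inter-rater reliability), two raters’ rubric levels for the same set of papers can be compared:

```python
# Hypothetical rubric levels ("P" proficient, "A" advanced, "N" novice)
# assigned by two raters to the same ten papers.
rater_a = ["P", "A", "P", "N", "A", "P", "P", "N", "A", "P"]
rater_b = ["P", "A", "N", "N", "A", "P", "A", "N", "A", "P"]

# Count the papers where both raters assigned the same level.
matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)
print(f"Raters agreed on {matches}/{len(rater_a)} papers ({agreement:.0%})")
```

A low agreement rate would suggest that the group should rescore additional samples together before scoring independently.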
Scoring Student Work for Assessment
Depending on the size of the discipline and the number of certificate and/or degree programs, a group of faculty representatives from the discipline (versus every individual instructor) may be organized to complete the final scoring and uploading of program assessment data.
Scoring (or rating) the performance of students in your program is an important evaluation process to determine how well students are successfully attaining program outcomes. The scoring process for program outcomes involves the use of predetermined measures to assess the performance level demonstrated by students on designated assignments/exercises/etc. The scoring criteria and benchmarks for student performance that are utilized during this process are determined by faculty who teach in the program and can be found in the Program Assessment Plan (see “Benchmarking” above for more information.)
While scoring and grading can be interrelated, there are important differences between these two functions. All participants involved in the scoring process should be made aware of these differences to ensure that the scoring process produces the most accurate and useful data results possible. For an explanation of how “grading” and “scoring” differ, please see “Why Grades are not Enough” on page 1 (recommended reading for all faculty involved in assessment activities).
Uploading the Data
- Once scoring is complete for a given data set, designated faculty members will enter the results into the area specified by the Office of Assessment, to be stored for immediate and/or future analysis.
- Collectively, these data provide an important holistic view of student performance on program learning outcomes across the discipline.
- Statistical analysis will be performed on the data by the Office of Assessment and/or lead faculty members, resulting in aggregated results that are NOT identified with any instructors. These results should then be provided to all faculty members teaching in the program for analysis and reflection. (Please see next section “Data Analysis: Making Sense of the Results” for more information.)
Data Analysis: Making Sense of the Results
Step 1:
Take a look at the performance results from a broad level: overall, what do the benchmark results look like? How did your students (or the course, program, etc.) perform as a whole? Was the overall benchmark of proficiency met for each of the outcomes? By how much? Are there any red flags that stand out with the overall results? If a test was used, did students perform better on certain test items than others?
Step 2:
Now begin looking at the performance results within each specific area, competency, or outcome that was measured. Is there a specific area where performance excelled? Is there an area that indicated unsatisfactory results/where students appear to be struggling?
Step 3:
Next, compare the results from this performance assessment to the prior performance assessment. What changes are present? Did a category/competency show signs of improvement? Are there any indications of weakness or decline in a particular category/competency?
Step 4:
Finally, based on the data summary provided to you, examine the data according to demographic information and any specific information that you might have requested for your summary (e.g., male vs. female, full-time students vs. part-time students, online vs. face-to-face, etc.). Try to discover any obvious performance divisions, as well as nuances in the data (meaningful differences that might be subtle, but important). For instance, try to determine if there are any performance gaps that need to be addressed, and/or where performance may be exceeding expectations.
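As a minimal sketch of Steps 1 and 4 (the column names and values are hypothetical; the actual summary from the Office of Assessment may arrive in a different layout), benchmark rates can be computed overall and then disaggregated:

```python
import pandas as pd

# Hypothetical scored-data layout: one row per student per outcome.
df = pd.DataFrame({
    "outcome":  ["LO1"] * 4 + ["LO2"] * 4,
    "modality": ["online", "online", "f2f", "f2f"] * 2,
    "met":      [1, 0, 1, 1, 0, 0, 1, 0],  # 1 = met proficiency
})

# Step 1: overall proportion meeting proficiency, per outcome.
print(df.groupby("outcome")["met"].mean())

# Step 4: the same rates disaggregated by modality, to surface gaps.
print(df.groupby(["outcome", "modality"])["met"].mean().unstack())
```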
Step 5:
Compile your notes and share/discuss the results with colleagues. This analysis will provide the basis for any upcoming assessment report summary and for the planned actions that your group will create to address performance levels to aid in student success.
Everyone teaching the course that was assessed should be informed of how students performed and be involved in any discussions for future planned actions relating to assessment.
This communication should include:
- the purpose of assessing learning outcomes
- the learning outcomes that are being assessed
- the common assessment instrument to be administered
- when the assessment is to be administered during the semester
- what students should be told about the assessment and its purpose
- the common rubric or answer key to be used in scoring the assessment
- how to enter scores into the scoring spreadsheet that will be provided
- the fact that assessment results will never be reported in a way that could reflect on the performance of an individual faculty member or student
- and of course, the results and specific actions that the discipline/program plans to implement for improvement.
Some departments have prepared and distributed a memorandum to all faculty who will be participating in the assessment that provides information on the items listed above. An example of such a memorandum is included in Appendix D: Memo: Assessment Data Collection - Beginning of semester “Heads up”
Summary Reflections & Planned Actions for Improvement
What is a Summary Reflection?
College programs, whether they are academic or administrative, are required to perform assessment of their program outcomes, goals, or initiatives as they relate to the MC mission and strategic plan. To document the important faculty discussions and planned actions that come from analysis of the data collected, the program workgroup submits a Summary Reflection form (see Step 5 of “Data Analysis: Making Sense of the Results” above.) These notes and results are used to complete this important part of the Year 3 summary report and the CAR report. Capturing this collaborative thinking is what makes assessment valuable and effective. Summary reflections are revisited in year 3 (by October 1) and year 6 (by October 1), when there are new data to compare and when it is time to reflect on the effectiveness of changes to instructional practices that were recommended as part of the previous summary.
This part of “Program Assessment” gives faculty and others involved a voice in the evaluation and analysis of the data and the ability to suggest or support new recommendations based on the analysis of data. Collegewide initiatives and goals should be part of this reflection. The reflection report includes the number of students assessed, the semester(s) that data was collected, a summary of the data including percentages and demographic results, and a comparison of results to previous data collection periods.
This summary is repeated for each program outcome. The last portion of the summary report asks you to reflect on planned actions from your most recent report (3 years ago) and determine if those recommendations should continue or be updated. There is an opportunity to add planned actions based on the most recent data. This report is then reviewed by the Collegewide Assessment Team (CAT) and feedback is provided to faculty members.
Creating Planned Actions for Improvement
Planned actions are actions created by faculty members to address any weaknesses or gaps that have been identified in student performance. These actions should be specific, measurable, and clearly defined. They should also be timely, executed promptly within a reasonable time frame (e.g., starting next semester).
VAGUE / GENERALIZED
“We plan to explore options that might increase student performance on lab reports.”
“Faculty will help students during the semester with improving their communication skills.”
SPECIFIC / MEASURABLE
“Faculty will develop and implement in-class exercises designed to increase student performance on lab reports.”
“The course ARTT 140 will include a module designed to improve communication skills in the area of arts management.”
Please see the example below for “Assessment in Practice.”
Practical Example:
Zoology faculty have analyzed the results from their capstone assignment(s) data and have discovered that their students are excelling in many of the outcomes but seem to be struggling on the fourth outcome – “…communicate scientific information through effective formal and informal writing”.
The discipline faculty have developed planned actions that they believe will support student improvement in this area. Please see the examples below:
Faculty will develop four quick interactive exercises in the classroom that aid students in improving their scientific writing skills. These exercises will be administered by all faculty beginning next semester. Two exercises will be administered to students before the midterm exam and two exercises will be administered after the midterm exam. This will help faculty assess how effective the exercises may be over the course of the semester in order to adjust the timing (if necessary) to align with specific content.
In consultation with librarians, faculty will create a library research guide for Zoology courses that students can access online. These guides will provide information and resources to aid students in developing scientific vocabulary and targeted research skills.
Faculty will set up a 1-hour session in the classroom for librarians to present information to students about conducting research online.
Sharing Results & Tracking Improvement
The Important Concept of “Closing the Loop"
While some faculty may associate the assessment process with data collection and writing reports, these activities represent only half of the process of assessing student learning. One of the most important aspects of assessment is the expectation that results garnered from the process will then be interpreted and utilized to initiate positive, measurable change for improvement. This critical final step of analyzing and implementing action to improve student learning is known as “closing the loop” of the assessment/reassessment cycle. Closing the Loop takes place at MC as part of the Year-3 Integrated Report and as part of the College Area Review (CAR). These positive, measurable changes are represented in assessment reports as “planned actions” or “recommendations.” These actions are formulated through discussion by faculty and administrators and are intended to be “action plans” for immediate implementation. Feedback on proposed actions is then provided by assessment committees and administrative leaders.
The final step in “Closing the Loop” is to review the results (data collected) after these actions have been implemented to determine what impact they have had on student success, thus initiating the assessment cycle (or loop) once again for improvement.
Implement changes based on your findings
- TAKE ACTION
- CREATE A PLAN
- COLLECT DATA
- ANALYZE RESULTS
Reporting for Program Assessment
The reporting requirements for the assessment of programs consist of annual updates during years 1, 2, 4 & 5 for each certificate, degree, and/or special program, and a report every three years that analyzes student performance on program outcomes or goals. Assessment reports are evaluated by the Collegewide Assessment Team and reviewed by the College Area Review Committee as part of a program’s CAR report in Year 6.
- Map Course Outcomes or Program Goals - Planning
- Develop Assignment and Rubric for Assessment - Planning
- Determine benchmark/Expected achievement - Planning
- Collect data - (UPDATES)
- Reflect and report - YEAR 3 REPORT
- Implement actions for improvement - (UPDATES)
- Collect data - (UPDATES)
- Reflect and report - CAR REPORT
Assessment reports for Program Assessment consist of:
- Annual Update: Interim Data Collection & Assessment Report - (Year 1, 2, 4 & 5)
- Year-3 Program Assessment Integrated Reflections Report
- Year-6 Program Assessment Reflection Report (as part of the discipline’s CAR report)
*For detailed descriptions of information required for each report, please refer to the individual report sections below. To view a template of each report, please visit the Assessment Repository on Blackboard (Once in the Repository, click on the folder on the main menu to your left titled “Assessment Resources, Forms, Toolkits, & Templates.”)
Annual Update: Interim Data Collection & Assessment Report
Due Date: August 1st at the end of the cycle year
Any discipline that has a degree or certificate program is required to complete an annual assessment update report at the end of years 1, 2, 4, and 5 in the assessment cycle (This report is not completed in Year 3 or Year 6.) These reports are due by August 1st at the end of the assessment reporting year for that specific discipline or program. These quick updates (usually under 5 minutes to complete) provide information on whether the discipline or program has collected student learning outcomes data and provides information on any other assessment activities that the discipline or program may have worked on during the previous year.
This report is intended to act as a useful tool for faculty when they engage in their Year-3 Reflection report and their College Area Review report. The information recorded in these reports is ideally “pulled forward” and can assist discipline/program faculty by providing information and context for the assessment activities that have taken place over the past 3-year or 6-year period. These reports also provide important information on assessment activities across the College for the Office of Assessment.
What Questions make up the Interim Data Collection/Assessment Update?
NOTE: most questions on this report provide a drop-down menu for quick answer selection.
To complete the update report, you will need the information listed below. NOTE: the information listed with ** below relates to collected data – if no data collection took place, the bullet points marked with ** will not apply.
- The academic assessment year for which the current report is being completed (i.e., 2021-2022, 2022-2023, etc.).
  - Remember that you are providing an update on the activities that took place over the past year (For instance, if the report is due by August 1st, 2022, then the reporting year would be 2021-2022.)
- The name of your discipline and/or program.
  - An individual report should be completed for each discipline and/or individual program unless prior approval has been granted to combine reports (data collection and other assessment activities may vary between programs in the same area and the information provided needs to be accurate and complete for each individual program.)
- Your discipline/program’s cycle year for the report (i.e., Year 1, Year 2, etc.).
  - This is the year of the cycle that your discipline/program was completing when the reported activities took place (within the overall 6-year assessment cycle). For instance, Year 1 is the year immediately following the CAR report; Year 4 is the year immediately following the Year-3 Reflection Report, etc. (See the combined schedule in Appendix A for your place in the cycle.)
- The type of update report to complete. Reports can include updates on Gen Ed assessment, program outcomes assessment, or both (depending on whether your discipline has Gen Ed courses and/or a degree or certificate program).
  - If your discipline has both Gen Ed courses and a degree/certificate, update reports for both Gen Ed assessment and program updates for each program are required by August 1st, but each report can be done separately, at different times (if desired).
- Whether your discipline/program collected data during this reporting period.
  - Remember that this report is just an update on your assessment activities – your data collection plan may not have your discipline/program scheduled for data collection (scoring of data) during the reporting year that you’re completing. If not, indicate that no data collection took place for this particular question, and move on to the other questions on the update report (you will be guided past the additional data collection questions.)
- *General data collection information (*only if data collection took place).
  - ** Total number of students assessed (overall number of students across all sections of the course/program where data collection/scoring took place). For example, you might indicate that 60 students in total were assessed if you collected data/scored students across 2 sections of a particular course with 30 students in each section, etc.
  - ** The semester(s) when data collection took place (All that may apply: Fall, Spring, Summer, etc.).
  - ** The campuses/forms of instruction where data collection took place (All that may apply: i.e., structured remote (Rockville), DL web fully online, face-to-face Germantown, etc.).
  - ** Individual courses where data collection took place (i.e., COMM 108, etc.).
- Information on any other assessment activities
  - This question provides different ways for a discipline/program to indicate what assessment activities they have been engaged in over the past year and activities for the upcoming year.
- Several options are presented, and you can choose any (or all) that may apply:
- Data collection
- Implementation of previous planned actions
- Implementation of CAR recommendations
- Other (specify/briefly explain)
    - This is an opportunity for your discipline/program to note any other assessment activities that faculty have been working on in the previous year. These noted actions can also be used later to help complete the Integrated Reflection Report in Year 3 or the CAR Report in Year 6.
- A primary contact for the report
- The person listed will act as first line contact for the Office of Assessment regarding the report that was submitted. This name does not act as a substitute for the number of people that provided input for the report – this is simply a person to reach out to, if necessary, to initiate contact.
Year-3 Program Assessment Integrated Reflections Report (For disciplines with certificate or degree programs)
Due Date: October 1st
The Year-3 Program Outcomes Integrated Reflection Report is completed during Year 3 of a program’s cycle and is due by October 1st at the end of the cycle year. A separate Year-3 Gen Ed Reflection Report is also required to be completed if a program has Gen Ed courses.
What types of questions make up the Year-3 Program Assessment Integrated Reflection Report?
To complete this report, you will need to provide the information listed below. *NOTE: the information listed with ** below relates to collected data – if data collection did not take place, the bullet points marked with ** below will not apply. It is highly recommended that you complete your answers in a Word document beforehand – you will be able to copy and paste your answers into the report, where applicable.
- The academic assessment year for which the current report is being completed (i.e., 2020-21, etc.).
- The name of your program (choose from a drop-down menu.)
- Did data collection for program outcomes take place over the last two years? (Yes/No).
- If no, the report will skip to question 4. If yes, you will need the following information:
  - ** Number of total students, courses & campuses, and type of instruction where data collection took place.
  - ** A summary of assessment results for each program outcome where data collection took place.
  - ** Describe what you have learned about your students, based on the current results of your program outcome assessment.
- Updates on the status of previous planned actions for improvement (from your last
reflections report).
- If data collection took place, you will also need the following information:
- ** Current planned actions for improvement:
- ** Indicate when newly created planned actions above will be implemented and how all faculty will be notified.
- ** Discuss how your program assessment activities (planned data collection, data results, new planned actions, etc.) relate to your program goals & MC’s goals.
- Enrollment and Program Awards updates (yes/no questions).
- Has enrollment in your program decreased by more than 20% over the last 3 years?
- Program awards: Has your program awarded at least 15 degrees/certificates in the last 3 years (at least 5 per year)?
- *Based on the above answers, you might be directed to an additional question if program viability is in question: Briefly discuss the actions that your program will take to improve retention, enrollment, and/or graduation rates.
- Upload a completed CAR Recommendations Update Form.
  - A copy of the form can be accessed on the Assessment Repository site in Blackboard (Once in the Repository, click on the folder on the main menu to your left titled “Assessment Resources, Forms, Toolkits, & Templates.”)
- Enter a primary contact for the report.
  - The person listed will act as first line contact for the Office of Assessment regarding the report that was submitted. This name does not act as a substitute for the number of people that provided input for the report – this is simply a person to reach out to, if necessary, to initiate contact.
- List all individuals who participated in the completion of the report.
Year-6 Program Assessment Reflection Report
Due Date: October 1st at the end of the cycle year as part of your College Area Review (CAR) Report
For the Year-6 Reflection Report, disciplines will complete a reflection form, which requires the information below. This form should be attached to the discipline’s CAR report regarding their discussion of student learning outcomes assessment. A copy of the form can be accessed on the Assessment Repository site in Blackboard (Once in the Repository, click on the folder on the main menu to your left titled “Assessment Resources, Forms, Toolkits, & Templates.”)
What types of assessment questions make up the Year-6 Program Assessment Reflection Report?
- Complete a Summary Reflection that includes each program outcome where data collection took place (See “What is a Summary Reflection” above.)
- Discuss and provide updates on past planned actions for improvement from your latest reflections report.
- Based on the last 2-year program outcome assessment period, discuss what your discipline has learned about your students.
- Indicate whether your discipline will be continuing any successful planned actions from the previous assessment and list/explain all new planned actions for improvement pertaining to the data results of student performance for this reflection (see “Creating Planned Actions for Improvement” above.)
- Indicate when the newly created planned actions above will be implemented and how all faculty will be notified.
- List all of the individuals who participated in the discussion for the reflection report and provide a contact person for the report.
- Enter a primary contact for the report
- The person listed will act as first line contact for the Office of Assessment regarding the report that was submitted. This name does not act as a substitute for the number of people that provided input for the report – this is simply a person to reach out to, if necessary, to initiate contact.