How-to-Guide: Using the Emergency Aid Rubric

Omari Burnside, NASPA / Student ARC Blog / January 05, 2018

The Emergency Aid Rubric is designed to help institutions assess their overall preparedness for creating or sustaining an emergency aid program and to identify areas where additional work is needed.

Rubric Structure:

The rubric is built around six pillars, each containing the requirements of an effective emergency aid effort: management, securing resources, policy implications, technology, increasing awareness, and measuring success (see the table below for a description of each area).

 

Pillar | Key Question
------ | ------------
Management | How is the emergency aid offering(s) at the institution organized and implemented?
Securing Resources | To what extent does the institution allocate and leverage multiple sources to obtain adequate funding for the emergency aid efforts?
Policy Implications | How clear are the requirements, application processes, and guidance laid out for students, faculty, and staff?
Technology | To what extent does the institution leverage systems and structures to make administering aid a more efficient process?
Increasing Awareness | How are various mechanisms used to inform students, faculty, staff, and external stakeholders about emergency aid efforts?
Measuring Success | To what extent does the institution use data to identify the students who could benefit from the aid the most or to assess the impact of the emergency aid offerings?

Completing the Self-Assessment:

Each pillar has associated criteria, which the rubric presents in the form of guiding questions, along with descriptors for each possible rating. Institutions should rate the degree to which each criterion is met on a four-point rubric scale: forming, emerging, functioning, and exemplary. The scores for the items in a pillar are then summed to produce that pillar's total.

After completing all six pillars, institutions can calculate their overall rubric score by adding the pillar totals. The highest possible score is 84. The overall rubric score provides a rating of the institution's overall preparedness for creating or sustaining an emergency aid program.
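The scoring arithmetic above can be sketched in a few lines of Python. Note the point values are an assumption: the post does not state how the four ratings map to numbers, so this sketch assumes forming = 1 through exemplary = 4 (consistent with a maximum score of 84 across 21 criteria).

```python
# Sketch of the rubric scoring described above. Assumed point mapping
# (not specified in the post): forming=1, emerging=2, functioning=3,
# exemplary=4, which is consistent with a stated maximum of 84.

POINTS = {"forming": 1, "emerging": 2, "functioning": 3, "exemplary": 4}

def pillar_score(ratings):
    """Sum the points for one pillar's criterion ratings."""
    return sum(POINTS[r] for r in ratings)

def overall_score(pillars):
    """Add the pillar totals to get the overall rubric score."""
    return sum(pillar_score(ratings) for ratings in pillars.values())

# Hypothetical ratings for two of the six pillars.
ratings = {
    "Management": ["emerging", "functioning", "functioning"],  # 2+3+3 = 8
    "Technology": ["forming", "emerging"],                     # 1+2 = 3
}
print(overall_score(ratings))  # 11
```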

 

How to Maximize the Emergency Aid Rubric:

  1. Don’t do it alone: As with other assessments, multiple perspectives improve the evaluation. To get the most out of the assessment, have several people who are currently involved, or who may become involved, in the effort participate in completing the rubric, drawn from a variety of offices/departments (e.g., representatives from financial aid, advancement, and student services). This produces a more holistic and realistic outcome.

  2. Take into account your institution’s culture and resource constraints: The rubric is not meant to be a prescriptive how-to guide; it was designed to allow some flexibility in how emergency aid programs are implemented. Many of the items in the rubric are therefore examples or suggestions of what a program could look like, and some may not be applicable or feasible at a given institution. When this occurs, institutions should first refer to the “Guiding Question” they are rating (what is the overall intent of the question?); then consider the institution’s overall vision and goals for the program; and finally choose the rating most aligned with that vision.

  3. Must pick one rating: When reviewing the rubric, institutions may see aspects of their efforts in multiple ratings (e.g., feeling a particular aspect is both ‘emerging’ and ‘functioning’). Even so, institutions must pick one rating. This is especially important for tracking progress over time, and it is where conversation between colleagues can be most helpful. A discussion centered on a question such as, “Are we more ‘Emerging’ or are we closer to ‘Functioning’?” can surface new evidence to support one rating over another. If the institution deadlocks and cannot reach consensus, it should go with the lower rating.

  4. Don’t be afraid of the extremes: The intent of the assessment is to get as accurate a picture of the effort as possible. Institutions should therefore not hesitate to rate themselves “Forming” or “Exemplary.” A rating of “Forming” simply highlights an area of critical need for the institution, not that the institution is ‘bad’ or that the issue is widespread. Conversely, a rating of “Exemplary” doesn’t mean perfect; it acknowledges that the institution has done a great deal of work in that area and that no significant additional work is needed.

  5. Develop an action plan: The Emergency Aid Rubric is only as good as the actions it leads to. Once an institution has reached consensus on its ratings, it should develop an action plan that prioritizes and sequences the work ahead. This requires that all key staff members come to a shared understanding of the critical moves the institution should make and of each person’s role in implementing the plan.