BranchED Equity Rubric for OER Evaluation

Instructions for Rubric Use

The BranchED Equity Rubric for OER is designed to be used by educator preparation program (EPP) faculty. It is organized around four broad dimensions of equity: Learner-Centered, Critical, Culturally Sustaining, and Universally Designed for Learning (UDL). These dimensions are color-coded within the document.

These four equity dimensions are broken down further into criteria, which are then measured through leveled indicators. The indicators are described using vocabulary specific to equity in education, and these definitions are essential to a reliable and consistent assessment. For this reason, Appendix A contains a linked glossary with the definitions to be used when applying the rubric. The rubric measures four levels of evidence for each criterion, ranging from Not Observed (0) to High (3).[1] Additionally, “look fors” offer examples of specific evidence to support the identification of each indicator. Screen tips for all look fors and glossary definitions may be accessed by hovering over their respective links in the rubric; clicking a link takes the user to the item in the full list of look fors or in the glossary.

After the criteria are evaluated for a dimension, an earned score (out of the total possible score) can be calculated and recorded for that dimension. Once this process is complete for all four equity dimensions, the user can add the four dimension-level earned scores to obtain an overall equity score for the resource.
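As a hypothetical illustration of this arithmetic, the sketch below sums criterion-level scores (0–3) into dimension-level earned scores and an overall equity score. The number of criteria shown per dimension is invented for the example and is not taken from the rubric itself:

```python
# Hypothetical criterion-level scores (0-3) for each dimension.
# The criteria counts here are illustrative only.
scores = {
    "Learner-Centered": [3, 2, 3],
    "Critical": [2, 2, 1],
    "Culturally Sustaining": [3, 3, 2],
    "Universally Designed for Learning": [1, 2, 2],
}

# Dimension-level earned score out of the dimension's possible total
# (each criterion contributes at most 3 points).
for dim, crits in scores.items():
    earned, possible = sum(crits), 3 * len(crits)
    print(f"{dim}: {earned}/{possible}")

# Overall equity score: the sum of the four dimension-level scores.
overall = sum(sum(crits) for crits in scores.values())
total = sum(3 * len(crits) for crits in scores.values())
print(f"Overall equity score: {overall}/{total}")
```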

We recommend that users take sufficient time to familiarize themselves with the rubric before employing it to evaluate resources. While an individual user can apply the rubric on their own, we advocate that more than one rater from the same institution evaluate selected resources, obtain individual scores, confer to reach consensus scores, and then calculate inter-rater reliability using Krippendorff’s alpha or another measure suited to ordinal data, such as intra-class correlations (ICCs), Gwet’s AC2, or the Kendall rank correlation coefficient (Kendall’s τ).[2] More information about calculating Krippendorff’s alpha is available at https://www.statisticshowto.com/krippendorffs-alpha/.
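For raters who want to compute this themselves, the following is a minimal, self-contained sketch of Krippendorff’s alpha with the ordinal distance metric. It assumes complete data (every rater scores every item) and uses only the standard library; in practice, a maintained implementation such as the Python `krippendorff` package, or other statistical software, may be preferable:

```python
from collections import Counter
from itertools import combinations

def krippendorff_alpha_ordinal(ratings):
    """Krippendorff's alpha with the ordinal distance metric.

    `ratings` is a list of units (items); each unit is the list of
    scores assigned to that item by its raters (2+ per unit; this
    simplified sketch does not handle missing values).
    """
    # Build the coincidence matrix o[(c, k)]: every ordered pair of
    # values given by different raters in a unit, weighted by 1/(m-1).
    o = Counter()
    for unit in ratings:
        m = len(unit)
        if m < 2:
            continue  # a unit with one rating is not pairable
        for a, b in combinations(range(m), 2):
            c, k = unit[a], unit[b]
            o[(c, k)] += 1.0 / (m - 1)
            o[(k, c)] += 1.0 / (m - 1)

    values = sorted({v for pair in o for v in pair})
    n_c = {c: sum(o[(c, k)] for k in values) for c in values}  # marginals
    n = sum(n_c.values())

    # Ordinal squared distance between values c and k: the sum of the
    # marginals of all observed values ranked between them (inclusive),
    # minus half the two endpoint marginals, squared.
    def delta2(c, k):
        lo, hi = min(c, k), max(c, k)
        between = sum(n_c[g] for g in values if lo <= g <= hi)
        return (between - (n_c[lo] + n_c[hi]) / 2.0) ** 2

    d_obs = sum(o[(c, k)] * delta2(c, k)
                for c in values for k in values if c != k)
    d_exp = sum(n_c[c] * n_c[k] * delta2(c, k)
                for c in values for k in values if c != k)
    if d_exp == 0:
        return 1.0  # no variation in the data; alpha is conventionally 1
    return 1.0 - (n - 1) * d_obs / d_exp

# Example: four items, each scored 0-3 by two raters.
ratings = [[3, 3], [2, 2], [1, 2], [0, 1]]
print(f"ordinal alpha: {krippendorff_alpha_ordinal(ratings):.3f}")
```

Values near 1 indicate strong agreement; values at or below 0 indicate agreement no better than chance.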

[2] Cohen’s kappa and its multi-rater extension, Fleiss’s kappa, are designed for nominal (categorical) data, and therefore would not be appropriate for the ordinal data generated through the use of this rubric unless their weighted variants are used.


[1] These levels of evidence correspond to traditional rubric performance levels to maintain the integrity of the rubric as a “pro asset-based” evaluative instrument.