I have been involved in choosing instructional materials for a long time. The first time I looked at our options for publisher-produced math materials, it seemed like we just looked at the ‘stuff’ and made recommendations based on the ‘bells and whistles,’ deciding that we could keep doing what we had been doing, just with different textbooks. Fortunately, I have since learned what to look for and, as a bonus, what a powerful professional learning experience a good review process can be!
In 2013, I was involved in the instructional materials review component of the California Instructional Materials Adoption Process and, in 2014, I helped review one publisher’s 7th and 8th grade math materials for my school district. Most recently, I have been trained in using the IMET and have used components of the tool to evaluate our 8th grade math materials, which were developed in partnership with the California Math Project at the University of California, Irvine.
As I compare my review experiences, I see strengths and similarities, but also important differences. Both review tools I have used have roots in the Publishers’ Criteria documents, which were designed by the Standards’ authors to help publishers understand the requirements of Common Core-aligned instructional materials. Users of both tools are trained using sample materials, learning to carefully keep a record of evidence of alignment or missed opportunities for alignment. While both of the tools can inform decisions about purchasing materials for classroom use, the IMET — which I’ve applied in my work with the Instructional Materials Taskforce — can also be utilized to evaluate previously purchased materials by helping users identify necessary modifications while building an understanding of what aligned materials look like. The IMET can also help in the creation of new, aligned curricula.
The differences between the review processes emerged when we considered how to conduct the review itself. Where my previous review experiences had us evaluate materials standard by standard, the IMET training focuses on the reviewer gaining a deep understanding of the instructional Shifts of the Common Core, which form the backbone of the tool itself. Reviewers are trained to identify evidence of the Shifts in materials, which is a complex task. IMET reviewing teams may work together during the actual review process, although they don’t have to. In comparison, the California process training is an overview of how to use the given materials (i.e., the Publishers’ Criteria, the California Frameworks, publisher instructional materials, and publisher-provided Criteria and Standards maps) and how to cite evidence of sufficient (or insufficient) alignment to each Standard and criterion. During the actual review, the reviewers evaluate the materials independently over several months. When the group reconvenes for the facilitated deliberations, the reviewers’ citations are presented as evidence for or against alignment.
I see both types of reviews as having value. The IMET places a stronger emphasis on the nuances of the Common Core Shifts, while the California process focuses on comparing individual standards to curricular content. Both processes are powerful ways for reviewers to learn more about the Common Core, due in large part to the attention paid to citing evidence for one’s opinions about alignment and to listening to peers’ opinions and explanations.