For generations, and certainly for the last 30 years, proponents of traditional and progressive philosophies have argued over how best to educate our children. Although this debate is often carried out in the political and academic spheres, the difficulties created by the failure to resolve the differences between the two belief systems become blatantly clear in the pedagogy of early literacy. On the one hand, traditionalists argue for a direct and explicit instructional methodology; on the other, progressives advocate for Whole Language or Balanced Literacy instruction. The classroom often becomes a battlefield as advocates of these opposing schooling paradigms struggle with each other. Differences emerge about which skills and what knowledge are the most important for students to master. Conflicts arise over which methodology is most effective in ensuring that students gain access to bodies of knowledge. The result is that the real world of classroom instruction often becomes a mish-mash of content and strategies drawn from both philosophies. Student assessments frequently contribute to the confusion because they are aligned neither with the knowledge and skills students are expected to acquire nor with the strategies teachers use. Without assessments that are tightly coupled with the underlying philosophy of an instructional program, with classroom practice, and with high-stakes summative assessments, it is extremely difficult for teachers and administrators to have confidence that they are offering their students the best possible learning opportunities.
Interim/benchmark assessments are vital tools for linking classroom instruction with year-end assessments and an essential element of any comprehensive assessment system. Currently, the Dynamic Indicators of Basic Early Literacy Skills, commonly referred to as DIBELS, is a widely used interim/benchmark assessment. It serves many districts and schools quite well. However, many progressive educators believe that the DIBELS assessment is not well aligned with a Balanced Literacy approach. In this dissertation the author examines the following essential questions about early literacy interim/benchmark assessments: (a) Is the relationship between the assessed level on the Developmental Reading Assessment (DRA), which fits within a Balanced Literacy framework, and students' performance on high-stakes accountability tests as strong as the relationship of DIBELS to these same tests? And (b) does the DRA have a degree of predictive validity comparable to DIBELS?
The study demonstrated a strong relationship between the DRA and performance on the Oregon Assessment of Knowledge and Skills (OAKS) and showed that the DRA has a degree of predictive validity comparable to DIBELS. The results support the claim that a curriculum-based measure such as the DRA can be used as a literacy screening assessment to detect potential reading difficulties. These results give support to progressive educators who wish to have a viable alternative to DIBELS.
Committee: Burk, Pat; Conrad, Susan; Isaacson, Steve; Ranker, Jason
School: Portland State University
School Location: United States -- Oregon
Source: DAI-A 71/10, Dissertation Abstracts International
Subjects: Educational tests & measurements; Education policy; Literacy; Reading instruction
Keywords: Assessment; Balanced literacy; Congruency; DRA; Interim assessment; Progressive education
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved