This study investigates the reliability and validity of an instrument designed to measure science and mathematics teachers' strategic knowledge. Strategic knowledge is conceptualized as a construct related to pedagogical knowledge that comprises two dimensions: Flexible Application (FA) and Student-Centered Instruction (SCI). The FA dimension describes how a science teacher invokes, applies, and modifies her instructional repertoire in a given teaching context. The SCI dimension describes how a science teacher conceives of a given situation as an opportunity for active engagement with students. The Flexible Application of Student-Centered Instruction (FASCI) survey instrument was designed to measure science teachers' strategic knowledge by eliciting open-ended responses to scenario-based items. This study addresses the following overarching question: What are some potential issues pertaining to the validity of measures of science and mathematics teacher knowledge? Using a validity argument framework, different sources of evidence are identified, collected, and evaluated to examine support for a set of propositions related to the intended score interpretation and instrument use: FASCI scores can be used to compare and distinguish the strategic knowledge of novice science and mathematics teachers in the evaluation of teacher education programs. Three separate but related studies are presented and discussed. These studies focus on the reliability of FASCI scores, the effect of adding specific science content to the scenario-based items, and the observation of strategic knowledge in teaching practice. Serious issues were found with the reliability of scores from the FASCI instrument. It was also found that adding science content to the scenario-based items affects FASCI scores, but not for the reason hypothesized.
Finally, it was found that more evidence is needed to make stronger claims about the relationship between FASCI scores and novice teachers' practice. In concluding this work, a set of four recommendations is presented for others engaged in similar measure development efforts. These recommendations focus on construct definition, item design and development, rater recruitment and training, and the validation process.
|Advisor:||Briggs, Derek C., Otero, Valerie K.|
|Committee:||Furtak, Erin, Pollock, Steven, Webb, David|
|School:||University of Colorado at Boulder|
|School Location:||United States -- Colorado|
|Source:||DAI-A 72/11, Dissertation Abstracts International|
|Subjects:||Mathematics education, Educational tests & measurements, Educational evaluation, Science education|
|Keywords:||Science and math teacher knowledge, Science teacher education, Score reliability, Strategic knowledge, Teacher evaluation, Teacher knowledge, Validity|
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved