Date on Master's Thesis/Doctoral Dissertation

5-2014

Document Type

Doctoral Dissertation

Degree Name

Ph.D.

Department (Legacy)

Department of Leadership, Foundations, and Human Resource Education

Committee Chair

Choi, Namok

Committee Co-Chair (if applicable)

Larson, Ann E.

Committee Member

Larson, Ann E.

Committee Member

Gaus, Donna

Committee Member

Goldstein, Robert

Subject

Teachers--Training of--Evaluation; Education--Evaluation

Abstract

Metaevaluation is the evaluation of an evaluation or evaluation system (Scriven, 1969). It serves as a mechanism to ensure quality in evaluation approaches and implementation. Operationally, metaevaluation is defined as “the process of delineating, obtaining, and applying descriptive information and judgmental information – about the utility, feasibility, propriety, and accuracy of an evaluation and its systematic nature, competent conduct, integrity/honesty, respectfulness, and social responsibility to guide the evaluation and/or report its strengths and weaknesses” (Stufflebeam, 2001, p. 185).

This study was a metaevaluation of an assessment system designed to meet accreditation requirements and support continuous improvement in teacher education programs at the University of Louisville. It was intended to serve as a formative metaevaluation identifying strengths and weaknesses in the University of Louisville, College of Education and Human Development’s (CEHD) teacher education assessment system, in order to improve the system and, in turn, better support continuous improvement of teacher education programs. The study gave careful consideration to accountability and accreditation requirements, as well as evaluation and metaevaluation standards and practices. It utilized Stufflebeam’s (2001) structure for metaevaluation, which supports strategic and contextual analysis of the evaluation or evaluation system to address alignment with stakeholders’ needs. The study employed mixed methods to address four research questions focused on the application of data from the CEHD’s assessment system in driving program improvement and on the reliability and validity of instruments used in the assessment system.

The first research question focused on identifying the types of assessments that best support program improvement in teacher education. A qualitative case study analysis revealed a lack of explicit connections to data within the CEHD’s SLO action plans, in which faculty identify plans for improving programs. Implied connections to data included references to the 10 Unit Key Assessments, Hallmark Assessment Tasks (HATs), and indirect assessment data (QMS student satisfaction survey data). These results indicate that a variety of assessments support program improvement, in alignment with the Council for the Accreditation of Educator Preparation (CAEP) standards (2013), the American Evaluation Association (2013), and the Joint Committee on Standards for Educational Evaluation (2011), all of which hold that multiple measures are necessary in sound evaluations and evaluation systems. This study resulted in recommendations to modify SLO templates and action plan prompts to ensure more explicit connections between data and action plans and to ensure follow-through on action plans.

The second research question was intended to identify how assessment data are used to drive continuous improvement in teacher education programs. The qualitative case study review of SLO action plans, and of reflections on the previous year’s plans for improvement, identified actions in the areas of curriculum, faculty development, assessments, field and clinical experiences, and candidate performance. These findings represent a notable strength of the CEHD’s assessment system, demonstrating that the system is driving continuous program improvement. One suggestion for improvement was increased documentation of follow-through on actions within the current assessment system structures.
The third research question pertained to the reliability of instruments used across programs. The analysis revealed no concerns regarding the reliability of instruments across programs. The CEHD is encouraged to incorporate continued training and collaborative sessions to dissect and practice application of the instruments to ensure reliability over time. This is especially important as programs revise instruments, assessors turn over, and assessment contexts change.

The fourth and final research question examined the construct validity of instruments in the CEHD assessment system as aligned with the CEHD’s conceptual framework. The study revealed adequate construct validity related to measuring critical thinking, problem solving, and professional leadership; however, it also revealed potential concerns regarding discriminant validity. To address these findings, it is recommended that the CEHD transition from the current 3-point rubrics used in the assessment system to 4-point rubrics, and the study outlines next steps for making that transition.

In conclusion, this study identified strengths in the reliability of instrumentation and the strategic application of data. Areas for improvement include revision of instruments to better differentiate between performance levels and outcomes in the assessment system, and revision of SLO processes and templates to ensure more explicit connections between data and decision making. Ultimately, this metaevaluation has identified the most pertinent next steps for CEHD administrators, faculty, and staff in improving the assessment system to drive continuous program improvement in alignment with the CAEP and the Kentucky Education Professional Standards Board (EPSB) accreditation processes.

Included in

Education Commons
