In the 1970s a group of educators developed the “effective schools” model. The model identified six characteristics of schools whose instructional programs were improving or declining. One of those characteristics was “frequent monitoring of student progress,” or, in the vocabulary of the day, “what gets measured, gets done.”
At that time, the effective schools model interpreted the “monitoring of student progress” as teaching models that employed a variety of feedback tools to assess student understanding. In the nineties, however, the “monitoring of student progress” morphed into “what gets tested, gets done.” The transformation of the feedback function from formative assessment to summative assessment was a direct response to business models claiming that private sector methods of continuous improvement and total quality management could be adapted to educational settings. Among those private sector tools, the use of data to drive quality services and products redefined the “frequent monitoring of student progress” as “data-driven instructional programs.”
In today’s schools, entire office suites are populated by all manner of data management specialists who turn out mountains of data on every conceivable component of a school’s educational program. You cannot attend a conference of professional administrators without the bulk of the program being devoted to gathering data, recording data, tracking data, and analyzing data. The remaining program options offer administrators a variety of managerial tools—programs, templates, plans—to implement all the data flowing into main offices.
Set aside the reality that in most central and main offices, administrators have neither the time nor the expertise to make sense of all the data pouring into their in-boxes. Even if administrators had the time and the academic background to make sense of that data, rarely, if ever, do they ask fundamental questions about what the data is measuring and about the processes that generate it.
What are we measuring?
The assumption made by school administrators and the public is that a number or set of numbers published in district handbooks, school report cards, or various media outlets reflects the achievement of the educational goals written into school mission statements or meets a national standard published by professional organizations.
A common educational goal written into mission statements and state standards, for example, is critical thinking. Among the numerous qualities embodied in this educational goal is the ability to “evaluate evidence,” to “reason from evidence,” and to “apply substantive concepts to solve open-ended problems.” Although most educators would nod their heads in agreement with these qualities, few in the same community would agree on how these qualities are defined across subject matter fields, or, more importantly, on the different meanings these qualities assume in the occupational world. The lack of agreed-upon indicators of complex student learning—critical thinking—renders meaningless all the numbers buried in central office data sources.
The inability to agree upon definitions of what we are measuring in classrooms becomes more muddled when we examine the various methods we employ to gather data:
Method #1: Standardized Tests
Although standardized tests are inexpensive, convenient, and perceived as valid indicators of student learning, researchers have found only moderate correlations between test scores and complex educational goals.
Method #2: Classroom Observations
Although detailed descriptions of classroom lessons would appear to be an accurate assessment of a classroom teaching model, researchers have been unable to determine which particular teaching techniques correlate with the educational goals listed in school mission statements. Beyond the problem of cause and effect, researchers have found that what administrators observe in classrooms says more about an administrator’s favored teaching model than about the effectiveness of the pedagogy being observed.
Method #3: Teacher Surveys
Although self-assessment of job performance—reflective thinking—is considered a worthy professional goal, researchers have found a slippage between teachers’ espoused ideas about teaching and learning and how they actually teach in classrooms.
Method #4: Testimonies
Although testimonies from teachers regarding the helpfulness of a particular policy or program may be an effective public relations technique, researchers have found large gaps between the changes teachers say they have made as a result of exposure to a new teaching model and what they are actually doing in classrooms.
Method #5: Value-Added Scores
Although the development of mathematical algorithms that profess to isolate an individual teacher’s contribution to student learning may appear to place a number on teacher effectiveness, researchers have found that value-added model (VAM) results are unstable over time, subject to bias and imprecision, and reliant solely on results from standardized tests that were not designed for that purpose.
The problem with approximations
I could continue to list the managerial tools that school administrators employ to measure student learning, judge teacher performance, or rank school quality. As the list above shows, at best these measures of school performance are weak approximations of the relationships between organizational and instructional configurations and student performance. At their worst, main offices design organizational structures and instructional regimes to generate a particular number that the public believes measures educational quality—what gets measured gets done.
What then does it mean when a school is doing well?
Given the difficulty of finding a number that would accurately quantify the approximate relationships between an instructional regime and student learning, what, then, is a good school? The answer to that question will not be found in the search for another number, algorithm, survey, or test. It will be found in how school administrators answer the questions posed below by Elliot Eisner.
I know exactly what school administrators are saying at this point: “Yes, these are certainly mission-driven questions, but how would I quantify the answers, and, realistically, how would I gather the data on these questions?” Within the margins of institutional schooling, school administrators are correct in saying they lack the managerial tools to quantify, or gather data on, the answers to any of these questions.
Becoming an Educational Connoisseur
Although these questions are unanswerable using established managerial accountability tools, they are answerable if administrators assume and develop the role of educational connoisseur. I will elaborate on this role in coming blog posts. Suffice it to say now that educational connoisseurs develop a fine sense for the subtler forms of classroom instruction. An educational connoisseur, for example, would be able to detect patterns of teaching where students are able to “formulate their own purposes,” “work in depth in domains related to their aptitudes,” or “participate in the assessment of their own work.”
As already noted, no institutional accountability tool exists to document these subtler forms of classroom instruction. What do exist, however, are venues and discourses where these forms of classroom instruction can be observed and discussed. While these observations and discussions cannot be placed in an employee’s file, they will, over time, define a school’s instructional worldview and, more importantly, become the normative model of teaching.
WHAT IS A GOOD SCHOOL?
(Eisner, E. W. (2001). What does it mean to say a school is doing well? Phi Delta Kappan, 82(5), 367.)
1. WHAT KINDS OF PROBLEMS AND ACTIVITIES DO STUDENTS ENGAGE IN?
2. WHAT IS THE INTELLECTUAL SIGNIFICANCE OF THE IDEAS THAT THEY ENCOUNTER?
3. ARE STUDENTS INTRODUCED TO MULTIPLE PERSPECTIVES?
4. WHAT CONNECTIONS ARE STUDENTS HELPED TO MAKE BETWEEN WHAT THEY STUDY IN CLASS AND THE WORLD OUTSIDE OF SCHOOL?
5. WHAT OPPORTUNITIES DO YOUNGSTERS HAVE TO BECOME LITERATE IN THE USE OF DIFFERENT REPRESENTATIONAL FORMS (i.e., the various symbol systems through which humans make meaning)?
6. WHAT OPPORTUNITIES DO STUDENTS HAVE TO FORMULATE THEIR OWN PURPOSES AND TO DESIGN WAYS TO ACHIEVE THEM?
7. WHAT OPPORTUNITIES DO STUDENTS HAVE TO WORK COOPERATIVELY TO ADDRESS PROBLEMS THAT THEY BELIEVE TO BE IMPORTANT?
8. DO STUDENTS HAVE THE OPPORTUNITY TO SERVE THE COMMUNITY IN WAYS THAT ARE NOT LIMITED TO THEIR OWN PERSONAL INTERESTS?
9. TO WHAT EXTENT ARE STUDENTS GIVEN THE OPPORTUNITY TO WORK IN DEPTH IN DOMAINS RELATED TO THEIR APTITUDES?
10. DO STUDENTS PARTICIPATE IN THE ASSESSMENT OF THEIR OWN WORK?
11. TO WHAT EXTENT ARE STUDENTS GENUINELY ENGAGED IN WHAT THEY DO IN SCHOOL?